What is DNS-over-TLS and DNS-over-HTTPS
Subverting private DNS for ad blocking
Other uses for AdGuard DNS
Going further
If you have to choose one year when you won't fly, this year, 2020, is the one to choose. Why? Because of CORSIA.
CORSIA is not a novel virus, but the "Carbon Offsetting and Reduction Scheme for International Aviation". In a nutshell, the aviation industry says it will freeze its CO2 emissions at current levels. Actually, aviation emissions are still going to grow; the airlines will just pay someone else to reduce emissions by the same amount that aviation emissions rise, hence the "Offsetting" in CORSIA. If that sounds like greenwashing, well, it pretty much is. But that was expected: getting every country and airline aboard CORSIA would not have been possible if the scheme actually had teeth. So it's pretty much a joke.
The first phase of CORSIA starts next year, so emissions are frozen at year 2020 levels. Due to certain recent events, lots of flights have already been cancelled, which means the reference-year aviation emissions are already a lot lower than the aviation industry was expecting. By avoiding flying this year, aviation emissions will be frozen at an even lower level. This will increase the cost of CO2 offsetting for airlines, and the joke will be on them.
So consider skipping business travel, and take your holiday trip this year with something other than a plane. Wouldn't recommend a cruise ship, though...
We need to turn the tables. If they want something impossible, it should be up to them to implement it.
It is simply unfair to require each online provider to implement an AI to detect copyright infringement, manage a database of copyrighted content, and pay the costs of running it all... and then get slapped with a lawsuit anyway, since copyrighted content still slips through.
The burden of implementing #uploadfilter should be on the copyright holder organizations. Implement it as a SaaS: YouTube and other web platforms call your API and pay $0.01 each time pirated content is detected. On the other side, to ensure the correctness of the filter, copyright holders have to pay any lost revenue, court costs and so on for each false positive.
Filtering uploads is still problematic. But now it's the copyright holders' problem. Instead of people blaming web companies for poor filters, it's the copyright holders who have to answer to the public for why their filters reject content that doesn't belong to them.
This ignores the reality that the majority of developers do cross-platform development every day. They develop on Macs and Windows PCs and deploy on Linux servers or mobile phones. The two biggest Linux success stories, cloud and Android, are built on cross-platform development. Yes, cross-platform development sucks. But it's just one of the many things that suck in software development.
More importantly, the ship of the "local dev environment" has long since sailed. Using Linus's other great innovation, git, developers push their code to a Microsoft server, which triggers a Rube Goldberg machine of software build, container assembly, unit tests, deployment to a test environment and so on, all on cloud servers.
Yes, the ability to easily buy a cheap whitebox PC from CompUSA was an important factor in making x86 dominate the server space. But people get cheap servers from the cloud now, and even that is going out of fashion. Services like AWS Lambda abstract the whole server away, and the instruction set becomes irrelevant. Which CPU and architecture will run these "serverless" services is not going to depend on developers having Arm Linux desktop PCs.
Of course there are still plenty of people like me who use the Linux desktop and run things locally. But in the big picture, things are only going one way: the way where it gets easier to test things in your git-based CI loop than in a local development setup.
But like Linus, I still want to see a powerful PC-like Arm NUC or laptop. One that could run a mainline Linux kernel and offer a PC-like desktop experience. Not because Arm depends on it to succeed in the server space (what it needs is out of scope for this blog post), but because PCs are useful in their own right.
...
processor       : 7
BogoMIPS        : 2.40
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 3

Or maybe like:
$ cat /proc/cpuinfo
processor       : 0
model name      : ARMv7 Processor rev 2 (v7l)
BogoMIPS        : 50.00
Features        : half thumb fastmult vfp edsp thumbee vfpv3 tls idiva idivt vfpd32 lpae
CPU implementer : 0x56
CPU architecture: 7
CPU variant     : 0x2
CPU part        : 0x584
CPU revision    : 2
...

The bits "CPU implementer" and "CPU part" could be mapped to human-understandable strings, but the kernel developers are heavily against the idea. Therefore, on to the next idea: parse in userspace. It turns out there is a common tool, installed almost everywhere, that already does similar things: lscpu(1) from util-linux. So I proposed a patch to do the ID mapping on arm/arm64 in util-linux, and it was accepted! Using lscpu from util-linux 2.32 (hopefully to be released soon), the above two systems look like:
Architecture:        aarch64
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  4
Socket(s):           2
NUMA node(s):        1
Vendor ID:           ARM
Model:               3
Model name:          Cortex-A53
Stepping:            r0p3
CPU max MHz:         1200.0000
CPU min MHz:         208.0000
BogoMIPS:            2.40
L1d cache:           unknown size
L1i cache:           unknown size
L2 cache:            unknown size
NUMA node0 CPU(s):   0-7
Flags:               fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid

And:
$ lscpu
Architecture:        armv7l
Byte Order:          Little Endian
CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  1
Core(s) per socket:  4
Socket(s):           1
Vendor ID:           Marvell
Model:               2
Model name:          PJ4B-MP
Stepping:            0x2
CPU max MHz:         1333.0000
CPU min MHz:         666.5000
BogoMIPS:            50.00
Flags:               half thumb fastmult vfp edsp thumbee vfpv3 tls idiva idivt vfpd32 lpae

As we can see, lscpu is quite versatile and can show more information than what is available in cpuinfo.
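Under the hood, the mapping lscpu does is essentially a table lookup on those ID fields. A minimal sketch of the idea in shell (the two implementer codes below are the real ARM and Marvell IDs seen above; the function name is invented for illustration):

```shell
# Toy version of the lscpu ID mapping: translate the "CPU implementer"
# field from /proc/cpuinfo into a vendor name. Only two entries shown.
implementer_name() {
    case "$1" in
        0x41) echo "ARM" ;;
        0x56) echo "Marvell" ;;
        *)    echo "unknown ($1)" ;;
    esac
}

implementer_name 0x41   # prints: ARM
implementer_name 0x56   # prints: Marvell
```

The real table in util-linux is of course much longer, and also maps the "CPU part" field to a core name such as Cortex-A53.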
$ apt-cache search cross-build-essential
crossbuild-essential-arm64 - Informational list of cross-build-essential packages for
crossbuild-essential-armel - ...
crossbuild-essential-armhf - ...
crossbuild-essential-mipsel - ...
crossbuild-essential-powerpc - ...
crossbuild-essential-ppc64el - ...

Let's have a quick exact-steps guide. But first: while you can do all this in your desktop PC rootfs, it is wiser to contain yourself. Fortunately, Debian comes with a container tool out of the box:
sudo debootstrap stretch /var/lib/container/stretch http://deb.debian.org/debian
echo "stretch_cross" | sudo tee /var/lib/container/stretch/etc/debian_chroot
sudo systemd-nspawn -D /var/lib/container/stretch

Then we set up the cross-building environment for arm64 inside the container:
# Tell dpkg we can install arm64
dpkg --add-architecture arm64
# Add a src line to make "apt-get source" work
echo "deb-src http://deb.debian.org/debian stretch main" >> /etc/apt/sources.list
apt-get update
# Install the cross-compiler and other essential build tools
apt install --no-install-recommends build-essential crossbuild-essential-arm64

Now that we have a nice build environment, let's choose something more complicated than the usual kernel/BusyBox to cross-build: qemu.
# Get qemu sources from Debian
apt-get source qemu
cd qemu-*
# New in stretch: build-dep works in an unpacked source tree
apt-get build-dep -a arm64 .
# Cross-build qemu for arm64
dpkg-buildpackage -aarm64 -j6 -b

That works perfectly for qemu. For other packages, challenges may appear. For example, you may have to set the "nocheck" flag to skip build-time unit tests, or some of the build-dependencies may not be multiarch-enabled. So work continues :)
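For instance, skipping the build-time test suite is done through the standard DEB_BUILD_OPTIONS mechanism. A sketch (the actual cross-build command is commented out here; it assumes the package's debian/rules honours "nocheck", which most do):

```shell
# Assumption: the package's debian/rules honours the standard "nocheck" option.
export DEB_BUILD_OPTIONS=nocheck
# dpkg-buildpackage -aarm64 -j6 -b   # re-run the cross-build, now without unit tests
echo "$DEB_BUILD_OPTIONS"            # prints: nocheck
```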
*How useful are virtual packages anymore? "foo-defaults" packages seem to be the go-to solution for most real use cases anyway.
fte (0.44-1) unstable; urgency=low

  * Initial release.

 -- Riku Voipio  Wed, 25 Dec 1996 20:41:34 +0200

Welp, I seem to have spent the holidays of 1996 making my first Debian package. The process of getting a package into Debian was quite straightforward then: "I have packaged fte, here is my pgp key, can I has an account to upload stuff to Debian?" I think the bureaucracy took until the second week of January before I could actually upload the created package.
uid  Riku Voipio
sig  89A7BF01 1996-12-15  Riku Voipio
sig  4CBA92D1 1997-02-24  Lars Wirzenius

A few months after joining, someone figured out that for pgp signatures to be useful, keys need to be cross-signed. Hence young me taking a long bus trip from the Finnish countryside to the capital, Helsinki, to meet the only other DD in Finland in a cafe. It would still take another two years until I met more Debian people, and it could be proven that I'm not just an alter ego of Lars ;) Much later, an alternative process of phone-calling prospective DDs was added.
sudo apt install -y qemu qemu-utils cloud-utils
wget https://releases.linaro.org/components/kernel/uefi-linaro/15.12/release/qemu64/QEMU_EFI.fd
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-arm64-uefi1.img

Cloud images are plain: there is no user setup and no default user/password combo, so to log in to the image, we need to customize it on first boot. The de facto tool for this is cloud-init. The simplest method of using cloud-init is passing a block device with a settings file; for a real cloud deployment, you would of course use one of the fancy network-based initialization protocols cloud-init supports. Enter the following into a file, say cloud.txt:
#cloud-config
users:
  - name: you
    ssh-authorized-keys:
      - ssh-rsa AAAAB3Nz....
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: sudo
    shell: /bin/bash

This minimal config just sets up a user with an ssh key. A more complex setup can install packages, write files and run arbitrary commands on first boot. In professional setups, you would most likely end up using cloud-init only to start Ansible or another configuration management tool.
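To illustrate those more complex options, a hypothetical cloud.txt that also installs a package, writes a file and runs a command on first boot might look like this (the package name, file and command here are just examples, not from the original setup; packages, write_files and runcmd are standard cloud-config keys):

```
#cloud-config
users:
  - name: you
    ssh-authorized-keys:
      - ssh-rsa AAAAB3Nz....
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: sudo
    shell: /bin/bash
packages:
  - htop
write_files:
  - path: /etc/motd
    content: "configured by cloud-init\n"
runcmd:
  - [ touch, /var/tmp/first-boot-done ]
```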
cloud-localds cloud.img cloud.txt
qemu-system-aarch64 -smp 2 -m 1024 -M virt -bios QEMU_EFI.fd -nographic \
      -device virtio-blk-device,drive=image \
      -drive if=none,id=image,file=xenial-server-cloudimg-arm64-uefi1.img \
      -device virtio-blk-device,drive=cloud \
      -drive if=none,id=cloud,file=cloud.img \
      -netdev user,id=user0 -device virtio-net-device,netdev=user0 -redir tcp:2222::22 \
      -enable-kvm -cpu host

If you are on an x86 host and want to use qemu to run an aarch64 image, replace the last line with "-cpu cortex-a57". Now, since the example uses user networking with a tcp port redirect, you can ssh into the VM:
ssh -p 2222 you@localhost
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-22-generic aarch64)
....
Since I've now been using Linux for 20 years, I've dug up some artifacts from the early journey.
* Wait no, I don't miss those times at all
# arndale
7004:telnet:0:'/dev/serial/by-path/pci-0000:00:1d.0-usb-0:1.8.1:1.0-port0':115200 8DATABITS NONE 1STOPBIT
# cubox
7005:telnet:0:/dev/serial/by-id/usb-Prolific_Technology_Inc._USB-Serial_Controller_D-if00-port0:115200 8DATABITS NONE 1STOPBIT
# sonic-screwdriver
7006:telnet:0:/dev/serial/by-id/usb-FTDI_FT230X_96Boards_Console_DAZ0KA02-if00-port0:115200 8DATABITS NONE 1STOPBIT

The by-path syntax is needed if you have many identical USB-to-serial adapters. In that case, a patch from the BTS is needed to support quoting in the serial path. Ser2net doesn't seem very actively maintained upstream; a sure sign of a stagnant project is a homepage still at sourceforge.net... This patch, among other interesting features, can also be found in various ser2net forks on GitHub.
# Local services
arndale           7004/tcp
cubox             7005/tcp
sonic-screwdriver 7006/tcp

Now finally:
telnet localhost sonic-screwdriver

Mandatory picture of a serial port connection in action
Scaleway started selling ARM-based hosted servers in April. I've intended to blog about this for a while. Since it was time to upgrade from wheezy to jessie anyway, why not switch provider from an x86-based one to an ARM one at the same time?
In many ways, a Scaleway node is the opposite of what the "Enterprise ARM" people are working on. Each server is based on an oldish ARMv7 quad-core Marvell Armada XP instead of a brand new 64-bit ARMv8 CPU. There is no UEFI, ACPI or any other "industry standard" involved, just a smooth web interface and a command line tool to manage your node(s). And the node is yours; it's not shared with others via virtualization. The picture above is a single node, which is stacked with 911 other nodes into a single rack.
This week, the C1 price was dropped to a very reasonable €2.99 per month, or €0.006 per hour.
The performance is more than enough for my needs - shell, email and light web serving. dovecot, postfix, irssi and apache2 are just an apt-get away. Anyone who says you need x86 for Linux servers is forgetting that Linux software is open source, and if not already available, can be compiled to any architecture with little effort. Thus the migration pains were only because I chose to modernize configuration of dovecot and friends. Details of the new setup shall be left for another post.
I can see two ways out. The easy way is to get IoT gadgets as a monthly paid service. Then the gadget vendor has the right incentive: instead of trying to convince me to buy their next gadget, their incentive is to keep me happily paying the monthly bill. The polar opposite is to start making open, competing IoT devices, and market to people the advantage of being in control themselves. I can see markets for both options. But halfway between is pure dystopia.
... This needs to be compiled into a .dtb. I found the easiest way was just to drop the patched .dts into an unpacked kernel tree and then run make dtbs.

There are easier ways, though. For example, you can get the current device tree generated from /proc:
apt-get install device-tree-compiler
dtc -I fs -O dts -o current.dts /proc/device-tree/

(Why /proc and not /sys? Because device tree predates /sys.) Now you can just modify the dts, build the dtb again, and install it back to wherever the bootloader reads it from:
vim current.dts
dtc -I dts -O dtb -o new.dtb current.dts

The alternative, of course, is to build a brand new mainline kernel and use the dynamic device tree code now available.
1. Install Jessie into kvm
kvm -m 2048 -drive file=lava2.img,if=virtio -cdrom debian-testing-amd64-netinst.iso

2. Install lava-server
apt-get update; apt-get install -y postgresql nfs-kernel-server apache2
apt-get install lava-server
# answer debconf questions
a2dissite 000-default && a2ensite lava-server.conf
service apache2 reload
lava-server manage createsuperuser --username default --email=foo.bar@example.com
$EDITOR /etc/lava-dispatcher/lava-dispatcher.conf # make sure LAVA_SERVER_IP is right

That's the generic setup. Now you can point your browser at the IP address of the kvm machine and log in with the default user and the password you created.
3 ... 1000

Each LAVA instance is site-customized for the boards, network, serial ports, etc. In this example, I now add a single arndale board.
cp /usr/lib/python2.7/dist-packages/lava_dispatcher/default-config/lava-dispatcher/device-types/arndale.conf /etc/lava-dispatcher/device-types/
sudo /usr/share/lava-server/add_device.py -s arndale arndale-01 -t 7001

This generates an almost usable config for the arndale. For the site specifics, I have USB-to-serial adapters. Outside kvm, I provide access to the serial ports using the following ser2net config:
7001:telnet:0:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT
7002:telnet:0:/dev/ttyUSB1:115200 8DATABITS NONE 1STOPBIT

TODO: make ser2net not run as root, and ensure usb2serial devices always get the same name.
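One way to solve the naming half of that TODO is a udev rule that pins each adapter to a stable symlink by its serial number. A sketch (the serial number and symlink name here are hypothetical, and ATTRS{serial} requires adapters that actually expose distinct serial numbers):

```
# /etc/udev/rules.d/99-usbserial.rules (hypothetical example)
SUBSYSTEM=="tty", ATTRS{serial}=="A5001234", SYMLINK+="ttyUSB-arndale"
```

Ser2net can then refer to /dev/ttyUSB-arndale regardless of enumeration order; the /dev/serial/by-id/ symlinks achieve the same thing without custom rules when the serial numbers differ.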
For automatic power reset, I wanted something cheap, yet something that wouldn't require much soldering (I'm not a real embedded engineer... I prefer the software side ;). I discussed this with Hector, who hinted at prebuilt relay boxes. I chose one from eBay, a kmtronic 8-port USB relay. So now I have this cute boxed nonsense hack.
The USB relay is driven with a short script, hard-reset-1:
stty -F /dev/ttyACM0 9600
echo -e '\xFF\x01\x00' > /dev/ttyACM0
sleep 1
echo -e '\xFF\x01\x01' > /dev/ttyACM0

Sidenote: if you don't have or want an automated power relay for LAVA, you can always replace this script with something along the lines of "mpg123 puny_human_press_the_power_button_now.mp3".
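The three bytes written above form the relay command frame: 0xFF, the relay number, and the state. A hedged generalization for driving relay N (the function name is invented; the frame layout is exactly what the script above sends for relay 1):

```shell
# Emit the 3-byte frame <0xFF><relay><state>; octal escapes for portability.
relay_cmd() {
    relay=$(printf '%03o' "$1")
    state=$(printf '%03o' "$2")
    printf "\\377\\${relay}\\${state}"
}

# Against the real device this would be used as:
#   stty -F /dev/ttyACM0 9600
#   relay_cmd 1 0 > /dev/ttyACM0; sleep 1; relay_cmd 1 1 > /dev/ttyACM0
relay_cmd 1 1 | od -An -tx1    # prints: ff 01 01
```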
Both the serial port and the reset script are on a server with the DNS name aimless. So we take the /etc/lava-dispatcher/devices/arndale-01.conf that add_device.py created and make it look like:
device_type = arndale
hostname = arndale-01
connection_command = telnet aimless 7001
hard_reset_command = slogin lava@aimless -i /etc/lava-dispatcher/id_rsa /home/lava/hard-reset-1

Since in my case I'm only going to test with tftp/nfs boot, the arndale board only needs to have a u-boot bootloader ready on power-on.
Now everything is ready for a test job. I have a locally built kernel and device tree, and I export the directory using the httpd available by default in Debian... Python!
cd out/
python -m SimpleHTTPServer

(On Python 3 systems, the equivalent is python3 -m http.server.) Go to the LAVA web server, select API -> Tokens and create a new token. Next we add the token and use it to submit a job:
$ sudo apt-get install lava-tool
$ lava-tool auth-add http://default@lava-server/RPC2/
$ lava-tool submit-job http://default@lava-server/RPC2/ lava_test.json
submitted as job id: 1
$

The first job should now be visible in the LAVA web frontend, under Scheduler -> Jobs. If everything goes fine, the relay will click in a moment and the job will finish in a few minutes.
For background, qemu/kvm supports a few ways to provide networking to guests. The default is user networking, which requires no privileges but is slow and based on the ancient SLIRP code. The other common option is tap networking, which is fast but complicated to set up. It turns out that with networkd and the qemu bridge helper, tap is easy to set up.
$ for file in /etc/systemd/network/*; do echo $file; cat $file; done
/etc/systemd/network/eth.network
[Match]
Name=eth1
[Network]
Bridge=br0
/etc/systemd/network/kvm.netdev
[NetDev]
Name=br0
Kind=bridge
/etc/systemd/network/kvm.network
[Match]
Name=br0
[Network]
DHCP=yes

Diverging from Joachim's simple example, we replaced "DHCP=yes" with "Bridge=br0" in the physical interface's .network file. We then define the bridge (in kvm.netdev) and give it an IP via dhcp in kvm.network. On the kvm side, if you haven't used the bridge helper before, you need to give the helper permissions (setuid root or cap_net_admin) to create a tap device to attach to the bridge. The helper also needs a configuration file telling it which bridge it may meddle with.
# cat > /etc/qemu/bridge.conf <<__END__
allow br0
__END__
# setcap cap_net_admin=ep /usr/lib/qemu/qemu-bridge-helper

Now we can start kvm with bridge networking as easily as with user networking:
$ kvm -m 2048 -drive file=jessie.img,if=virtio -net bridge -net nic,model=virtio -serial stdio

The manpages systemd.network(5) and systemd.netdev(5) do a great job of explaining the files. Qemu/kvm networking docs are unfortunately not as detailed.
$ wget http://releases.linaro.org/14.07/openembedded/aarch64/Image
$ wget http://releases.linaro.org/14.07/openembedded/aarch64/vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ gunzip vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt \
      -kernel Image -append 'root=/dev/vda2 rw rootwait mem=1024M console=ttyAMA0,38400n8' \
      -drive if=none,id=image,file=vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img \
      -netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[    0.000000] Linux version 3.16.0-1-linaro-vexpress64 (buildslave@x86-64-07) (gcc version 4.8.3 20140401 (prerelease) (crosstool-NG linaro-1.13.1-4.8-2014.04 - Linaro GCC 4.8-2014.04) ) #1ubuntu1~ci+140726114341 SMP PREEMPT Sat Jul 26 11:44:27 UTC 20
[    0.000000] CPU: AArch64 Processor [411fd070] revision 0
...
root@genericarmv8:~#

Quick benchmarking with the age-old ByteMark nbench:
| Index   | Qemu  | Foundation | Host   |
|---------|-------|------------|--------|
| Memory  | 4.294 | 0.712      | 44.534 |
| Integer | 6.270 | 0.686      | 41.983 |
| Float   | 1.463 | 1.065      | 59.528 |

Baseline (LINUX): AMD K6/233*
$ sudo apt-get install qemu-system-arm
$ wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-arm64-disk1.img
$ wget https://cloud-images.ubuntu.com/trusty/current/unpacked/trusty-server-cloudimg-arm64-vmlinuz-generic
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt -kernel trusty-server-cloudimg-arm64-vmlinuz-generic \
      -append 'root=/dev/vda1 rw rootwait mem=1024M console=ttyAMA0,38400n8 init=/usr/lib/cloud-init/uncloud-init ds=nocloud ubuntu-pass=randomstring' \
      -drive if=none,id=image,file=trusty-server-cloudimg-arm64-disk1.img \
      -netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[    0.000000] Linux version 3.13.0-32-generic (buildd@beebe) (gcc version 4.8.2 (Ubuntu/Linaro 4.8.2-19ubuntu1) ) #57-Ubuntu SMP Tue Jul 15 03:52:14 UTC 2014 (Ubuntu 3.13.0-32.57-generic 3.13.11.4)
[    0.000000] CPU: AArch64 Processor [411fd070] revision 0
...
-snip-
...
ubuntu@ubuntu:~$ cat /proc/cpuinfo
Processor       : AArch64 Processor rev 0 (aarch64)
processor       : 0
Features        : fp asimd evtstrm
CPU implementer : 0x41
CPU architecture: AArch64
CPU variant     : 0x1
CPU part        : 0xd07
CPU revision    : 0
Hardware        : linux,dummy-virt
ubuntu@ubuntu:~$

The "init=/usr/lib/cloud-init/uncloud-init ds=nocloud ubuntu-pass=randomstring" part is Ubuntu cloud-image stuff that sets the ubuntu user's password to "randomstring". Don't use "randomstring" literally there if you are connected to the internet...
For a more detailed writeup on using qemu-system-aarch64, check out Alex Bennee's excellent post.
The speed increase provided by the MV78460 can be seen by comparing build times on selected builds since early April:
We can now build qemu in 2 hours instead of 16 hours, 8x faster than before! Certainly a substantial improvement, impressive kit from Marvell! But not all packages gain this amount of speedup:
This example, webkitgtk, builds barely 3x faster. The explanation is found in the debian/rules of webkitgtk:
# Parallel builds are unstable, see #714072 and #722520
# ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
#   NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
#   MAKEARGUMENTS += -j$(NUMJOBS)
# endif

The old builders are single-core[1], so regardless of parallel building, you can easily max out the CPU. The new builders will use only 1 of 4 cores without parallel build support in debian/rules.
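The commented-out make logic above does, in effect, the following (a shell rendition of the same parsing, not the actual debian/rules code):

```shell
# Extract the job count from the "parallel=N" token in DEB_BUILD_OPTIONS,
# the same way the (disabled) make snippet would.
DEB_BUILD_OPTIONS="parallel=4 nocheck"
NUMJOBS=$(echo "$DEB_BUILD_OPTIONS" | tr ' ' '\n' | sed -n 's/^parallel=//p')
echo "make -j${NUMJOBS}"    # prints: make -j4
```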
In this buildd CPU usage graph, we see that most of the time only one CPU is in use. So for fast package build times... make sure your package supports parallel building.
For developers, abel.debian.org is a porter machine with an Armada XP. It has schroots for both armel and armhf. Set "DEB_BUILD_OPTIONS=parallel=4" and off you go.
Finally I'd like to thank Thomas Petazzoni, Maen Suleiman, Hector Oron, Steve McIntyre, Adam Conrad and Jon Ward for making the upgrade happen.
Meanwhile, we have had unrelated trouble: a bunch of disks broke within a few days of each other. I guess the warranty had just run out...
[1] Only from Linux's point of view. The mv78200 actually has 2 cores, just not SMP or cache-coherent. You could run an RTOS on one core while running Linux on the other.