Wayback Proxy Lets Your Browser Party Like It’s 1999

This project is a few years old, but it might be appropriate to cover it late since [richardg867]’s Wayback Proxy is, quite literally, timeless.

It does, more or less, what it says on the tin: it is an HTTP proxy that retrieves pages from the Internet Archive’s Wayback Machine, or from the Oocities archive of old Geocities sites. (Remember Geocities?) It is meant to sit on a Raspberry Pi or similar SBC between you and the modern internet, and a line in a config file lets you specify the exact date. We found this via a YouTube video by [The Science Elf] (embedded below, for those of you who don’t despise YouTube) in which he attaches a small screen and dial to his Pi to create what he calls the “Internet Time Machine” using the Wayback Proxy. (Sadly [The Science Elf] did not see fit to share his work, but it would not be difficult to recreate the Python script that edits config.json.)
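For the curious, that glue script might be only a few lines. Here’s a guess at the mechanics, a minimal sketch in which the “date” key and the YYYYMMDD format are assumptions about Wayback Proxy’s actual config schema:

```python
# Hypothetical recreation of the dial-to-date glue: read Wayback Proxy's
# config.json, set the target date, and write it back. The "date" key and
# the YYYYMMDD format are assumptions; check the project README for the
# real schema.
import json

def set_wayback_date(date: str, path: str = "config.json") -> None:
    with open(path) as f:
        config = json.load(f)
    config["date"] = date  # e.g. "19990101" to party like it's 1999
    with open(path, "w") as f:
        json.dump(config, f, indent=2)

set_wayback_date("19990101")
```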

What’s the point? Well, if you have a retro computer from the late 90s or early 2000s, you’re missing out on a key part of the vintage experience without access to the vintage internet. This was the era when desktops were advertised as made to get you “Online”. Using Wayback Proxy lets you relive those halcyon days, or live them for the first time, for the younger set. At least, it lets you relive the parts of the old internet that could be archived, which sadly isn’t everything. Still, for a nostalgia trip, or a living history exhibit to show the kids? It sounds delightful.

Of course, it is possible to hit up the modern web on a retro PC (or on a Mac Plus), as long as you’re not caught up in an internet outage, as this author recently was.

Continue reading “Wayback Proxy Lets Your Browser Party Like It’s 1999”

A Quick Introduction To TCP Congestion Control

It’s hard to imagine now, but in the mid-1980s, the Internet came close to collapsing due to the number of users congesting its networks. Computers would send packets as quickly as they could, and when a router dropped a packet under load, the transmitting computer would immediately send it again, adding to the very congestion that caused the drop. This tended to result in an unintentional denial-of-service, and was degrading performance significantly. [Navek]’s recent video goes over TCP congestion control, the solution to this problem, which allows our much larger modern internet to work.

In a 1988 paper, Van Jacobson described a method to restrain congestion: in a TCP connection, each side of the exchange estimates how much data it can have in transit (sent, but not yet acknowledged) at any given time. The sender and receiver exchange their estimates, and the smaller of the two becomes the effective window. Every time a full window’s worth of packets is successfully delivered, the window doubles in size (the “slow start” phase), so a new connection ramps up quickly.

Once packets start dropping, the sender halves the size of the window, then slowly and linearly ramps it back up until packets start dropping again. This is called additive increase/multiplicative decrease (AIMD), and the overall result is that the window hovers somewhere around the actual capacity of the connection: any time congestion starts to occur, the computers back off. One way to visualize this is to look at a graph of download speed, where the process of periodically hitting and cutting back from the congestion limit traces out a sawtooth wave.
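The whole dance fits in a few lines of simulation. Here’s a toy sketch, emphatically not real TCP, just slow start plus AIMD against a fixed link capacity, to show where the sawtooth comes from:

```python
# Toy model of slow start + AIMD: not real TCP, just enough to produce
# the sawtooth. The link "drops a packet" whenever the congestion window
# exceeds its capacity.
CAPACITY = 100  # maximum segments the link can carry in flight

def simulate(rounds: int = 40) -> list[float]:
    cwnd, ssthresh = 1.0, float("inf")
    history = []
    for _ in range(rounds):
        if cwnd > CAPACITY:       # loss detected: multiplicative decrease
            ssthresh = cwnd / 2
            cwnd = ssthresh
        elif cwnd < ssthresh:     # slow start: double every round trip
            cwnd *= 2
        else:                     # congestion avoidance: additive increase
            cwnd += 1
        history.append(cwnd)
    return history

print(simulate())  # ramps up fast, then saws between roughly 50 and 100
```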

[Navek] notes that this algorithm has rather harsh behavior, and that there are new algorithms that both recover faster from hitting the congestion limit and take longer to reach it. The overall concept, though, remains in widespread use.

If you’re interested in reading more, we’ve previously covered network congestion control in more detail. We’ve also covered [Navek]’s previous video on IPv5. Continue reading “A Quick Introduction To TCP Congestion Control”

Remembering The ISP That David Bowie Ran For Eight Years

The seeds of the Internet were first sown in the late 1960s, with computers laced together in continent-spanning networks to aid in national defence. However, it was in the late 1990s that the end-user explosion took place, as everyday people flocked online in droves.

Many astute individuals saw the potential at the time, and rushed to establish their own ISPs to capitalize on the burgeoning market. Amongst them was a famous figure of some repute. David Bowie might have been best known for his cast of rock-and-roll characters and number one singles, but he was also an internet entrepreneur who got in on the ground floor—with BowieNet.

Continue reading “Remembering The ISP That David Bowie Ran For Eight Years”

Which Browser Should I Use In 2025?

Over the history of the Web, we have seen several major shifts in browsing software. If you’re old enough to have used NCSA Mosaic or any of the other early browsers, you probably welcomed the arrival of Netscape Navigator, and rued its decline in the face of Internet Explorer. As Mozilla and then Firefox rose from Netscape’s corpse, Microsoft’s domination seemed inevitable, but then along came Safari, and then Chrome.

For a glorious while there was genuine competition between browser heavyweights, but over the last decade we’ve arrived at a point where Chrome, and the Google domination that comes with it, is the only game in town. Other players are small, and the people behind Firefox seem hell-bent on fleeing to the Dark Side, so where should we turn? Is there a privacy-centric, open source browser that follows web standards and doesn’t come with any unfortunate baggage? It’s time to find out. Continue reading “Which Browser Should I Use In 2025?”

BritCSS: Write CSS With British English Spellings

Everyone knows that there is only one proper English, with the rest being mere derivatives that bastardize the spelling and grammar. Despite this, the hoodlums who staged a violent uprising against British rule in the American colonies have somehow made their uncouth dialect dominant in the information technologies that have taken the world by storm these past decades. In this urgent mission to restore the King’s English to its rightful place, we fortunately have patriotic British citizens who have taken it upon themselves to correct this grave injustice. Brave citizens such as [Declan Chidlow], whose BritCSS project is a bright beacon in these harrowing times.

Implemented as a simple, 14 kB JavaScript script to be included in an HTML page, it allows one to write CSS files using proper spelling, such as background-colour and centre. Meanwhile, harsh language such as !important is replaced with the more pleasant !if-you-would-be-so-kind. It is expected that although for now this script has to be included on each page to use BritCSS, native support will soon be implemented in every browser, superseding the US dialect version. [Declan] has also been recommended for the Order of the British Empire for his outstanding services.
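Usage is presumably as civilised as the spelling. A sketch of what a page might look like (the britcss.js filename is an assumption, and whether it rewrites linked stylesheets, inline styles, or both is down to the script; consult the project’s README for the actual include):

```html
<!-- Sketch of the intended usage; the britcss.js filename is an assumption,
     so check the project's README for the real include. -->
<script src="britcss.js"></script>
<style>
  .banner {
    background-colour: grey;               /* translated to background-color */
    text-align: centre;                    /* translated to center */
    colour: navy !if-you-would-be-so-kind; /* translated to !important */
  }
</style>
```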

Unhacked Mattress Phones Home

[Dylan] has a fancy bed that can be set to any temperature. Apparently it set him back about $2,000, it only works if it has an Internet connection, and it wants $19 a month for anything beyond the basic features. Unsurprisingly, [Dylan] decided to try to hack the mattress firmware and share what he learned with us.

As it turns out, it was easy to simply ask the update URL for the firmware and download it. Inside, there turned out to be a mechanism for “eng@eightsleep.com” to remotely SSH into any bed and, well, do just about anything. You may wonder why anyone would want to gain control of your bed. But a device sitting on your network could be a perfect place from which to launch an attack on that network and beyond.

They can also figure out when you sleep, whether or not you sleep alone, and, of course, when no one is in the bed at all. But if those things bother you, maybe don’t get an Internet-connected bed.

Oddly enough, the last time we saw a bed hack, it was from [Dillan], not [Dylan]. Just because you don’t want Big Sleep to know when you are in bed doesn’t mean that data isn’t useful for your own private purposes.

Trap Naughty Web Crawlers In Digestive Juices With Nepenthes

In the olden days of the WWW, you could just put a robots.txt file in the root of your website, and crawling bots from search engines and kin would (generally) respect the rules in it. These days, however, web crawlers, especially those from large language model (LLM) companies, happily ignore such signs on the lawn before proceeding to hoover up every scrap of content on websites. Naturally this makes a lot of people very angry, but what can you do about it? The answer from [Aaron B] is Nepenthes, described on the project page as a ‘tar pit for catching web crawlers’.

More commonly known as ‘pitcher plants’, Nepenthes is a genus of carnivorous plants that use a fluid-filled cup to trap insects and small critters unfortunate enough to slip and slide down into it. In the case of this Lua-based project, the idea is roughly the same. Configured as a trap behind a web server (e.g. at /nepenthes), the tar pit presents any web crawler that accesses it with an endless number of randomly generated pages, each full of URLs to follow. Page generation is deliberately quite slow, so that it doesn’t soak up significant CPU time while still giving the LLM scrapers plenty of random nonsense to chew on.
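Deployment amounts to pointing a path at the Nepenthes daemon. A minimal sketch of an Nginx stanza, in which the upstream port is an assumption (check the project’s documentation for the real defaults):

```nginx
# Hypothetical reverse-proxy stanza: anything that wanders into /nepenthes
# gets handed to the tar pit and its endless maze of generated pages.
# The upstream port is an assumption; consult the Nepenthes docs.
location /nepenthes {
    proxy_pass http://127.0.0.1:8893;
}
```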

Considering that these web crawlers have deemed adhering to the friendly sign on the lawn to be beneath them, the least we can do in response is to hasten model collapse by feeding the LLM scrapers whatever rolls out of a simple (optionally Markov-based) text generator.
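The underlying idea fits in surprisingly little code. Here’s a toy sketch in Python, emphatically not Nepenthes itself (which is Lua), with simple word salad standing in for the Markov-chain text: every request slowly returns a page of nonsense plus fresh random links for the crawler to follow forever.

```python
# Toy tar pit: each GET slowly returns random nonsense plus links to more
# randomly named pages, so a misbehaving crawler never runs out of URLs.
# Word salad stands in for the Markov-chain generator a real tar pit uses.
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = "the of a crawler archive window packet lawn sign nonsense tar pit".split()

def babble(n: int = 60) -> str:
    return " ".join(random.choice(WORDS) for _ in range(n))

class TarPit(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(2)  # deliberately slow: waste the crawler's time cheaply
        links = " ".join(
            f'<a href="/{random.getrandbits(32):08x}">more</a>' for _ in range(10)
        )
        body = f"<html><body><p>{babble()}</p>{links}</body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TarPit).serve_forever()
```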