Sunday, November 29, 2020

Modern console, old protocol

Not much, just a note that someone put a Gopher client on the Nintendo Switch. It may well be the first homebrew browser for that platform. Seems fitting.

Thursday, November 26, 2020

Not a refurb weekend: Apple IIgs

The most advanced Apple II computer was, of course, the one Apple insisted would be a dead end: the 1986-1992 Apple IIGS. And the reason was simply that the GS was too good a computer, offering Apple II compatibility, 16-bit performance, a 4096-colour palette with up to 640x200 high resolution graphics, and one of the best sound chips since the SID in the Commodore 64 (in fact, Bob Yannes designed the SID, then went on to found Ensoniq, which produced the GS's sound chip). It was so good, in fact, that it posed a real risk during its development of cannibalizing sales from the Macintosh. Apple thus made sure it couldn't be a threat by deliberately hobbling the 16-bit 65816 CPU to just 2.8MHz when it was capable of up to 14MHz at minimal additional cost, and intentionally never did more with the system publicly than token ROM upgrades and pathetic bumps in memory. It takes some truly inspired buttheadery to hold back a fabulous system in that fashion, but that's what Apple was capable of in those days, and arguably still is.

This particular computer in my collection — and yes, that's a Canon Cat next to it, a topic for another day — is a Frankenstein assembled from three partially working units. It has a ROM 03 motherboard (the last revision not counting the 8MHz "Mark Twain" prototype) but with a Woz Limited Edition topcase just 'cuz, a 1MB Apple RAM expansion card (for 2.125MB of memory), an original 7MHz Applied Engineering TransWarp GS CPU accelerator with the 8K cache daughtercard, and a 20MB Applied Ingenuity InnerDrive consisting of a SCSI card plus a unified drive enclosure and replacement power supply. It is connected to an ADB keyboard and mouse, a LocalTalk network box, an AppleColor RGB display, an Apple joystick and 3.5" and 5.25" floppy drives, and boots into GS/OS 6.0.1.

A few months ago after I got LocalTalk up and running and was transferring some other things to it, it started to get flaky, and would invariably crash into the Apple machine language monitor after sitting at the GS/OS Finder for a few minutes. This machine was certainly produced in the "crap caps" era and bad capacitors are in the differential diagnosis, but with so many things installed a more systematic approach would be required. Today it's the Thanksgiving long weekend during a pandemic, and I've got only dayjob work and take-out dinners to look forward to for the next several days. Sounds like an opportunity for another ... Refurb Weekend!

First, let's pop the hood and check for obvious damage. I blew out some dust with an air can, but everything looked pretty shipshape otherwise.

Looking overhead you can see the combination drive cage-power supply for the AI InnerDrive plus its ribbon cable to the controller card. Between the controller card and the enclosure is the TransWarp GS, and the Apple RAM expander card is furthest away.
Closeups of the InnerDrive and TransWarp GS cards.
Closeup of the Apple RAM Expansion card (670-0025-A).

To check for board damage, you also need to remove the power supply, since a couple of large capacitors sit under it, as does (inconveniently, given the larger size of the AI combo cage and PSU) the PRAM battery.

Clean as a whistle. I even got out the multimeter and checked the PRAM battery, and it still reads a full 3.65V.

I'm not a fan of prophylactically replacing capacitors that are not obviously bad, especially since you have a non-zero chance of accidentally breaking something in the process of fixing what ain't broke to begin with. At this point I resigned myself to having to test components independently: remove or disconnect something, fire it up and see if it crashes. I was strongly suspecting either the TransWarp card or the RAM expander, since those would certainly cause random failures. For that matter, it might even be heat related. Anyway, to get a baseline, I went ahead and disconnected the LocalTalk network, the joystick and the external drives, rebooted it and waited for it to crash.

After an hour it was still cheerfully sitting at the Finder. I fired up a game of Crystal Quest GS (ported by Becky Heineman, who also wrote the firmware for the InnerDrive and had very nice things to say about TenFourFox). No crash. I fired up Wolfenstein 3D (ported by Logicware, which Heineman co-founded). No crash. I played some Tunnels of Armageddon, one of my favourite GS games (no known connection to Heineman). Still no crash.

I reconnected the joystick and let it sit for a few minutes. The joystick doesn't work too good, but it didn't make anything crash.

I reconnected the disk drives (best done with the power off, just in case) and let it sit some more. The machine had gotten good and warm with all this burning-in, so if heat was a factor, it should have been by then. No crash.

I reconnected the LocalTalk box and let it sit a couple more minutes. Boom, straight into the monitor while it was sitting idle.

I powered it off, disconnected the LocalTalk box, rebooted and let it sit a couple more minutes. No boom, no monitor, normal operation. The LocalTalk network was crashing GS/OS. And, actually, this makes sense, because the last thing I was doing with it before it got "wacky" was copying stuff from the network. It had never been connected to the house LocalTalk segment before because it never needed to be.

So what's crashing it, and why doesn't it do so right away? My guess is that the delay is because something on my household AppleTalk network transmits a packet intermittently that GS/OS's stack doesn't like. The segment itself couldn't be the cause because there were no other LocalTalk hosts up (the 486 and the Colour Classic are both connected to the segment, but neither were on). A Dayna EtherPrint-T bridges the LocalTalk segment to the main Ethernet via EtherTalk; on the other end of the Dayna is the little Macintosh IIci running old Netatalk which services old hosts, this Raptor Talos II running modern Netatalk which doesn't, and the G4 Sawtooth file server running Tiger (not Netatalk). Any one of those could be sending packets that GS/OS 6.0.1 just doesn't know how to cope with, but a cursory Google search and a look through comp.sys.apple2 didn't come up with anything similar. A mystery to be further investigated in a later entry.

In any event, it's good to see it doesn't appear to be caps or failing hardware, and there are other ways to get files to the GS, so not having LocalTalk isn't the end of the world. Turns out this wasn't a Refurb Weekend after all, but who's complaining? That said, however, now that it's back in its right mind there are some future upgrades to do. The first is to consider some other means of mass storage since even old battleaxe SCSI drives don't last forever, and the InnerDrive firmware is notorious for only being able to work with a very small number of specific drive geometries (forget using a drive much later or larger than the 20MB one it has). A Compact Flash card solution exists and would seem the obvious choice rather than faffing around with another SCSI controller. The second is to put the full 8MB of RAM in it; there are some third party cards still made by homebrewers that will do this. If nothing else, between the two of them I'll be able to shoot a whole lot more Nazis for a whole lot longer, and that's always a(n Apple II) plus.

Sunday, November 15, 2020

Fun with Crypto Ancienne: TLS for the Browsers of the Internet of Old Things

The TLS apocalypse knocked a lot of our fun old machines off the Web despite most of them having enough horsepower for basic crypto because none of the browsers they run support modern protocols. Even for Mac OS X, the earliest version you can effectively use for Web browsing is 10.4 because no earlier version has a browser that natively supports TLS 1.2, and for most other old Un*ces and the like you can simply forget it.

To date, other than the safe haven of Gopherspace, people trying to solve this problem have generally done so in two ways:

  • A second system that does the TLS access for them, subsuming the access as part of a special URL. As a bonus some of these also render the page and send it back as a clickable image; probably the best known is Web Rendering Proxy which works on pretty much any browser that can manage basic forms and imagemaps. Despite the name, however, it is accessed as a special web server rather than as an HTTP proxy, so links and pages also have to be rewritten to preserve the illusion.

  • A second system that man-in-the-middles a connection using its own certificate authority; the request is then upgraded. Squid offers this feature and can act either transparently or as an explicit HTTP proxy. Modern browsers defeat this with certificate pinning, but the old browsers we're interested in predate it; you do need to add the proxy's CA as a trusted root, however.

The man-in-the-middle step is needed because most old browsers that are SSL or TLS aware don't want the proxy messing with the connection (it's supposed to be secure, dammit), so they open up a raw socket with CONNECT to the desired site such that all the proxy does is move data back and forth. I imagine it is eminently possible on today's fast systems that an SSLv2 or SSLv3 connection's symmetric key could be broken by brute force by a transparent proxy and used to decrypt the stream, then re-encrypt it to modern standards and pass it on, though I couldn't find a public package obviously like that. (If you know of one, post it in the comments.)
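For reference, the CONNECT tunnel described above looks about like this on the wire (hostname hypothetical); once the proxy answers, it's all opaque TLS bytes and the proxy can only shuttle them:

```
CONNECT HTTP/1.0

HTTP/1.0 200 Connection established

(TLS handshake and encrypted application data flow in both
 directions from here; the proxy cannot see inside)
```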

There is a third alternative, however: configure the browser to send unencrypted HTTPS requests to an HTTP proxy. Most browsers don't do this because it's obviously insecure, and none do it out of the box, but some can be taught to. What you want is a browser that doesn't speak HTTPS itself but allows you to define an "arbitrary protocol" (with "finger quotes") to use an HTTP proxy for, and come up with an HTTP proxy on the back end that can accept these requests. Such browsers exist and are even well-known; we will look at a few.
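Concretely, with such a browser the request for an https URL goes to the proxy in the clear as an ordinary absolute-URI HTTP request, something like the following (hostname and User-Agent hypothetical), and the proxy does the TLS on the browser's behalf:

```
GET HTTP/1.0
Host: example.com
User-Agent: OmniWeb/2.7

```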

But let's do one better: all these approaches above need a second system. We would like whatever functional layer we have to bolt on to run on the client itself. This is a retrocomputing blog, after all, and it should be able to do this work without cheating.

To that end allow me to introduce Crypto Ancienne, a TLS 1.2 library for the "Internet of Old Things" with pre-C99 compatibility for geriatric architectures and operating systems. What's more, Cryanc includes a demonstration application called carl (a desperate pun on curl) which, besides acting as a command-line agent, is exactly this sort of proxy. You can build it, have it listen with inetd or a small daemon system like micro_inetd, and point your tweaked browser's settings at it. If it's listening on localhost, then no one can intercept it either. Et voilà: you just added current TLS to an ancient browser, and you didn't even burst any wineskins.

The browser that allows this nearly perfectly and with fairly good functionality is the venerable OmniWeb, prior to version 4.2. Although the weak HTTPS of the era can be added to OmniWeb with a plugin, and later versions even included it (I'll discuss that momentarily), it is not a core component of the browser and the browser can run without it. OmniWeb started on NeXTSTEP as a commercial product (it's now available for free); version 2.7 ran on all the platforms that NeXTSTEP 3.3 supported including 68K, Intel, SPARC and HP PA-RISC. We have such a system here, an SAIC Galaxy 1100 named astro, which is a portable ruggedized HP Gecko 9000/712 with an 80MHz PA-7100LC CPU and 128MB RAM.

carl builds unmodified on NeXTSTEP 3.3 (cc -O3 -o carl carl.c); the C compiler is actually a modified gcc 2.5. micro_inetd.c just needs a tweak to change socklen_t sz; to size_t sz; and then cc -O3 -o micro_inetd micro_inetd.c. Then we need to configure OmniWeb:

You will notice that we are running carl via micro_inetd on port 8765 in the terminal window at the bottom; the command you'd use, depending on where the binaries are, is something like micro_inetd 8765 carl -p (the -p puts carl into proxy mode). The URL we will use is thus http://localhost:8765/, though note that micro_inetd actually listens on all interfaces, so don't run this on an externally facing system without changes. We have assigned both http and https protocols to that proxy URL and OmniWeb simply assumes that https is a new and different protocol that the proxy will translate for it, which is exactly the behaviour we want. Let's test it on Hacker News:
Excellent! Self-hosted TLS on our own system! Let's try!
OmniWeb 2.7 doesn't support CSS or JavaScript, and its SGML-to-RTF renderer (!) is not super quick on an 80MHz computer, but it's remarkable how much actually does work and it's even somewhat usable.

What about later versions? As most readers already know, NeXTSTEP 3.3 became OpenSTEP 4.0 (dropping PA-RISC, boo hiss), and then after Apple bought NeXT, or possibly the other way around, became Rhapsody 5.0. Rhapsody was a curious mixture of a not-quite-faithful facsimile of the Mac Platinum interface and OpenSTEP underpinnings and was eventually a dead end for reasons we'll mention, but Apple did turn it into a saleable product, i.e., the original Mac OS X Server. OmniWeb 3 runs on Rhapsody, and of course we have such a system here, the best laptop to run Rhapsody on: a PowerBook G3 WallStreet II "PDQ" named (what else?) wally with a 292MHz G3 and 384MB of RAM.

We built carl and micro_inetd on wally in the same way; its cc is actually a cloaked gcc. Configuring OmniWeb 3 is a little trickier, however:

You'll notice the non-proxied destinations and protocols are off. When these were on, it seemed to get confused, so you should disable both of them unless you know what you're doing. Again, note the terminal window in the background running carl via micro_inetd with the same command, so the URL is once again http://localhost:8765/, and both http and https are assigned to it. You can see it was already working, but here it is without the settings window in the way:
And here's Hacker News:
There is still no CSS, but there is some modest improvement in the SGML rendering, and the G3 rips right along. I actually rather like Rhapsody; too bad not much natively runs in it.

Rhapsody was a dead end, as I mentioned: for the Mac OS X Public Beta, Apple instead introduced the new Aqua UI and made other significant changes such that many Rhapsody applications weren't compatible, even though they were both based on early releases of Darwin. Nevertheless, the Omni Group ported OmniWeb to the new platform as well and christened it OmniWeb 4. OmniWeb 4 was both better and worse than OmniWeb 3: it has a much more capable renderer, and even does some CSS and JavaScript, but it is dreadfully slow on some pages such that the 600MHz iMac G3 I ran it on seemed significantly slower than wally, which ran at less than half the clock speed. In version 4.2 OmniWeb started using the system proxy settings instead of its own (ruining our little trick), and with the availability of Apple WebCore with Safari in Mac OS X Jaguar 10.2 a new and much faster OmniWeb 4.5 came out based on it. If it weren't for the fact I was already a heavy Camino user by that time I probably would have been using it too.

This leaves us with 4.0 as the last OmniWeb we can use carl for, but 4.0 was written for Cheetah 10.0 specifically and seems to have issues resolving domain names beyond Puma 10.1. Not a problem, though, because carl can do that work for it! Here we are working on christopher, a tray-loader strawberry iMac with a 600MHz Sonnet HARMONi G3 upgrade card and 512MB RAM running 10.2.8.

The Omni Group still kindly offers 4.0.6 for download. Drag the application from the disk image to /Applications, but before you run it, open the package in the Finder and go into Contents and Plugins. This is one of the releases that included HTTPS support, so drag HTTPS.plugin to the Trash, empty the Trash and start up the browser. Configuration is much the same as OmniWeb 3 but with one minor change:

Again, we have the Terminal open running micro_inetd and carl (it's actually running the binaries copied off wally!) on port 8765, but since OmniWeb 4 can't resolve domain names on 10.2, the URL points at the loopback IP address directly instead of localhost. Non-proxied destinations and protocols are likewise off. With that, here's Hacker News:
And here's another page:
The rendering improvements are obvious but so is the significantly increased amount of time to see them. By the way, if you're wondering where the window shadows are, that's because I run Shadowkiller on this iMac. Without it, its pathetic Rage Pro GPU would be brought to its knees in Jaguar.

So that's it for OmniWeb. What other browsers can we use for this? Surprisingly, an even more famous name: NCSA Mosaic!

NCSA Mosaic was available in multiple forms, including one also maintained by yours truly, Mosaic-CK, which descends from the last Un*x release (2.7b5) and the only NCSA browser for which the source code is known to survive. With the Mosaic-CK changes it builds fine on present-day macOS, Mac OS X, Linux and others. Like OmniWeb it treats HTTPS as a new and different protocol and you can tell Mosaic-CK to use a proxy to resolve it, so here's Mosaic-CK 2.7ck13 on my Raptor Talos II running Fedora 33 showing Hacker News.

You can set up the proxy rules with the interface, but it's simpler just to make a proxy file. If you run Mosaic-CK once, a preferences folder is created in ~/.mosaic (or ~/Library/Mosaic-CK for Mac). Quit Mosaic-CK and inside this folder, create a file named proxy like so:

https 8765 http
http 8765 http

Each line must end with a space before the linefeed. Then, with carl running as before, Mosaic-CK will access HTTPS sites through the proxy.

Naturally this doesn't extend the functionality of Fedora 33 very much though (especially since I'm typing this in Firefox on the very same machine), so what about systems that can run Mosaic-CK yet have no other options for modern TLS? One of those is the very operating system Mosaic-CK was originally created for, Tenon's Power MachTen.

Power MachTen is essentially OS X inside-out: instead of running Classic on top of a Mach kernel, Power MachTen and its 68K ancestor Professional MachTen run a Mach kernel on top of the classic Mac OS. I have it installed on bryan, my 1.8GHz Power Mac G4 MDD running Mac OS 9.2.2, which you earlier met when it chewed through another power supply (as MDDs do). Power MachTen has its own internal X server on which it runs AfterStep by default and includes Motif libraries. Here's the MDD viewing Hacker News; notice the classic Mac menu bar and the xterm running micro_inetd and carl.

However, even though Power MachTen uses gcc 2.8.1 and no modifications to the source code were required, some hosts consistently have issues: one site, for example, throws a TLS alert 10 (unexpected message), and some other sites that appear to use a similar server stack do the same. Still, this is substantially more than OS 9 can do on its own. What if we moved this to a "real" Apple Unix -- A/UX?

Most readers will know what A/UX is, Apple's SVR2-based Unix for most 68K Macs with an FPU and MMU. It is notable in that it also includes System 7, allowing you to run both standard Mac apps of the day as well as compile and run binaries from the command line (or use the built-in X server), so we'll run it on spindler, a Quadra 800 clock-chipped to a 38MHz 68040 with 136MB of RAM running A/UX 3.1. There is a Mac version of NCSA Mosaic which for some reason uses a different version number, though the source code is apparently lost. It runs just fine in A/UX's Mac Finder, however, so we'll install NCSA Mosaic for Mac 3.0b4. Instead of the included Apple cc we'll use gcc 2.7.2, which is available from various Jagubox mirrors; carl builds unmodified and micro_inetd just needs the socklen_t fix.

3.0 is the only release of NCSA Mosaic that allows suitable proxy settings, at least on the Mac (2.x used "gateways" fixed to conventional protocols instead). We define http and https protocols, then point them both at localhost:8765 (use "Remote" so that you can fully specify the host and port). carl is already running under micro_inetd in the background (see the CommandShell window).

Here is 3.0b4 (trying to) displaying Google. I'd love to show you Hacker News, but it can't cope with the reflow and crashes. These crashes don't occur in Mosaic-CK, nor does the <script> spew; I don't know if the Windows version of Mosaic does this but I don't have a Windows port of carl currently.
3.0b4 also doesn't like getting HTTP/1.1 replies, and some servers answer with /1.1 responses even to /1.0 requests. That's probably inappropriate behaviour on their part, but Mosaic doesn't even try to interpret the reply in those cases. The -s option to carl can fix this for some sites by spoofing /1.0 replies (though the headers are passed unchanged), but some sites won't work even with that.

So, since Mac Mosaic 3.0b4 is persnickety and crashy as heck, do we have an alternative that can be configured in the same way? Not the usual suspects, no: not Netscape, nor MSIE, nor NetShark, nor MacWeb. But incredibly, MacLynx works!

MacLynx is a port of Lynx 2.7 to 68K and PowerPC with (as befits Lynx) very light system requirements. The source code is available, though the binary I'm using here is monkeypatched to fix an issue with an inappropriate Accept-Encoding header it sends. Configuring it is very un-Mac-like: edit lynx.cfg and set http_proxy and https_proxy appropriately, as we are doing here in BBEdit 4.1.

Unfortunately MacLynx still has some other problems which will require a trip to the source code to fix, including not knowing what to do with text/html;charset=utf8 (so no Hacker News). Similarly, carl on A/UX has the exact same failures on the exact same sites in my internal test suite as it did on Power MachTen, which makes me wonder if something in Apple's lower level networking code is responsible (so those sites are out here too). But, hey, here's Google over TLS, and there's no script-spew!
This problem doesn't occur with apps running in Classic under Mac OS X talking to carl running in the Terminal, by the way. That said, if you just want TLS 1.2 on Mac OS X Tiger, you could just run TenFourFox and even get TLS 1.3 as part of the deal.

Anyway, this entire exercise demonstrates that with a little care and the right browser you can bolt modern cryptography on and put at least some of these machines back to work. I'm planning to do further ports to IRIX (it builds already but MIPSPro c99 miscompiles some sections that need to be rewritten) and SunOS 4 (needs support for the old varargs.h), and I've got an AlphaPC 164LX running Tru64 here doing nothing as well. I'll have to think about what browser would be appropriate for IRIX other than Mosaic-CK, but Chimera runs nicely on SunOS 4 and the source code is available, and it doesn't need Motif (so it could even be an option for A/UX or older HP-UX). For classic Mac, MacLynx works very well already, so if we can fix its minor deficiencies and make it a little more Mac-like I think it will do even better on 68K systems in particular.

Of course, a still better idea would be to simply integrate native HTTPS support into those classic browsers for which we do have the source code using Crypto Ancienne itself rather than carl as a proxy. That's an obvious goal for a future release of Mosaic-CK.

And, well, maybe this is an opportunity to make Gemini appropriate for retrocomputing. A Gemini client becomes possible now that we have a TLS 1.2 client, and its lighter weight document format would be an especially appropriate choice for these machines. We could even bolt it onto these browsers here by defining a new protocol gemini:// for them and writing a proxy to translate Gemini to HTTP/1.0 and its document format to HTML; you could start with carl itself and make the appropriate modifications. Anyone feel like a weekend project?
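As a starting sketch for that weekend project, here's the document-format half in Python: a minimal text/gemini-to-HTML converter plus a fetch routine. The tag mapping is my own choice, the HTTP-proxy plumbing that would accept gemini:// requests from a browser is left out, and error handling is elided.

```python
# Sketch of the Gemini half of a Gemini-to-HTTP/HTML translating proxy.
# The HTML tag mapping is an arbitrary choice, not part of any spec.
import socket, ssl
from html import escape

PRE_MARKER = "`" * 3   # gemtext preformat toggle (three backticks)

def gemtext_to_html(text):
    out, pre = [], False
    for line in text.splitlines():
        if line.startswith(PRE_MARKER):
            pre = not pre
            out.append("<pre>" if pre else "</pre>")
        elif pre:
            out.append(escape(line))
        elif line.startswith("=>"):
            # link line: => URL [optional label]
            parts = line[2:].strip().split(None, 1)
            if parts:
                url = parts[0]
                label = parts[1] if len(parts) > 1 else url
                out.append('<p><a href="%s">%s</a></p>'
                           % (escape(url), escape(label)))
        elif line.startswith("#"):
            # heading, levels 1-3
            level = min(len(line) - len(line.lstrip("#")), 3)
            out.append("<h%d>%s</h%d>"
                       % (level, escape(line.lstrip("#").strip()), level))
        else:
            out.append("<p>%s</p>" % escape(line))
    return "\n".join(out)

def gemini_fetch(host, path="/", port=1965):
    # Gemini mandates TLS; most capsules use self-signed certificates,
    # so verification is disabled here (fine for a sketch, not for real use).
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            # a Gemini request is just the absolute URL plus CRLF
            tls.sendall(("gemini://%s%s\r\n" % (host, path)).encode())
            reply = b""
            while True:
                chunk = tls.recv(4096)
                if not chunk:
                    break
                reply += chunk
    # response: "<status> <meta>\r\n" header, then the body
    header, _, body = reply.partition(b"\r\n")
    return header.decode(), body.decode("utf-8", "replace")
```

From there the remaining work is the HTTP/1.0 server loop in front of it, which carl already demonstrates.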

Crypto Ancienne is available on GitHub.

Friday, November 6, 2020

MacLynx "beta 2"

Only a few people remember there was once a native Lynx port to the classic Mac OS, circa 1997. A text-based browser on a GUI-only computer would seem contradictory, and certainly wasn't congruent with the HIG, but it was an honest-to-goodness Lynx 2.7 in a terminal-like window and it worked. It also has very low system demands (basically "just" System 7), making it one of the few web browsers you could even run on a Mac Plus. I used it quite a bit on a Mac IIsi, which was the first Mac I ever owned, and for a while it was the only web browser that machine ever ran.

Dusting it off as a test case for a secondary project, I discovered it has a problem which was not apparent during the days I used to use it: it inappropriately sends an Accept-Encoding header that says it can accept gzip, but (possibly a MacLynx-specific bug) it is unable to decompress and view any pages sent accordingly. Back in the day not many servers supported that, so no one really noticed. Today everybody seems to. As a stopgap I figured the easiest way to fix it was simply to make that header incomprehensible so the server would ignore it, so I monkeypatched the binary directly to munge the gzip string it sends (in the data fork for the PowerPC version and inside the DATA resource for the 68000 version). Ta-daa:

Now, credit where credit is due: it looks like Matej Horvat discovered this issue independently about a half-decade before this post. His solution was a bit better: he munged the Accept-Encoding header name itself rather than its value. This is not quite so straightforward in the 68000 version because of how strings are encoded in the DATA resource, but as a belt-and-suspenders approach I went ahead and implemented his fix too (after all, I suppose it's possible my munged string may match an encoding method that could be supported in the future).
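If you ever need to redo this sort of patch, it's easy to script. Here's a generic sketch in Python that munges a byte string in a file; the filename and replacement string in the example are hypothetical, and a real 68K resource fork would need a Mac-side tool instead.

```python
# Generic sketch of munging a string inside a binary, as described above.
# The replacement must be the same length so no offsets shift.
def munge_string(path, old, new):
    assert len(old) == len(new), "patch must not change the file size"
    with open(path, "rb") as f:
        data = f.read()
    if old not in data:
        raise ValueError("string not found; wrong file or already patched?")
    with open(path, "wb") as f:
        f.write(data.replace(old, new))

# hypothetical usage:
# munge_string("MacLynxPPC", b"gzip", b"gz1p")
```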

Olivier's old site has not been up for years, so I resurrected it from a local copy, and MacLynx is again hosted, this time on Floodgap. Unfortunately I don't have his old French localization, but I do have the source code (for CodeWarrior Pro 2), and you can download this unofficial "beta 2" from there as well. Both the 68K and PowerPC versions have both patches.

Sharp-eyed readers will have noted something a little odd about the screenshot. The secondary project works, but needs some polish and a couple minor bug fixes. More later.

Thursday, November 5, 2020

A Gopher view of Gemini

With the possible exception of Google, and I fully acknowledge the irony that this blog post is hosted on a Google property, everyone thinks the Web has gotten too big for its britches. And, hey, maybe Google agrees but it's making them money and they don't care. In this environment of unchecked splay, the browser has since metastasized to an operating system running on an operating system (you could even argue that simplified document presentations like Reader View are browsers themselves running on this browser-as-operating-system), web services drive browser technology to facilitate greater monetization of users rather than greater ability to access information, and the entire client stack has reached the level of baseline bloat such that many operating systems and older systems are cut off completely.

As a consequence I think one of the biggest reasons for the Gopher protocol's return is as a reactionary backlash against the excesses of the Web. But this motivation is by no means universally central. When I started my own Gopher server in 1999, I did it to preserve a historic technology that I always wanted to be part of (I even maintained news and weather through my .plan for a while in university, since I didn't have server hardware of my own back then), and many of us early Gopher revivers probably approached it from the same perspective. This affects our view of the protocol's niche as well, as I'll explain a little later.

Really, Gopher isn't a direct ancestor of the Web either; at most it can be argued to be an influencer. Gopher actually originated as an attempt to put a friendlier frontend onto FTP (the venerable File Transfer Protocol that Google is doing its level best to kill too), meaning Gopher menus aren't hypertext any more than a file directory listing is, and today's conceptualization of them as primitive hypertext is merely an artifact introduced by the early (surprise!) web browsers which subsumed it into their core metaphor. While it's become the dominant paradigm for modern Gopher holes and much effort is spent shoehorning menus into HTML-like documents, it's really not how Gopher menus are designed to function, despite such abuse even by yours truly who should know better (see, you really can get away with hypocrisy if you admit it).
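For the record, a Gopher menu is just one item per line of tab-separated fields -- an item type character, a display string, a selector, a host and a port -- terminated by a lone period, which is why it maps so naturally onto a directory listing (tabs shown as <TAB>, values hypothetical):

```
0About this server<TAB>/about.txt<TAB>gopher.example.com<TAB>70
1More stuff<TAB>/stuff<TAB>gopher.example.com<TAB>70
.
```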

All that wind-up was to offer a backdrop to contrast against Project Gemini. Gemini has caught a lot of people's fancy. Gemini finds Gopher too limiting and the Web too big, so it tries the Goldilocks approach. Rather than Gopher's strong menu-document hierarchy, Gemini lets you have documents with (horrors) inline links and formatted text, but the way in which you specify that document is intentionally bowdlerized compared to HTML so people can't keep adding features. Gopher has a trivial open wire protocol where the selector and a handful of item type characters do all the work, while heavy HTTP has a header and method for every kink and everything is encrypted, so Gemini splits the difference by requiring TLS 1.2 with SNI and MIME types but doesn't cache, compress or cookie. It's a vision of the web where if you want to view an image, you click on a link, scripting is for playwrights, and if you want to track a user across visits, it's hard and imprecise.
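A complete Gemini transaction really is about that simple: the client opens a TLS connection, sends one absolute URL terminated by CRLF, and gets back a two-digit status, a meta field (usually a MIME type), and then the body until the server closes the connection (hostname hypothetical):

```
C: gemini://example.com/<CR><LF>
S: 20 text/gemini<CR><LF>
S: # Welcome
S: => gemini://example.com/about About this capsule
   (server closes the connection to end the body)
```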

And that's the core difference, philosophically, between Gopher and Gemini: Gemini truly is a descendant of the Web, just one with an eye on the Gopher renaissance that happened in between. From the view of us inhabiting Gopherspace, it doesn't really occupy the same present-day niche: it isn't historical, because it's a new protocol, and it isn't retrocomputing, because the document format is non-trivial and its TLS requirement excludes even some relatively recent systems. This is arguably intentional, in fact. You can run a Gopher client on a Commodore 64 and I just wrote one for a 68030-based system; I don't see that happening for Gemini. Just as it won't displace the Web, Gemini isn't going to displace Gopher — though in fairness it doesn't try — because it simply doesn't scratch the same itches.

What Gemini will do, however, is essentially freeze Gopher in place by peeling off those who find it too constraining. Gopher+ was stillborn, and only a few consensus expansions have occurred, like hURLs and CAPS, which are really just conventions anyway since they don't actually change the core protocol or how menus are parsed and handled. A few Gopher sites offer Gopher over TLS, which only some specialized clients and proofs-of-concept can connect to. But since Gemini mandates TLS, every Gemini client already speaks it (and some speak Gopher besides), and the work necessary to transcode Gopher menus into text/gemini is minimal, Gemini seems a more likely transport for a "secure Gopher" than bolting TLS onto a protocol that was never designed for it in the first place. These are the folks least likely to stick with Gopher because of its limitations anyway, so I don't see the phenomenon as anything that would harm the Gopher community. And freezing Gopher more or less as it was only enhances its historicity.

I've observed before that the Web and Gopher can coexist, and I don't see why Gemini and Gopher can't either. Despite their superficial similarities and some overlap in motivations, they don't really serve the same purposes and that's absolutely a-okay. From my view on port 70, anything that gets us to a point where people start thinking about alternatives to an increasingly out-of-control Web is a positive, and I'm glad both Gemini and Gopher are doing that.