Sunday, June 27, 2021

The parallel universe of FireWire hubs

I like FireWire and I still use FireWire (I've even used it to power a WiFi-to-Ethernet connector on a PowerBook G4), but this is a retrocomputing blog, and for the larger consumer market IEEE 1394 is now just an odd little niche. Many postmortems have been done on the death of FireWire, and it still pops up in weird places like the military (see AS5643 and descendants like AIR5654A), low-latency audio, infotainment systems and some security and monitoring devices. But I think the biggest thing that doomed it was that it was perceived as a competitor to USB and never sufficiently differentiated itself. Device manufacturers didn't help: with the exception of some high-end A/V equipment and tape camcorders, the same devices (mass storage, webcams, scanners) showed up with USB ports as did with FireWire ports, and there were many commodity PCs that lacked FireWire entirely and many devices that were USB-only, so USB connectivity won out. Licensing costs no doubt played a major role, but market perception greatly hastened the process. FireWire still has infrastructure advantages in topology, latency and segment length, but it also makes devices more expensive, and even these points in its favour are outweighed by the comparatively prodigious peak bandwidth of USB 3, despite FireWire having won the bandwidth war handily for many years.

One example of this was the parallel universe of FireWire hubs. If you think of FireWire as "a big USB" then a hub wouldn't seem so strange, but FireWire was actually meant to replace SCSI. SCSI and FireWire are peer-to-peer: any device on the bus can talk to any other device, unlike USB where each bus has at most one host and the host does all the initiation of data transfer. (USB On-The-Go still has one host and one host only; it just allows certain devices like your mobile phone to swing both ways.) The point-to-point capabilities of USB 3 notwithstanding, a USB hub has one upstream port for the host and multiple downstream ports for the devices. A FireWire hub, however, is like getting a longer internal SCSI cable; more devices simply exist on the same bus. Connecting multiple FireWire hubs just makes a bigger bus because all the ports are the same.

However, this difference was mostly lost on consumer users because a FireWire hub, possibly intentionally, superficially resembled an oddball USB hub with weird connectors and a higher price. And both existed largely for the same reasons, first and foremost to give you more ports, and secondarily also to inject power into the system where multiple bus-powered connections could make voltage sag. (They could also improve stability by smoothing out impedance mismatches between controller chipsets, which solved an issue with my Sawtooth G4 freaking out when I connected two FireWire drives to it directly, even though they were both self-powered and the effective topology with a hub was still the same.) Nevertheless, FireWire hubs were absolutely distinct devices and at least in some respects catered to different markets. So what did this parallel universe look like?

Belkin was probably the most prolific manufacturer of consumer-level FireWire 400 hubs. These were targeted mostly at Mac users, though they would work just fine with PCs. Their first product was the F5U524, which was a four-port powered FW400 system (one port connected to the computer, so you got three extra ports). They sold far more of the F5U526 6-port hubs, though, and the most common form factor was this one:

Yes, that particular unit was a Fry's reject that eventually ended up in the Weird Stuff bargain bin (rest in peace), but the price tag gives you an idea of why consumers might have found it unappealing. It also came in silver, shown here. I have a silver F5U526 connected to my Quad G5 and my Sawtooth G4.
Belkin later made an awkward curved variant with one port in the front and five in the back. Although still marked as an F5U526, it was only rated for 1.25A instead of 1.5A.

Nearly as prolific was Kramer Tools, who still specialize in high-end A/V hardware. In addition to their rack-mount FireWire switchers (controllable by IR remotes or RS-232/485) and FireWire active repeaters, they made four-, six- and even eight-port powered FW400 hubs intended for prosumer and professional studios. These were definitely not Best Buy specials: made of durable steel, they'd probably fracture someone's skull if you lobbed one at them. Their eight-port unit was the biggest FW400 hub I could find advertised.

A similar consumer device to the Belkins was IOGear's GFH610 "FireWire Hud" [sic]. This was a powered 6-port FW400 unit, five in the rear, one in the front. The label pic was stolen from an eBay auction (not mine, not affiliated); hope they don't mind.
The label typo inspires confidence. If you had only half the confidence, the GFH310 only gave you half the ports (two in the rear, one in the front).

Potentially the most outrageous FireWire hub (thanks jonahjoselow) was the Hubzilla. It's like it sounds: Godzilla with four FW400 ports embedded in his spine. I would destroy cities too if I were subjected to this sort of torture. (Image from PCWorld.)

It had an MSRP of $75, an external power option and LEDs for connectivity state and wrecking Tokyo. However, it's not clear if any actually got sold.

As a degenerate case, some FireWire devices offered daisy chaining with input and output ports, which could be considered a hub of sorts (two external ports, with one more effectively taken by the internal device); the Lexar Professional Compact Flash card reader (shown here) and the Iomega Minimax are examples.

I remain a big fan of the Lexar FW CF reader, which transferred data from my camera to my Quad G5 at record speed, certainly quicker than any USB 2.0 card reader; even compared to the USB 3.0 reader on my Talos II, it's still no slouch. However, the Lexar relies on bus power, so it's a suboptimal hub for that purpose.

As FireWire devices became less common, some manufacturers perceived a market for dual hubs — i.e., USB 2.0 and FireWire hubs in the same physical unit. These were not technologically complicated devices: they were nothing more than two hubs on one board in one case, with a separate plug each for the USB and FireWire sides, and they didn't convert USB to FireWire or vice versa. All of the combo hubs I've personally seen have four USB ports and three FW400 ports, like the IOGear GUH420:

Note, however, that you really did get four extra USB ports; devices of this sort had an additional B-jack to connect to the host computer and presented four USB A-jacks for downstream devices. On the FireWire side, though, and just like the FW-only units, one of the FireWire ports had to connect to your computer, so you only got two extra FW ports (not three). This was not a good look for comparison purposes.

Additionally, unlike the FW-only hubs, which universally offered an external power option, some of these were powered and some of them weren't, and almost all of the powered ones only seem to power the USB side (the tipoff is that the wallwarts only provide 5 volts, not 12). The same basic notion was used by devices from Hewlett-Packard (targeted at Windows PCs, with a micro-USB jack for the PC connection) and zombie-Compaq after HP bought them. SIIG sold an internal USB-FW hub for 3.5" drive bays with a 4-pin FW400 and a 6-pin FW400 (plus the internal 6-pin FW400 to connect to the motherboard), but those are electrically identical modulo the power pins, so they're really the same sort of device. It had a connection for 4-pin motherboard power, and because that connector carries a 12V rail, it was one of the few devices to power both sides of the hub.

Belkin also made the F5U507, an external dual device for the Mac market which was also unpowered:
An interesting variation on the 4-3 port design was the Moshi iLynx. With all the ergonomics of an overgrown doorstop, it offered one FW400 and two USB 2.0 ports on each side of the "hump," and thus only advertised itself as a 4-2 device. However, it's actually the same as the others here because it has male FireWire and USB plugs (not jacks) for the host exiting the back. This means it's still electrically a "4-3," but at least it was honest about how many more ports you really got. It was also not powered.
FireWire 800 was a big jump for mass storage performance and kept the protocol relevant for a while longer, even though for video applications old brown FW400 was still just dandy. By then, however, the market was much smaller and most of the hub options either evaporated entirely or moved prosumer. Belkin did not make an FW800 hub, dual or otherwise, though their Thunderbolt docks (like other manufacturers' Thunderbolt docks) have a single FW800 port. Moshi upgraded the iLynx to the iLynx 800, which was otherwise the same form factor:
Kramer Tools also expanded their offerings to FW800, though they only produced three- and four-port varieties (there was also a two-port unit which is properly considered a repeater, as it's designed to connect to another repeater over RG-6 coax). If you wanted an eight-port FW800 hub, then you had to have the Nitro AV. I don't have eight FW800 devices, but I still had to have it:
This monster comes with a 3A power supply, more than enough to keep every power-hungry beast on the bus docile and purring:
Some companies are still producing, or at least still make available, FireWire 800 hubs. Amazon lists a no-name 3-port device:
I don't know anything about the manufacturer and it seems a lot of people think it's junk.

And yes, I'm still using FireWire, but only for the same specific purposes I used it for back in the day. Besides A/V (my HDV roadgeek camcorder, my Canopus ADVC-300 framegrabber and my original iSight), I almost entirely use FireWire for booting and exchanging files with the Power Macs. But it lingers in prosumer A/V applications like recording studios and multicamera security setups, and big multiple-endpoint configurations like that are where FireWire's topology really shows off its flexibility. If you're in that category and also absolutely nucking futs, I spotted this auction for a terrifying 16-port FW800 device in an external PCIe enclosure (not my auction, not affiliated).

Put enough juice through it and it might even start a FireWire fire. Try that with USB. Just don't blame me if you succeed.

Monday, June 7, 2021

Monterey? BTDT. Try Project Monterey.

Apple's announcement of the next version of macOS, Monterey, means my 2014 MacBook Air now gets to join my Quad G5 in the "not supported" category (not that I care, it's Mojave Forever). But it's a good reminder of the previous Project Monterey, a multicorporation attempt to make the One Unix To Bind Them All from IBM's AIX, SCO's UnixWare (SCO then being the putative holder of the True Unix) and Sequent's DYNIX/ptx, which would run on the One Architecture To Bind Them All, IA-64 (a/k/a Itanium). In case you weren't yet with the new hotness, it would run on your old and busted 32-bit x86 hardware, too.

Today you'd laugh your fool head off at the very thought of "Itanic" taking over the world, but when it was announced in October 1998 Monterey was a credible threat. With IBM, SCO, Sequent (which IBM bought) and Intel as backers, its ascent to dominance seemed inevitable, and its ability to run on existing and future hardware, along with the jackboots of AIX and the multiprocessing strength of DYNIX, was thought to be strongly appealing to high-end enterprise IT. (The issue of IBM's then-contemporary POWER server line and Intel's Pentium server offerings potentially being in direct competition was handwaved away.) A long list of the usual hangers-on backed it at the time as well, including Acer, Compaq, Groupe Bull, Samsung and Unisys.

The damn thing actually shipped, too, because most of it was based on already extant code. Project Monterey's first release essentially repackaged AIX on POWER, and UnixWare 7 and DYNIX/ptx on x86; the next wave in 2000, the "real" Project Monterey, was AIX 5L for IA-64, which IBM actually sold on request and which apparently had some small number of running systems in the wild.

Oddly, what doomed Project Monterey was Linux on IA-64, the so-called "Trillian Project" that emerged in mid-1999. Intel, always one to hedge its bets, was part of that effort along with Silicon Graphics, VA Linux and Hewlett-Packard, but most of the work was done by Cygnus before their eventual purchase by Red Hat. SGI and HP, of course, made their own Itanium machines; HP, to its current chagrin, still does. As if in response, IBM promised Monterey would have strong Linux compatibility, but if you needed Linux compatibility as a primary feature, why not just run Linux? A Caldera executive was quoted in InfoWorld that year saying, "I would expect over the next one to two years [Linux for IA-64] will catch up and in some cases exceed Monterey, for no other reason than the sheer number of people contributing to Linux."

And, well, that's exactly what happened: Linux on IA-64 outlasted Monterey, and Monterey went down in flames. IBM had sold fewer than 50 licenses by the time Monterey was quietly shot in the head in 2003, though some sources say IBM had already pulled out as early as 2001. Its breakdown directly led to the SCO vs IBM lawsuit, in which SCO went bankrupt and, in a related case, was found never to have owned the Unix rights it claimed in the first place. Itanium, for its part, will cease shipments a little over a month from now on July 29, 2021.

Somehow I just don't see this Monterey being that interesting.

Sunday, April 25, 2021

Refurb weekend: Hewlett-Packard 9000/350

I'm not really a "big iron" enthusiast; I've always liked small systems (for one thing, you can collect more of them without annoying your spouse, though my wife points out for the record she is generally tolerant of my hobbies). One really must specialize in those kinds of machines as a collector, not only for their power and space demands, but also for their sometimes unusually complex maintenance requirements.

That doesn't mean I don't have larger machines, however. Besides my three Apple Network Servers (about the size of a decent dorm fridge), a PDP-11/44 in storage I'm not sure what to do with yet and the 2U-in-a-tower IBM POWER6 which runs Floodgap, my other "big" system is my only 1980s-era Un*x workstation, a 1987 HP 9000/350. It came to me already named (homer).

Homer's system processing unit has a 25MHz 68020 and 20MHz 68881 FPU paired with HP's custom MMU (not a 68851) and 32K of cache, which HP claimed was four times as fast as the VAX 11/780 at integer math. It is closely related to the slower, stripped-down 330 (both CPU and FPU at 16.67MHz, no cache; in fact, HP calls the 350 a 98562B and the 330 a 98562A). 9000/300 systems are unusually modular by modern standards: the SPU is in a separate, self-contained box from the rest of the peripherals, all of which are installed in a custom HP steel rack. As internal options it has a HP 98545A colour graphics board (in the bundled configuration HP sold as the 350C) that delivers 1024x768 graphics with 16 colours; 16MB of parity RAM (expandable to 32MB, but that needs the three-connector system bus plate which I don't have); a standard Human Interface board (HP 98562-66530, with later versions sold as the HP 98247A) containing low-speed HP-IB, NIC (10Base2, the last Thinnet machine on my household network), HP-HIL (with 46010A keyboard and 46060A mouse), audio and RS-232; plus the HP 98562-66531 optional high-speed HP-IB board necessary for booting from a hard disk. The monitor is the largest CRT display I own, a 19" Sony GDM-1902 that HP repackaged as the 98782A, driven at the board's full 1024x768 resolution.

Over high-speed HP-IB it is connected to a HP 6000 C2203A 670H, an indestructible 670MB CS/80 hard disk with the system name on the front that will outlast the cockroaches. I also have a benighted 9144A tape drive that refuses to stay locked in the rack and requires pre-formatted IOTAMAT QIC cartridges, yet won't read them even with a retrofitted capstan, and a 9122D dual DS/DD 3.5" floppy drive. (Yet to be racked, pending investigation, are a 600/A CD-ROM and a 6400 C1511A 1300H DDS-1 tape drive.) It runs HP-UX 8.0, though I am told the NetBSD port is excellent.

In 1987 this would have been a heck of a computer, but you would have paid somewhere north of $50,000 for this configuration, which would be a whopping $115,000+ in 2021 money. For comparison, the most I've ever personally paid for a computer was $11,000 for my POWER6, purchased used in 2010 (in 2021 about $13,300), whereas this machine I got for "come and get it" over a decade and a half ago (tip of the hat to Stan and Kevin). It also came with a separate 9000/319C+ system unit, but that's in storage since the 350 is much more powerful (the 319C+ is essentially a consolidated, cost-reduced and minimally upgradeable 330). The Homer Simpson doll was included.

A refurb weekend had been planned for Homer for a while owing to the dead clock battery (it uses the slightly larger 2325 lithium coin cells instead of the more typical 2032s), and it had always had a flaky 10Base2 connection to the network backbone, which I chalked up to cabling because I could usually fix it by messing with the cable and resetting the LAN hardware in /usr/bin/landiag.
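
For the record, resetting the LAN hardware goes roughly like this (landiag is menu-driven; this is a sketch from memory and the exact prompts and menu entries vary between HP-UX releases):

    /usr/bin/landiag
        lan        <- enter LAN Interface test mode (the device file is typically /dev/lan0)
        display    <- dump the interface's status and statistics registers
        reset      <- reset the LAN interface and run its self-test
        end        <- return to Test Selection mode
        quit       <- exit landiag

That, plus some wiggling of the Thinnet cable, usually brought the network back.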

This time, however, no amount of tweaking and cajoling could get the network connection back up again. The time had finally come for ... a Refurb Weekend!

The 350 and relatives subdivide internally into RAM board(s), the CPU board, any graphics and other option boards, and the Human Interface board, which is where most of the peripheral connections reside in the default loadout. Its original HP 98562-66530 board looks like this:

The low-speed HP-IB, HP-HIL, audio, RS-232 and NIC are all consolidated onto a single unit, replacing the separate boards (HP 98625 HP-IB, HP 98643 LAN, HP 98620B DMA and HP 98644 serial port cards) required in earlier models. The golden board piggybacked on it is the 98562-66531 high-speed HP-IB board with an integrated cable, which is a functional substitute for the HP 98625B. The unified Human Interface board idea is nice in that you don't need a separate expansion box to get a good mix of devices but bad in that repair is correspondingly much less granular.

The self-test screen showed a valid MAC address for the NIC (the 080009... code), which suggested the MAC portion was working and the problem was either the port or the Thinnet PHY (you kids today have it easy with your newfangled integrated NIC chipsets). On a board of this era they would be separate parts, as we'll demonstrate later, but you can already see that everything was soldered down and not socketed. Since I was uncertain at the time what the fault really was, let alone what I would actually replace the faulty component with, I decided to see if I could simply replace the board.

This turned out to be serendipitous because someone was selling a two-pack of 98562-66534 Human Interface boards for a very reasonable price.

("MADE IN USA": don't see that much anymore!) These newer boards were introduced with the later 360 and 370, but because those SPUs are also quite similar to the 350, they'll work just fine in a 330 or 350. The 66534 variant was especially handy to find because it had a more conventional AUI connector to the MAC (the 360/370's 66533 variant was also Thinnet). Just make sure when you get the board that you slide it into the card guides and fully engage the connectors, or you'll get weird DMA and device failures like this:

After an initial moment of panic, making sure the board had a good connection made the problem go away. Both of them checked out and passed the system self-test. Still, since one had thumbscrews and the other didn't, I decided to use the thumbscrewed one. First, let's replace that bad old battery, which was almost certainly dead as well:

A nice 3 volts and change. Next, let's move the high-speed HP-IB board over (the 670H cannot be booted from the on-board low-speed HP-IB). The integrated cable needs to be removed first, so a bit of nylon spudger action frees that up:

With the cable disconnected, removing the board is then a matter of removing the four screws holding it on its standoffs and levering it out of its connector with the spudger again:

Inspecting the 66531 board.

No damage, pin headers look good. Lots of glue logic and not much else.

Now to slip off the cable. The integrated cable has two metal chokes which serve to orient it on the rear plate. You don't need to remove these chokes, but you do need to slide the part of the plate holding the backmost choke off to the side (a pair of pliers helps). Don't pull out the stud holding it; the stud merely acts as a pivot. Just grab and pull the plate tab itself.

With the plate tab off, the cable can now be gently pulled out of its clamp.

When installing the high-speed board in its new home, connect the HP-IB cable to the board first (there's a hollow tab that serves as a key; in the installed position this hollow tab should be up and visible) so that you don't trap the cable under the card as you install it. The 66534 board does not have a moveable tab, just a gap for the divot in the rear choke to sit in, as shown here. Also, the cable clamp faces up on the 66534 rather than to the side as on the 66530, so the whole thing just goes straight down onto the clamp and the board connector.

Seat the board and put the screws back in. It may flex a little until it settles into its connector. I then made sure the DIP switches on the new board matched the old one so it would be configured the same way.

One last detail is what we'll use to connect it to the network. While my hub does have an AUI port, meaning I could just run a straight-through DA-15 cable, I might as well put that box of MAU transceivers to good use. I've been in this business long enough to even have some favourite brands:

BoseLAN MAUs have lots of blinkenlights and are the most compact, but this model's RJ-45 jack is on the side, which would be right in the way of the high-speed board's HP-IB cable. (BoseLAN got bought by Cable Design Technologies, which later merged with Belden.) I am also a big fan of Allied Telesyn gear (now Allied Telesis) -- that 10MBit backbone hub is an AT unit that has been in almost continuous service since about 1999 -- and their MAUs are also very good, but I don't like opening up NOS boxes if I have something loose that will suffice. So I dug out a Transition (still around, apparently) MAU, which has very few blinkenlights but wasn't sitting pretty in a new box either. Yes, I've got a shoebox stuffed full of these things.

Installed the new board, but not without a little bit of blood from the side rails. I'm not sure the degrading foam on the sides of the rack is so good for open wounds either.

Booting HP-UX. No more errors!

And testing out the new network card by firing up the Chimera web browser under HP VUE. CDE jockeys will recognise the Visual User Environment as an ancestor, not least because of the Motif interface, and indeed CDE was strongly influenced by it.

After all that, a post-mortem: was the original board repairable? Other than minor differences in chip and component revision (and tape covering an EPROM window, which was replaced by a conventional ROM on the later card), the only differences of significance between the 66530 and 66534 are in the corner near the bar code where the AUI or 10Base2 port would be. The most obvious change is a large chip marked Reliability 2VP5U9 ("QUALITY IS RELIABILITY") which is not present on the 66534. The 2VP5U9 LAN-PAC is a DC/DC converter that turns up many places, including the Commodore A2065 Ethernet card, and according to its blurb "is designed to provide power and isolation for Local Area Network transceiver chips."

The pinout for these things is quite simple (here is a scan from the datasheet); most of the pins are wired together. There are other variants of this part but this one specifically serves for Thinnet (which was also called "Cheapernet" because it was cheaper than the alternatives, as shown in the table).

The underside of the board shows its connections. The chip itself is at U20. There's really only one line involved here, which naturally is one of the outputs.

With the pinout in hand and following the lone trace, it looks like the 2VP5U9 powers U14, which is some glue logic also not on the 66534, and the surrounding discrete components, but not T1 (you'll notice the traces carefully avoid its pins), which is preserved on the 66534. Helpfully the region, which likely constitutes the entirety of the PHY, is outlined in a lighter green than the rest of the PCB, but any of these components or any combination thereof could have been faulty. HP warns about this in the service manual: "Field Repair Philosophy for the Model 330/350 Computers and the HP 98568A Opt. 132 and 98570A Expander is assembly, or board level." Well, I guess that's what we ended up doing anyway.

I'm happy to have it fully working again, though it's sad not to have a reason to mess with Thinnet anymore. Or, maybe it's not that sad: I remember how much I hated it in large deployments. But now that it's back in action I'd like to get at least one tape drive working too; that and a port of Crypto Ancienne will be the next project(s).

Tuesday, April 20, 2021

The better way to get VICE on Ethernet with SELinux

Although I was a registered hardcore user of Power64 when my daily driver was still a Power Mac, now that I'm a daily Linux user on this Raptor Talos II the best Commodore 64 emulator is clearly VICE, the Versatile Commodore Emulator. It not only has highly accurate emulation, but can talk to real disk drives over OpenCBM (I use it with a ZoomFloppy xum1541) and even emulates a whole mess of peripherals, including Ethernet cartridges like the RRNet and clones (on my real Commodore 128, I use a 64NIC+).

However, I'm a Fedora user and SELinux is on by default. SELinux will really ruin your day here because it (quite reasonably) sees a random user application trying to tunnel out a network connection through libpcap/libnet as a security risk and disables it by policy. You find this out the hard way by trying to enable the Ethernet cartridge from the VICE preferences interface and getting a message you need to run it as root. I don't run things like Commodore emulators as root, spank you very much.

Fortunately, there's an easy, (probably) one-time workaround; with libpcap and libnet installed (using tun/tap isn't supported yet), you will have to be root just once to fix the problem. Assuming x64sc (or whichever VICE component you're using) is in /usr/bin, you can give it raw network access with setcap cap_net_raw,cap_net_admin=eip /usr/bin/x64sc. Now you should be able to run it without root privileges and be able to access the raw interface (the whole procedure is recapped below). Here's a little test in Kipper BASIC:

Makes cross-development a lot easier!
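
Since the fix is spread over a couple of paragraphs, here it is condensed into a recap (Fedora package names shown; substitute x128, xvic or whichever VICE binary you actually use, and note that the capability has to be reapplied if a package update replaces the binary):

    sudo dnf install libpcap libnet                             # prerequisites for the emulated cartridge
    sudo setcap cap_net_raw,cap_net_admin=eip /usr/bin/x64sc    # the one-time root step
    getcap /usr/bin/x64sc                                       # verify: should list both capabilities
    x64sc                                                       # run as a normal user and enable the Ethernet cartridge in the settings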

Sunday, April 18, 2021

Don't be fooled by cheap USB multimeters

A fair number of computers people nowadays would refer to as vintage have USB either as an option or built-in, and USB ports crap out like everything else. Accordingly there are testers: at Big Box Hardware Store the other day while I was buying paint, they had these on clearance for just $3 each. That made it worth picking up a couple to mess with.

Whoever wrote the package copy was either a slick advertiser or a liar, but I repeat myself. Among other things it bills itself as a "USB multimeter." This is barely technically true, since it does measure both DC voltage and amperage, but it is definitely not what you'd consider a typical multimeter. It also says on the back that it's "USB 3.0-3.1 Type A," yet it lacks the blue tongue and extra pins of a true Type A USB 3.x connector.

Still, it's cheap, and it will correctly tell you the voltage off the port (as tested against one of my real multimeters). This isn't enough to tell you if the whole shebang, including data lines and signaling, is working, but it seems unlikely you'd have voltage and nothing else, assuming no monkey business like a "condom" was installed. If that's all you want, plus some reassurance the voltage you're getting is nominal, then this is $3 well spent.

However, what it doesn't accurately tell you, and apparently none of the similar small devices of this type will, is the available current. You'll be able to estimate the current draw by plugging something in the other end, but you won't be able to use it to tell if you're connected to a 1 amp or 2.1 amp port. There are USB testers that will put an adjustable constant load of however many amps on the line, and you can determine the available current by how high you can go before the lines sag, but while basic ones aren't exorbitant they certainly cost more than this one did.

You may be able to infer that a device is drawing more power than is available, but you'll need a powered hub to compare against. For example, attaching my INOGENI VGA2USB3 showed exactly 60mA of draw and a voltage of 4.99V when connected directly to this Raptor Talos II (where it doesn't work). Connected to a powered USB 3.0 hub, however, it reads 5.14V and 550mA (and works). You wouldn't have any idea that's not enough without seeing how it performs connected to something else, and you can't assume the port only offers 60mA because the device may simply not draw anything when it fails to initialize. Likewise, the voltage difference probably isn't salient because the USB spec allows up to 5% variance under load, meaning even a voltage of 4.75V wouldn't necessarily be "sagging" per se.
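
One side check you do get for free on a Linux host like the Talos II is what the device asks for, as opposed to what it's actually pulling: assuming the device enumerates at all, its descriptor declares a maximum draw (bMaxPower), which you can read without any extra hardware:

    lsusb -v 2>/dev/null | grep -iE '^Bus|MaxPower'    # declared maximum draw for each device
    cat /sys/bus/usb/devices/*/bMaxPower               # the same figure via sysfs

That still doesn't tell you what the port can supply, but if a device declares, say, 900mA and the inline tester never shows it pulling more than 60mA, that's a hint the device has given up initializing rather than the port capping it.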

Cheap USB testers like this aren't utterly useless, but they're really more useful for confirming normal function than for troubleshooting. If you get low voltage you'd still need to test the computer's power supply as well, and you can only correctly estimate a device's current demand if it's actually functioning and drawing power. You also can't conclude anything about the port's performance under a sustained load, since the tester doesn't generate one. I don't think I wasted my money, but you probably don't want to spend any more than that for such a limited device either.