Past GPU releases: fillrate is king and your drivers are cheating

I mean, the first IBM PCs (well before VGA) had video on a separate card, but everything was a separate card in those. Everyone figured out pretty early on that having separate video circuitry was the only way to get decent (at the time) performance out of a machine, so they all had some form of it, even though there was very little going on with early video chips other than being able to do their thing at the same time the CPU was doing something else. Even expanding to non-PCs (even though it's a generic term now, before the late 90s "PC" explicitly meant IBM x86 machines or compatibles), they all had discrete graphics hardware of one kind or another, sometimes integrated onto the main board, sometimes replaceable.

Nothing really built up a big ecosystem of replacement video cards before the PC, though; computer platforms just didn't last long enough back then. Even though the popular ones like the C64 sold for many years, the state of the art changed fast, and if you wanted the best performance and graphics you could buy into a completely new platform basically every year, until the late 80s when the PC, Mac, and Amiga started crowding everyone else out. You have to remember, there was no such thing as video drivers or even standards until CGA/EGA/VGA, so if you got a new video card for your random 80s microcomputer (they did exist for some of them), it was probably pretty useless outside a few specific games or pieces of software that explicitly supported it.

IBM wasn't interested at all in graphics or games, so third parties had free rein to innovate on video cards without worrying about IBM stepping on them (IBM stuff was also stupidly overpriced, so they weren't really an option for most consumers). Once IBM lost control of "the PC" it was just a free-for-all, with hardware that mostly, sometimes, worked together with everything else, if you were lucky and had the right software. It's a little annoying with the current Intel/AMD/Nvidia/MS control over everything PC, but it's sure a hell of a lot easier to just buy things and have them work as advertised.

(y) Sorry, only saw your reply now. It's weird, I swear your reply wasn't there the last time I posted here.
 

malor

Ars Legatus Legionis
16,093
Yes, those other early machines had integrated video chips (before the 286 I had an 8-bit PC, an amazing little machine, so limited in its graphics capabilities but at the same time so awesome, at least for little-kid me). I was thinking specifically of the "modern" x86 machines, 286 and later, from VGA onwards, but wasn't clear when writing it.

ETA: Basically, my curiosity was about when the dGPU became a thing. From what I've been reading, it was when different companies started trying to expand the VGA standard, which started the arms race that took us here.
IIRC, dGPUs really started in the Windows 3.1 era, when cards started including hardware acceleration functions for Windows draw calls. Hardware-drawn windows were much snappier than software-drawn ones.

I'm a little blurry on the various card generations, but I think that would have been early SVGA, 640x480x256 and higher. Windows draw acceleration was routine by 1024x768x256.

3D acceleration was a whole different kettle of fish. I believe it started later than Windows acceleration, and at first was only 3dfx with their (DOS-only??) Glide system.

I think 3dfx was the first company to try to unite the two forms of acceleration, and ultimately that sunk them, because they put too much effort into accelerating oddball Windows 2D functions, instead of iterating on their core 3D technology. NVidia kinda stomped all over everyone with raw raster fill rate, which was really good for early Man in Warehouse games. They weren't as good as 3dfx at pushing lots of polygons, but not very many games used tons of small polygons, so 3dfx's strengths were much less apparent.
 
I think 3dfx was the first company to try to unite the two forms of acceleration, and ultimately that sunk them, because they put too much effort into accelerating oddball Windows 2D functions, instead of iterating on their core 3D technology. NVidia kinda stomped all over everyone with raw raster fill rate, which was really good for early Man in Warehouse games. They weren't as good as 3dfx at pushing lots of polygons, but not very many games used tons of small polygons, so 3dfx's strengths were much less apparent.
The 3dfx Voodoo Rush launched in October 1996 and the Riva 128 launched in April 1997; however, the Voodoo Rush was a poor product that did not do well. It had abysmal 2D acceleration that was well behind competing 2D solutions, and the 3D part was less powerful than a Voodoo 1 due to sharing resources with the 2D part. It was a bad frankenproduct.

The 3dfx Voodoo Banshee launched in June 1998 and was far better than the Rush; however, Nvidia's Riva TNT launched in March 1998, and the Riva 128 was already on the market.
 

cogwheel

Ars Tribunus Angusticlavius
6,691
Subscriptor
I think 3dfx was the first company to try to unite the two forms of acceleration
Nope, far from it. Before 3dfx released the Banshee, ATi had four generations (the original Mach64 in the 3D Rage card, the Rage2 in the 3D Rage IIc card, the Rage3 in the Rage Pro card, and the Rage4 in the Rage 128 cards); S3 had both the ViRGE (multiple versions) and the Savage 3D; Matrox had the Millennium (whether it counts as 3D is debatable), Millennium II, and Millennium G200, plus the Mystique equivalents; Rendition had the Vérité V1000 and V2x00; Number Nine (remember them?) had the Imagine 128 II (also debatable) and the Revolution 3D/Ticket to Ride; and finally Nvidia had the NV1 (a 2D/3D chip, but the 3D was weird compared to its contemporaries), the Riva 128, and the Riva TNT.

Amusingly enough, the Voodoo 1 wasn't even the first consumer 3D card, either. The Nvidia NV1 (Diamond Edge 3D) was probably the first PC 3D accelerator, but the PowerVR PCX1 (VideoLogic Apocalypse 3D) was slightly ahead of the Voodoo 1 and the Rendition Vérité V1000 (Sierra Screamin' 3D) released at the same time.

The Trio name was used twice, for arguably different products. The Trio64 family was a late, highly integrated 2D-only graphics chip (for example, it didn't need an external RAMDAC), while the Trio3D was a late variant of the ViRGE used for very low-end cards (it was contemporary with the significantly more advanced Savage 3D).

3dfx Voodoo Rush launched October 1996
This shouldn't count because it was nothing but a slightly modified Voodoo 1 chipset (doing only the 3D work) on the same PCB as a low end 2D graphics card. The Banshee was the first 3dfx chip that could do both 2D and 3D acceleration.
 

malor

Ars Legatus Legionis
16,093
Riva TNT.
Thanks for the corrections.

I definitely remember that this chipset was NVidia's first big stompy move in the nascent 3D market, where they began to make everyone but ATI irrelevant. Almost everyone who really paid attention to hardware wanted this chipset, and they just kept improving after that.

The TNTs (and then the first GeForces, I think?) ended up so dominant that the game Total Annihilation: Kingdoms was criticized very heavily for running poorly. It did, in fact, run pretty badly on the TNT series. It ran beautifully on 3dfx chips, but so few people had those by then that TA:K ended up with a poor reputation and low sales. This is related to my observation upthread: early NVidia chipsets were the masters of fill rate, but poor on complex geometry, and that's what TA:K was doing. If you wanted your game to be successful in those years, it already needed to run well on NVidia cards.
 

cogwheel

Ars Tribunus Angusticlavius
6,691
Subscriptor
This is related to my observation upthread: early NVidia chipsets were the masters of fill rate, but poor on complex geometry, and that's what TA:K was doing. If you wanted your game to be successful in those years, it already needed to run well on NVidia cards.
Uh, you're talking about pre-GeForce cards here - no consumer graphics card did any geometry processing; they were all pure fill rate and nothing else. Differentiation was primarily in the configuration of their texture pipelines, e.g. the Voodoo2 was one pipeline that could do two textures per pixel per pass, while the Riva TNT had two pipelines which could each do one texture per pixel per pass, so on single-texture graphics the TNT was theoretically much faster, but in reality with multitexturing games they were much closer in performance. There was also color depth - the 3dfx stuff was 16-bit color output only (though it was computed at a higher precision internally and then downsampled to 16-bit when written to the framebuffer), while Nvidia's stuff could natively do 32-bit color. 3dfx had a notably better 16-bit color implementation, but they clung to it into the Voodoo3 days when everyone else was already moving to 32-bit.
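To put rough numbers on that pipeline difference, here's a back-of-the-envelope sketch in Python. The pipeline/TMU counts are the ones described above; the ~90 MHz clocks, the pixels_per_clock helper, and the simple "extra texture layers mean extra passes" model are illustrative assumptions rather than exact specs, so treat the absolute numbers as ballpark only.

```python
from math import ceil

def pixels_per_clock(pipelines: int, tmus_per_pipeline: int, textures_per_pixel: int) -> float:
    """Pixels emitted per clock when every pixel needs the given number of texture layers."""
    # Texture layers beyond what the TMUs can apply in one go force extra passes (or loopbacks).
    passes = ceil(textures_per_pixel / tmus_per_pipeline)
    return pipelines / passes

# Pipeline/TMU layouts from the post above; ~90 MHz core clocks are an assumption for illustration.
chips = {
    "Voodoo2 (1 pipe x 2 TMUs)": (1, 2, 90e6),
    "Riva TNT (2 pipes x 1 TMU)": (2, 1, 90e6),
}

for name, (pipes, tmus, clock) in chips.items():
    single = pixels_per_clock(pipes, tmus, 1) * clock / 1e6
    dual = pixels_per_clock(pipes, tmus, 2) * clock / 1e6
    print(f"{name}: ~{single:.0f} Mpixels/s single-texture, ~{dual:.0f} Mpixels/s dual-texture")
```

On this toy model the TNT's two pipelines roughly double single-texture throughput, while both designs land in the same place once every pixel needs two texture layers - which is exactly why multitexturing games narrowed the gap.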

3dfx's downfall was as much their own making (the underwhelming Voodoo3, and a late Voodoo5 that was supposed to compete with the GeForce 256 but ended up going against the GeForce 2 and losing) as it was Nvidia making solid products.
 

malor

Ars Legatus Legionis
16,093
Where did glide fit into all this? Was that 3dfx's proprietary tech?
IIRC, Glide was a limited subset of OpenGL, altered somewhat to match the underlying 3dfx hardware. I think maybe the first versions were DOS-only, and then they ported it to Windows. I'm really not sure about that, however, it could have been Windows-native from the start. ISTR Tomb Raider, the first real Glide game I played, running from DOS. (and holy shit was that an upgrade vs. software rendering.)

Either way, most game development moved to Windows, and Glide was a strong competitive advantage for a while. But then other companies started supporting OpenGL, which I gather was a lot more powerful and flexible. NVidia's OpenGL implementations have always been exceptionally performant and accurate, so Glide died out, as it had far fewer features and required the hardware to do things in specific ways.

From the Wikipedia article, apparently 3dfx was super litigious about Glide wrappers and emulators, but then open sourced it just before being acquired by NVidia. That's a lot of why those old Glide games still work well. DOSBox-X, for example, emulates a 3dfx card, which would have been a lot harder without the open source release of its driver software.
 
IIRC, Glide was a limited subset of OpenGL, altered somewhat to match the underlying 3dfx hardware. I think maybe the first versions were DOS-only, and then they ported it to Windows. I'm really not sure about that, however, it could have been Windows-native from the start. ISTR Tomb Raider, the first real Glide game I played, running from DOS. (and holy shit was that an upgrade vs. software rendering.)
Thinking back on playing Extreme Assault, and after a cursory glance at the Glide wrapper community [1, 2, 3], it looks as if there never was real DOS support for Glide; instead, some DOS games received custom executables to call the Windows drivers. How they called the drivers, and exactly which ones were called, was inconsistent, leading to issues with wrappers today.

Wild times.

This also reminds me that I need to play Extreme Assault again.
 

nathan a.

Ars Scholae Palatinae
1,267
Memories. Thinking back to when I surrendered and swapped out my 4+8MB 3dfx Voodoo2 for a 16MB Riva TNT. I felt like a traitor but any advantage in Action Quake ... ;-)

edit: I was misremembering the Voodoo1 vs. 2 memory sizing... what an experience that first 3dfx card was. The first time I launched GLQuake, I think I just watched the little dancing fireball for a good long time. :D
 

Ulf

Ars Legatus Legionis
12,551
Subscriptor++
IIRC, Glide was a limited subset of OpenGL, altered somewhat to match the underlying 3dfx hardware. I think maybe the first versions were DOS-only, and then they ported it to Windows. I'm really not sure about that, however, it could have been Windows-native from the start. ISTR Tomb Raider, the first real Glide game I played, running from DOS. (and holy shit was that an upgrade vs. software rendering.)
On my Mac at least, running Unreal in RAVE meant 512x384 with half-scan lines enabled. Going to a 12 MB Voodoo2 jumped me to full 800x600. It was ... insane.
 
I used to run a network of 3D gaming sites in that time period and reviewed every 3D accelerator from the S3 ViRGE and Rendition Vérité through the GeForce 4 era, so I have a lot of nostalgia for 3dfx myself. My parents still have that stuff boxed in the basement - I feel like the Canopus Pure3D in particular will be worth something someday. Canopus was so choice.

A Matrox Millennium G400 would probably be worth something too; I can't imagine they sold too many of 'em. Hardware moved so fast back then - it was an exciting time. Every year speed doubled.
 

grommit!

Ars Legatus Legionis
19,295
Subscriptor++
I don't pretend to understand most of this analysis, but it doesn't sound like further nvidia driver updates are going to improve Starfield performance much:
In summary there’s no single explanation for RDNA 3’s relative overperformance in Starfield. Higher occupancy and higher L2 bandwidth both play a role, as does RDNA 3’s higher frontend clock. However, there’s really nothing wrong with Nvidia’s performance in this game, as some comments around the internet might suggest. Lower utilization is by design in Nvidia’s architecture.
 
My read was that driver updates alone won't be sufficient; it will take Nvidia's developer relations team working with Bethesda to change the rendering pipeline on Nvidia hardware to close that gap. That sounds scary, but it's what happens with pretty much every game other than Starfield, well before release. Since Nvidia is 80%+ of the market, landmark games never skip it. Well, never say never.
 

hobold

Ars Tribunus Militum
2,657
AMD's and Nvidia's architectures are not drastically different from each other anymore these days. Sure, there are differences in details, but generally it should be extremely rare that any one game engine is ever skewed drastically to one side.

So rare, in fact, that it should be almost impossible to design this on purpose. Because if you exclusively target the glass chins of one architecture, you probably hurt performance on the favoured architecture as well. One could design a very biased benchmark, but its algorithms would be so contorted that it wouldn't resemble real game engines.

In other words, IMHO Nvidia can catch up (if they put in the effort), even though the Chips and Cheese analysis seems pretty good.
 

grommit!

Ars Legatus Legionis
19,295
Subscriptor++
It's been a decade since I last had an AMD card, and I'm considering getting a 7800XT. However, the driver package looks very different now.
Decided that I didn't want to deal with AMD's drivers again, and with RTX 4070 prices dropping $40 locally, I picked one up.
Went from this - note how I'm entirely GPU bound
[attached image: SOTR 3060 Ti.jpg]

To this
[attached image: SOTR 4070.jpg]

For comparison, my old i7-7700K and GTX 1070 averaged 50 FPS
Granted, this is a five year old game, but I may need to reconsider a CPU upgrade :unsure:

FWIW, I did notice that the Asus Dual RTX 4070 (dual-fan) runs quieter than the triple-fan Asus TUF RTX 3060 Ti in the same benchmark.
 

malor

Ars Legatus Legionis
16,093
Granted, this is a five year old game, but I may need to reconsider a CPU upgrade :unsure:
Your 'min' rates on both CPU and GPU are very close, so you're probably very near the correct overall balance ATM. A faster CPU might not do that much; you'd gain a little, but would then be bound on the GPU again almost at once.
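As a tiny sketch of that reasoning (the effective_fps helper and the frame times are made up for this example, and real engines overlap CPU and GPU work, so it's a simplification): delivered FPS is gated by whichever of the two is slower each frame, so when their minimums match, a CPU-only upgrade barely moves the result.

```python
def effective_fps(cpu_frame_ms: float, gpu_frame_ms: float) -> float:
    """Delivered FPS if the slower of the CPU and GPU work gates each frame (simplified model)."""
    return 1000.0 / max(cpu_frame_ms, gpu_frame_ms)

# Hypothetical numbers: CPU and GPU minimums nearly equal means a balanced system...
balanced = effective_fps(cpu_frame_ms=11.0, gpu_frame_ms=11.5)    # ~87 FPS
# ...so a faster CPU alone barely helps; you're GPU-bound again almost immediately.
cpu_upgrade = effective_fps(cpu_frame_ms=8.0, gpu_frame_ms=11.5)  # still ~87 FPS
print(f"balanced: {balanced:.0f} FPS, after CPU-only upgrade: {cpu_upgrade:.0f} FPS")
```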

I don't know if this is still true, but at least two or three years back, NVidia GPU drivers tended to run faster on Intel than on AMD hardware. You got a pretty noticeable boost simply from running Intel, even chips that were noticeably inferior to AMD in other ways. I've sometimes wondered if that was NVidia trying to subtly sandbag their main competitor's other market.

So, if you're presently AMD-based, upgrading to a faster Intel chip could potentially give you the double whammy of running both the game code and the driver code faster, giving you boosts on both sides of the ledger.
 
I don't know if this is still true, but at least two or three years back, NVidia GPU drivers tended to run faster on Intel than on AMD hardware. You got a pretty noticeable boost simply from running Intel, even chips that were noticeably inferior to AMD in other ways. I've sometimes wondered if that was NVidia trying to subtly sandbag their main competitor's other market.
This is just ridiculous nonsense! Remember nForce?! It was a chipset primarily designed for AMD CPUs, and it only supported Intel in the original Xbox and, toward the very end of its life, in consumer motherboards.

AMD and their K7 & K8 CPUs would have had much less success than they experienced without nForce supporting them!
 

grommit!

Ars Legatus Legionis
19,295
Subscriptor++
Your 'min' rates on both CPU and GPU are very close, so you're probably very near the correct overall balance ATM. A faster CPU might not do that much; you'd gain a little, but would then be bound on the GPU again almost at once.
This does seem to be an outlier in terms of CPU usage among the games I already have installed. The concern is Starfield, which is reportedly GPU-intensive in open areas but CPU-intensive in cities. I might be able to live with the latter if there isn't much combat, but I'd prefer a more consistent experience across the whole game. I'll have to see whether the Raptor Lake refresh is a worthwhile upgrade, but it's going to need to be substantial to justify the cost of a platform change.

Anyhow, back to GPUs - here's the benchmark result for the GTX 1070:
[attached image: 1070 sotr.jpg]
How times have changed :sneaky:
 

IceStorm

Ars Legatus Legionis
24,871
Moderator
I'll have to see whether the Raptor Lake refresh is a worthwhile upgrade, but it's going to need to be substantial to justify the cost of a platform change.
If you're on a 5600X, at this point it's not worth buying anything new.

  • AM5 is still shaking out, all these months later. Might as well just wait for 8000 series vcache parts.
  • Raptor Lake Refresh is Socket 1700's last hurrah and won't be much faster than Raptor Lake.
  • You're on a 4070, not a 4090. Your GPU runs out of gas around the same time your CPU does. If you had a 4080 or 4090, I'd say upgrade.

I have played Starfield across a 5800X3D/4090, 5600X/3060 Ti, 12700K/3080 Ti, 5800X3D/6700 XT, 8750H/mobile 2070, the Steam Deck, and occasional abortive sessions on a 12600K/Arc A770 LE (game crashes with Arc), with all but the Steam Deck at 1080p. While the 5800X3D/4090 handles it best, nothing is exceptionally "good" at Starfield. The 5600X/3060 Ti handle it decently enough at 1080p with a DLSS mod and equivalent settings to DLSS quality or performance tiers. It didn't go below high 30's for me, and it's when it goes down to low 30's or into the 20's where gun combat can feel "bad". High 30's and up, it's fine.
 

grommit!

Ars Legatus Legionis
19,295
Subscriptor++
If you're on a 5600X, at this point it's not worth buying anything new.

  • AM5 is still shaking out, all these months later. Might as well just wait for 8000 series vcache parts.
  • Raptor Lake Refresh is Socket 1700's last hurrah and won't be much faster than Raptor Lake.
  • You're on a 4070, not a 4090. Your GPU runs out of gas around the same time your CPU does. If you had a 4080 or 4090, I'd say upgrade.
Yeah, I'm leaning towards a 5800X3D to keep me ticking over until 2025. Neither AM5 nor Socket 1700 seems to offer enough to justify the hassle.
I have played Starfield across a 5800X3D/4090, 5600X/3060 Ti, 12700K/3080 Ti, 5800X3D/6700 XT, 8750H/mobile 2070, the Steam Deck, and occasional abortive sessions on a 12600K/Arc A770 LE (game crashes with Arc), with all but the Steam Deck at 1080p. While the 5800X3D/4090 handles it best, nothing is exceptionally "good" at Starfield. The 5600X/3060 Ti handle it decently enough at 1080p with a DLSS mod and equivalent settings to DLSS quality or performance tiers. It didn't go below high 30's for me, and it's when it goes down to low 30's or into the 20's where gun combat can feel "bad". High 30's and up, it's fine.
Any thoughts on my perception that there isn't much combat in the cities (i.e. the CPU intensive areas)?
 

IceStorm

Ars Legatus Legionis
24,871
Moderator
If you can nab a 5800X3D for $270 or less, I'd go for it (and did a month ago).

There will be at least one instance of combat in a large open city area for the story, from what I've seen in playthroughs. Outside that, unless you personally start shooting up the joint, there won't be. Combat in "cities" is in side levels, not the main thoroughfare. You can also just drop the crowd density to low before landing or zoning into a large city area if it seems to be bogging down.
 
nForce was mostly before AMD bought ATi, though. There were persistent rumors that AMD and Nvidia would merge back then, and AMD only bought ATi when those talks fell through.
What you say is correct, but it doesn't affect what I stated at all. Never mind the fact that a lot of enthusiasts in the DIY PC sector build AMD-based PCs and then put high-end Nvidia cards in them; Nvidia would be hurting their own customers by nerfing their driver performance on those machines, and what is more, it would get noticed and reported on by the likes of Hardware Unboxed, Gamers Nexus, etc., who test on both AMD and Intel rigs.

As I said in my prior post, the whole concept is ridiculous nonsense with no basis in reality.
 

mpat

Ars Praefectus
5,951
Subscriptor
What you say is correct, but it doesn't affect what I stated at all. Never mind the fact that a lot of enthusiasts in the DIY PC sector build AMD-based PCs and then put high-end Nvidia cards in them; Nvidia would be hurting their own customers by nerfing their driver performance on those machines, and what is more, it would get noticed and reported on by the likes of Hardware Unboxed, Gamers Nexus, etc., who test on both AMD and Intel rigs.

As I said in my prior post, the whole concept is ridiculous nonsense with no basis in reality.
I don't have any information one way or the other, and even if the drivers ran slower on AMD hardware, I would blame laziness or prioritization before pettiness. If there are more Intel chips in enthusiast rigs - as was certainly the case before the Ryzen launch, not certain about right now - it makes sense for Nvidia to spend more time optimizing for that case. I merely pointed out that the nForce chipsets shouldn't be taken as an indication one way or the other in 2023, because there has been a lot of water under those bridges by now.
 
I don't have any information one way or the other, and even if the drivers ran slower on AMD hardware, I would blame laziness or prioritization before pettiness. If there are more Intel chips in enthusiast rigs - as was certainly the case before the Ryzen launch, not certain about right now - it makes sense for Nvidia to spend more time optimizing for that case. I merely pointed out that the nForce chipsets shouldn't be taken as an indication one way or the other in 2023, because there has been a lot of water under those bridges by now.
I think AMD supporters are way too quick to ascribe the actions of AMD to AMD's competitors (it was AMD that blocked DLSS in Starfield; Nvidia and Intel did nothing wrong or untoward).

Also, it took until Zen 2 to truly equal Intel processors in gaming - any slowdown vs. Intel is/was simply due to AMD releasing immature versions of Zen into the market, and that is something you need to blame them for, not Intel and not Nvidia!
 
Nice thread. My first machine was an Acorn Electron, which probably nobody on Ars has heard of. It had a whopping 32KB of RAM and 32KB of ROM to hold the OS, though at that time I'm not sure if the term 'OS' was a thing for tiny computers like the Electron.

It had some nice games which I played the shit out of, loaded via a standard audio cassette in a standard music tape player, with the headphone output cabled to the Acorn's input port. I built up quite a nice little collection of game tapes, some pirated, which I kept in a little suitcase. My favourite game was Elite, which I played obsessively. As a deaf child I didn't have a lot else to do - I was not allowed to learn signing, so few other social activities were open to me *.

(I was a Kickstarter funder of the modern version but somehow have never played it for more than 5 minutes - just can't get into it.)

Elite's concept of almost infinite worlds and full 3D space combat was somehow compressed into just 32KB. All the spaceships even had their own combat AI and reacted in different ways, with different skill levels, to combat or intimidation. I remember fighting 6 or 7 enemy ships at the same time in 3D space, and I look back and wonder how it was done.

The Acorn was produced by Acorn Computers, a tiny obscure British outfit, but through various evolutions their tech became part of a slightly better known outfit called ARM which some Ars people have likely heard of. Acorn was the A in ARM.

Now my latest machine is a Mac Mini M2 Pro so I feel I’ve come full circle in some way and that there is a tiny bit of connection between the ancient cute mini computer in my attic and the shiny new cute box on my desk.

The images of GPUs in the first pic in this thread brought back some memories. When I was young and poor and radical, I used to put together PCs for various protest groups. People would bring me piles of parts scavenged from computers thrown out by companies or left on the street by individuals, and I would try to reassemble them, sometimes successfully, into PCs for various activists. So I've probably touched most of the graphics cards listed in this thread. I remember the Voodoo cards and the Matrox cards. I used to skim off the best-performing card and replace whatever was in my own PC with it.

Games I played in this era were stuff like Doom, Quake, Descent, Carmageddon, Age of Empires. Not a lot else. This was before flash games were a thing and I didn’t have strong enough internet or fast enough reflexes or a good enough computer for multiplayer. I remember when Wi-Fi came out and I actually made a cantenna from a Pringles can in one of these activist workshops - I don’t think I ever actually used it - but there were some heady new ideas going around about community mesh networks.

Now in the modern era I have my Mac Mini M2 Pro, a homebrew mini-ITX Windows box with a 1060 which hasn't been used in around a year, and a Founders Discount sub to the GeForce Now 4080 tier, which I rarely use as I can't seem to get into modern games. I should cancel the GFN sub, but having it available with my gigabit optic fibre connection makes me feel good. I love having the option of booting up some 4K splendour at max GPU settings, so I'm keeping it for a while.

( * I’m a fluent signer now and work professionally with sign languages so a big FUCK YOU to all the people who told my parents that deaf kids shouldn’t learn signing. There’s still a lot of this around even in 2023.)
 

malor

Ars Legatus Legionis
16,093
Nice thread. My first machine was an Acorn Electron, which probably nobody on Ars has heard of. It had a whopping 32KB of RAM and 32KB of ROM to hold the OS, though at that time I'm not sure if the term 'OS' was a thing for tiny computers like the Electron.
Heh, you sound like a computer hipster.

The Electron was a cut-down BBC Micro, and most people have probably at least heard of that machine. They may not know that it was probably the overall best of the 8-bit computers, though we couldn't meaningfully use them in the US, because they were pretty closely married to PAL video. They tried an NTSC version of the Beeb, but hardly any software would run correctly because the screen was smaller, so they failed pretty badly here.

The ROMs in those machines were incredibly good, apparently mostly written by Sophie Wilson. They offered a plethora of services, and had ROM sockets with pluggable programs and even filesystems. And the machine had a 2MHz 6502 with 4MHz RAM, so it did the typical CPU/video chip interleave, but twice as fast as American computers did it. And then it had BBC BASIC, also a Wilson thing, which was much faster than any other interpreted BASIC: combined with the double-speed CPU, it was a real monster for performance. It made the IBM PC look very slow indeed from the point of view of a BASIC programmer.

And then it had the Tube slot, where you could plug in a new processor and turn the rest of the computer into an I/O device to serve it. If you'd followed the programming guidelines and called into the ROM at the correct entry points for the services you wanted, the replacement ROM on the Tube card would call into the main computer to do whatever needed to be done, so that most programs would run fine, but much faster. The first versions of the ARM chips were made for the Tube.

The Electron had the same software library, but its architecture was cut way down, so that it had only a small amount of fast RAM, and no Tube slot. It would run a lot of the Micro's rich software library, but substantially slower than the original machine. But it was a lot cheaper; the main Micro was a very expensive computer.
 