Isaac Asimov, Robots, AI, and Luddism

September 24, 2025

[Image: book cover for 'The Caves of Steel']

I just finished reading The Caves of Steel (1953) by Isaac Asimov. I really enjoyed it. I guess it's a little weird to write a review for a 70-year-old book, so instead I decided to put together whatever the hell this post is. We go over the book, but also its implications for modern society, and draw parallels to the 19th-century Luddite movement that arose during the Industrial Revolution. Hope you enjoy, it's a little non-traditional.

The plot

So this book was actually my first Asimov novel. The plot follows one Lije Baley, a detective who lives and works in the heavily socially stratified big city of New York. The city of the future is so dense and compact that most people don't have kitchens or bathrooms in their own houses or apartments, and must go to communal facilities instead. The entire city is encased in a dome, shielded from the air and sunlight of the outside world.

Understandably, this unnatural way of living lends itself to some backlash movements. The Medievalists, as they're called, want a return to a time they refer to as the "Medieval" age. It becomes clear that what they mean by this is the 20th century or so, which to them is long, long past.

This society is slowly being introduced to some very primitive robots — working in retail centers, and one in the detective office. All of them are quite dull, unable to perform advanced tasks. Even still, they represent a threat to the labor force, especially in a society like this one, where getting fired means descent from all the comforts in life that you know, total abandonment at the bottom of society. (Are we sure this was written in the 1950s and not the 2020s?) Anyways, this fear of unemployment breeds a sense of Luddism in the minds of many Medievalists and even regular folks.

https://thetechbubble.substack.com/p/on-the-origins-of-dunes-butlerian

On top of all this, the culture of Earth must tolerate a pompous, upper-class society called the "Spacers", who live, you guessed it, up in space right above Earth. The Spacers are of high social standing, and tensions between the cultures run high, not least because of the Spacers' push to introduce robots into Earth culture.

It is inside this context that one of the Spacers is murdered in his own home in Spacetown. The Spacers suspect it was someone from Earth. A joint investigation is started, with Lije Baley, an Earthman, partnered up with R. Daneel Olivaw, a very advanced robot from Spacetown. In fact, Daneel even passes for a human under reasonably close inspection.

The investigation winds through Medievalist secret societies, conspiracies, corruption, and Luddite aversion to the automation technology that threatens the livelihood of the working class. By the end of the novel, Lije's views change: from vague Medievalist sympathies to a belief in cooperation between robots and humans, a transition to a so-called "C/Fe society" (carbon-based life integrating with iron-based life).

Luddism

The word Luddism is, of course, never actually used within the novel, but I mention it because the themes are overtly present in the Medievalist backlash against robots working positions once held by humans. In the real world, the Luddites were a group of textile workers during the Industrial Revolution. They stood opposed to the ways in which textile automation technology was being used. More specifically, the machines were being used to take power away from the textile workers, and they turned out lower-quality products for consumers than handwork had. The Luddites had no issue with the idea of automation. What they didn't like was the way it was being used by the owning upper class to take power away from the poor working class.

In truth, Luddism and science fiction concern themselves with the same questions: not merely what the technology does, but who it does it for and who it does it to.

 — Cory Doctorow, Locus Magazine

https://locusmag.com/feature/cory-doctorow-science-fiction-is-a-luddite-literature/

Even back in the 1950s, an awareness of where computers were headed was present in the minds of many, who perhaps had reason to worry about being replaced and laid off. In a certain capacity, those fears were justified: the computer would become more and more advanced, taking a seat in various positions once held by humans. Of course, at the same time, it brought in a whole new set of positions needing humans to fill them. So life goes on.

The bringing in of the computer was in some ways used as a threat to workers, but for the most part, it was simply done out of pragmatism. Humans tend to be pretty bad at performing repetitive, pattern-based actions that require no discretion. Computers, on the other hand, are pretty good at these kinds of activities. What computers aren't good at is activities that require a certain level of situational context and discretion. A computer can't determine whether you're returning this library book late for good reasons or not. It just knows that it's late. That's why we created self-checkout machines at the library, but still keep a staff of librarians who can help you escape the strict rigidity of the computer bureaucracy. They can listen to you explain that your car broke down so you couldn't get the book returned, and they can override the fees on your account, granting you amnesty. A computer is not suited to that kind of task.
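To make the contrast concrete, here's a toy sketch — entirely my own invention, not from the book or any real library system (the function names and the fine rate are made up) — of a rigid rule versus human discretion:

```python
# Toy illustration: a rule engine only sees the data it was given.
from datetime import date

DAILY_FINE = 0.25  # hypothetical rate, dollars per day

def late_fee(due: date, returned: date) -> float:
    """The machine's view: the book is either late or it isn't."""
    days_late = (returned - due).days
    return max(days_late, 0) * DAILY_FINE

def librarian_override(fee: float, reason: str) -> float:
    """The human's view: discretion the rule above can't even represent.
    A person can weigh 'my car broke down' and grant amnesty."""
    return 0.0 if reason else fee

fee = late_fee(date(2025, 9, 1), date(2025, 9, 11))
print(fee)                                        # 2.5 (ten days late)
print(librarian_override(fee, "car broke down"))  # 0.0 (amnesty granted)
```

The point isn't that the override couldn't be coded up; it's that deciding *when* to apply it requires judgment the rule system has no input for.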

Right now, AI is being peddled as a technology that can replace humans in their everyday jobs. At the same time, AI is pretty bad at any task where it needs to replace a human. Humans perform too many small, discretionary decisions that current AI models are simply not capable of even realizing they should consider. The models just don't compare to the human mind when they're made to imitate the human mind. That's not what this tool is ever going to do, not in the near future anyways.

But AI is not useless. AI has potential for assisting humans, when it's done organically. When your boss fires half your department, but tells you that you need to offset the workload onto "AI integrations" or something like that, the AI is going to perform badly. The department is going to put out shit content compared to what they used to. On the other hand, when you just have a thing you need to do, and you realize that an AI can make a template for this thoughtless task to help speed it up, that's a genuine boost. But those boosts are, much to the chagrin of the employer class, not actually that dramatic, and definitely don't create the kind of productivity boosts that allow you to fire half of every department.

In his blog, Cory Doctorow often explores the ideas of "Centaurs" and "Reverse-Centaurs" when it comes to automation technology. A centaur is a person who, in the natural course of their duties, finds a productive way to use the automation technology to offload some of their own work, making them more productive at the job they've always been doing. It lets them get more done, but at the end of the day, they are free to use the automation tool where they feel it is appropriate, and abandon it in instances where it is not useful or needed. They are given control and discretion over its use. A centaur is a human standing atop a robot body which they can control.

https://pluralistic.net/2025/09/11/vulgar-thatcherism/

A reverse-centaur, on the other hand, is a squishy human being piloted and controlled by the robot! The reverse-centaur is a worker whose boss just fired a bunch of their coworkers, and expects them to pick up the slack by using AI. When the quality of the work goes down, the boss will either not care, or, worse still, they might blame the remaining workers for being so careless, despite the ludicrously cramped deadlines and under-staffing.

Weren't we supposed to be talking about Isaac Asimov?

Right. Back to the book.

It's important to remember that the conclusion of the book involves an acceptance that humans need to learn to cooperate with robots. Against the backdrop of the 1950s, it might be easy to mistake the calls for robot integration for the calls for racial integration going on at the time. This was my initial reading of the themes of the so-called "C/Fe culture" the pro-robot faction was calling for. Upon further inspection, it becomes clear that this is not the true nature of things, and the comparison really isn't fair. These robots do not form a culture of their own. The robots are not sentient, and they do not pretend to be. The robots are machines, and they have no desires of their own beyond serving humans.

Despite this, the characters in the book often saw the "C/Fe culture" stuff as being a cultural mixing, notably the Medievalists and others of the anti-robot factions. They feared that these newfangled robots would demand to fill human jobs and be treated with humanity. All this on top of their utter superiority at performing certain kinds of tasks that humans could never hope to achieve (think: basically anything a computer can do better than a human, but then give that computer opposable thumbs and a body to walk around in).

This is the mistaken way that Earth society looks at robots. They see them as a threat, as some kind of culture of their own. Oftentimes within the book, Daneel Olivaw — the advanced robot from Spacetown — shocks the humans around him with his utter lack of care for human prejudice against robots. He is not sentient. He has no feelings to hurt. When Lije Baley admitted that he was prejudiced against robots, Daneel said it didn't matter, so long as they could work together effectively. The robot had no sense of cultural identity that needed defending from trampling.

At one point, a Medievalist radical took a swing at Daneel. The robot didn't see it coming, but did his best to back away to soften the blow — not to protect himself, but to protect the attacker's hand from being hurt by his metal frame. He didn't care that he was being attacked; he just needed to follow the First Law of Robotics: to protect the human.

At other times, people would express hatred towards the more primitive robots in front of Daneel, then apologize, as if they had offended him. He told them he didn't care, he couldn't be offended even if he tried — he's a robot.

What does this tell us about automation and technology?

The answer is that technology does not inherently threaten humanity. It simply doesn't. Technology is created by humans, in order to best suit humans. Within this story, we are to believe that the people who create and distribute the robots (Spacetown) genuinely hold the best interests of the Earthmen in mind when creating their robots. This is the premise upon which the acceptance of all of this rests. It suits the story well, though it would be incorrect to say it is the only way it could have been.

If, for instance, the story took place in a society with a fairly strong working class, in which the owning class wanted to assert its dominance by breaking up unions, firing workers for expressing discontent with their conditions, and otherwise putting down the labor force, the story might be different. It would then be more plausible to say that the owning class is pushing robots onto society in order to take agency away from the working class. In that case, the argument that "by simply cooperating with the robots, humanity (and the working class) can be better off" would fall flat. You cannot cooperate with a technology that was expressly designed to destroy you.

And this is where we stand in our real world in the year 2025. We must ask ourselves: who is designing these so-called "Artificial Intelligence" tools? Are they people who hold the best interests of regular people at heart, or are they corporate tech monopolies, which have demonstrated time and time again that they will sacrifice the well-being of countless workers to further their power, wealth, and influence?

The AI technologies we are being fed at the moment are created by money-hungry monopolists who would love nothing more than to have us all at their mercy. They are forced upon us (often, though not always) by bosses who want to speed up our work while sacrificing both output quality and our well-being, all in the name of increased profits. The result is a decreased quality of life for both consumers and employees.

But it doesn't have to be this way, not inherently. What we are witnessing is a result of our economic system, not of the technology itself. The extreme stratification of our economic environment is what leads to all of this. AI models are massive and bulky, and pretty damn difficult to run on cheap hardware. But as economic incentives push corporations to get AI models running on consumer hardware (be it in the chase of the AI bubble, or simply a desire to get AI spyware running on everybody's smartphone), we're seeing reasonably competent models pop up that can run on low specs.

What we absolutely need right now is for more of these reasonably sized tools to keep coming out. Open-source models, runnable by anybody. Tools that aren't gatekept behind a single service provider who can refuse access for particular uses it deems inappropriate. We've got a decent number of tools like this, but it's limited. To be fair, the actual utility of AI in general is limited. But still, the idea that this technology keeps coming out and we're all just going to be better off for it, even though it's all held and controlled by our corporate overlords, is ridiculous. If this is going to be a Good Technology, then it needs to be a technology that People Actually Own, not one that they rent access to, one prompt at a time. Better still would be for it to be designed by people who share an interest in the collective well-being.

I genuinely think that machine learning (it's not actually AI, btw; that would require a level of 'intelligence') has a lot of real potential, especially the LLMs. It's just that everybody has their heads so far up their own asses about whatever the hell the AI bros think it's going to revolutionize that they can't see the potential of what it might actually do if we just stopped fantasizing about firing everyone for ten seconds.

Wrap up

Well, not exactly a traditional book review, but I feel like I covered the important parts: major themes, general plot, the book's implied conclusions about technology, and a modern reading of the text. I hope y'all enjoyed this less-than-traditional post from me. I enjoyed reading the book, and it was interesting to compare how robots were imagined before the major advent of the computer with how we think of them in pop culture now.
