
In 1987, long before artificial intelligence became the mainstream obsession it is today, IT world convened a roundtable to discuss what was then a new and unresolved question: how AI might interact with database systems.
What makes the discussion remarkable in hindsight is not the optimism around AI, which was common at the time, but Oracle co-founder Larry Ellison’s repeated emphasis on limitations.
While others described AI as a new architectural layer or even a “new species” of software, Ellison argued that intelligence should be applied sparingly, deeply integrated, and never treated as a one-size-fits-all solution.
“Our primary interest at Oracle is applying expert systems technology to the needs of our own customer base,” Ellison said. “We are a database management systems company and our users are leading systems developers, programmers, systems analysts and MIS directors.”
This framing set the tone for everything that followed. Ellison was not interested in AI as something new for the end user, or as a standalone category. He saw it as an internal tool, intended to improve how systems are built rather than to redefine what systems are.
Many vendors viewed expert systems as a way to fully replicate human decision-making. Tom Kehler described systems that encoded experience and judgment to handle complex tasks such as underwriting or processing custom orders.
Landry went further, arguing that AI could provide the architecture for a whole new generation of applications, built as collections of cooperating expert systems.
Ellison pushed back on that notion, prompting moderator Esther Dyson to ask: “Your view of AI doesn’t seem to be quite the same as Tom Kehler’s, even though you have this so-called complementary relationship. He differentiates between AI applications and database applications, while you see AI simply as a tool for building databases and applications.”
“Many expert systems are used to automate decision making,” Ellison responded. “But a systems analyst is also an expert. If you partially automate his function, you get another form of expert system.”
Ellison drew a clear line between processes that truly require judgment and those that do not. In doing so, he rejected what today might be called AI maximalism.
“In fact, not all app users are experts or even specialists,” he said. “For example, an order processing application might have dozens of employees processing simple orders. Instead of the order processing example, think about checking account processing. Now, there are no Christmas promotions on that. There are no special prices. Instead, performance is key and payback is key.”
“The height of absurdity”
When Dyson suggested a rule such as automatically transferring funds if an account balance fell below a threshold, Ellison was blunt.
“It can be done algorithmically because it doesn’t change,” he said. “The application will not change, and building it as an expert system, I think, is the height of absurdity.”
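Ellison’s point is easy to see in code: a fixed rule like the one Dyson described needs no inference engine, just a conditional. A minimal sketch in Python (the `Account` structure, threshold, and top-up amount are hypothetical illustrations, not details from the 1987 discussion):

```python
# Dyson's example rule implemented algorithmically: if the checking
# balance falls below a threshold, transfer funds from savings.
# Account, threshold, and top_up are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Account:
    checking: float
    savings: float


def auto_transfer(account: Account, threshold: float = 500.0,
                  top_up: float = 500.0) -> None:
    """Move funds from savings to checking when the balance dips below threshold."""
    if account.checking < threshold:
        amount = min(top_up, account.savings)  # never overdraw savings
        account.savings -= amount
        account.checking += amount


acct = Account(checking=120.0, savings=1000.0)
auto_transfer(acct)
print(acct.checking)  # 620.0
```

Because the rule never changes, a plain conditional is faster, cheaper, and easier to verify than a rule engine — which is exactly the distinction Ellison was drawing.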
This was a striking statement in 1987, when expert systems were widely touted as the future of business software. Ellison went further, issuing a warning that sounds surprisingly modern.
“That’s why I say that building an entire generation of systems on nothing but expert systems technology is a misuse of expert systems. I think expert systems should be used selectively. This is human expertise artificially performed by computers, and not everything we do requires expertise.”
Rather than apply AI everywhere, Ellison wanted to focus it where it changed the economics or usability of developing the system itself. This led him to what he called fifth-generation tools: not programming languages, but higher-level systems that eliminate procedural complexity.
“We see huge benefits in providing fifth-generation tools,” he said. “I don’t want to use the word ‘languages,’ because they’re not really programming languages anymore. They are more than that.”
He described an interactive, declarative approach to building applications, in which intent replaces instruction.
“I can sit next to you, and you can tell me what your requirements are, and rather than documenting your requirements, I’ll sit down and build a system while we talk together, and you can look over my shoulder and say, ‘No, that’s not what I meant,’ and change things.”
The promise was not just speed, but also a change in who controlled the software.
“So it’s not just a change in productivity, a quantitative change, but also a qualitative change in how you approach the problem.”
Not anti-AI
This philosophy continued in Oracle’s subsequent product strategy, from early CASE tools to its eventual adoption of web-based architectures. A decade later, Ellison would argue just as forcefully that application logic belonged on servers, not PCs.
“We believe it’s better to have the applications and data on the server, even if you have a PC,” he told IT world in 1997. “We think there will be almost no client/server demand once this comes out.”
In 2000, he was even more direct.
“People are taking their applications off PCs and putting them on servers,” Ellison said, according to ZDNET. “The only things left on PCs are Office and games.”
In retrospect, Ellison’s predictions were often early and sometimes exaggerated. Thin clients didn’t replace PCs, and expert systems didn’t transform business software overnight. Yet the direction he outlined proved enduring.
Application logic has moved to servers, browsers have become the dominant interface, and declarative tools have become a critical design goal across the industry.
What the 1987 roundtable highlights is the philosophical basis of this change. While others debated how much intelligence to add to applications, Ellison asked where intelligence actually belonged.
He treated AI not as a destination, but as an implementation detail, useful only when it reduces complexity or improves leverage.
As AI once again dominates business strategy discussions, the caution inherent in Ellison’s early comments seems entirely relevant.
His core argument was not anti-AI, but against abstraction for its own sake. Intelligence mattered, but only when it served a larger architectural purpose.
In 1987, this goal was to make databases the center of application development. Decades later, the same instinct underpins modern cloud platforms. The technology has changed, but the tension Ellison identified remains: how much intelligence systems need, and how much complexity users are willing to tolerate to get it.