“Artificial Intelligence” as we know it today is, at best, a misnomer. Today’s AI is by no means intelligent, but it certainly is artificial. It remains one of the hottest topics in industry and is enjoying renewed interest in academia. None of this is new – the world has gone through a series of AI peaks and valleys over the past 50 years. What makes the current wave of AI success different is that modern computing hardware is finally powerful enough to fully implement some wild ideas that have been around for a long time.
Back in the 1950s, in the early days of what we now call artificial intelligence, there was a debate about what to call this field. Herbert Simon, co-developer of both the Logic Theory Machine and the General Problem Solver, argued that the field should bear the much more innocuous name “complex information processing”. It certainly doesn’t inspire the fear of “artificial intelligence,” nor does it convey the idea that machines can think like humans.
However, “complex information processing” is a much better description of what artificial intelligence really is: analyzing complex data sets and attempting to draw inferences from that data. Some modern examples of AI include voice recognition (in the form of virtual assistants like Siri or Alexa) and systems that figure out what’s in a photo or recommend what to buy or watch next. None of these examples compare to human intelligence, but they show that we can do remarkable things with enough information processing.
Whether we call this field “complex information processing” or “artificial intelligence” (or the more ominously Skynet-sounding “machine learning”) is irrelevant. Huge amounts of human labor and ingenuity have gone into creating absolutely amazing applications. As an example, look at GPT-3, a deep learning model for natural language that can generate text indistinguishable from human-written text (but can also be hilariously wrong). It is backed by a neural network that uses roughly 175 billion parameters to model human language.
Built on top of GPT-3 is a tool named Dall-E, which will produce an image of any fantastic thing a user requests. The updated 2022 version of the tool, Dall-E 2, takes it even further, as it can “understand” quite abstract styles and concepts. For example, asking Dall-E to visualize “an astronaut riding a horse in the style of Andy Warhol” will produce a number of images along those lines.
Dall-E 2 does not perform a Google search to find a similar image; it creates an image based on its internal model. This is a new image constructed from nothing but math.
Not all AI applications are as revolutionary as these, but AI and machine learning are finding uses in almost every industry. Machine learning is quickly becoming a staple, powering everything from recommendation engines in retail to pipeline safety in oil and gas to diagnostics and patient data privacy in healthcare. Not every company has the resources to build tools like Dall-E from scratch, so there’s strong demand for affordable, accessible tool sets. The challenge of meeting this demand has parallels to the early days of enterprise computing, when computers and computer programs rapidly became the technology companies needed. While not everyone needs to develop the next programming language or operating system, many companies want to leverage the power of these new fields of study and need similar tools to help them do so.