This year’s AI Summit, a convergence of the brightest minds in AI from industry and academia, not only attracted a diverse array of vendors but also marked a pivotal moment in the evolution of AI technology. It was a gathering that reflected the industry’s rapid shift of attention from traditional AI to Generative AI (GenAI) and the implications that such a shift is having on the space.
This is a recap of how the industry is looking at the space as the year of the GenAI revolution comes to a close.
The shift from POC to POV
Some of the companies claimed to have hundreds of active proofs-of-concept (POC). These experimental projects seem to be focused on demonstrating feasibility—whether it is even possible to achieve a business goal through the use of AI. Some companies are taking this further, speaking of proof-of-value (POV) as a way to gauge not only feasibility but whether these projects are worth carrying out.
Businesses measure value using proxies such as increased revenue or reduced cost. The cost incurred to achieve this value is also part of the equation, so it makes sense that a good experiment should measure value created and operating costs, not just feasibility.
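The value-versus-cost equation described above can be reduced to a few lines; a minimal sketch, using entirely hypothetical figures and function names:

```python
def net_value(revenue_gain: float, cost_savings: float, operating_cost: float) -> float:
    """Value created by a project (revenue and savings proxies) minus what it costs to run."""
    return revenue_gain + cost_savings - operating_cost

def passes_pov(revenue_gain: float, cost_savings: float, operating_cost: float) -> bool:
    """A POV succeeds only when the value created exceeds the cost of creating it."""
    return net_value(revenue_gain, cost_savings, operating_cost) > 0

# Hypothetical project: $120k in new revenue, $30k in savings, $100k to operate.
value = net_value(120_000, 30_000, 100_000)        # 50_000
worthwhile = passes_pov(120_000, 30_000, 100_000)  # True
```

The point of the sketch is that a POV, unlike a POC, keeps operating cost inside the success criterion rather than treating feasibility alone as the finish line.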
The rise of responsible AI and synthetic data
An interesting contrast is deepening between the breakneck speed at which AI engineering is evolving and the careful consideration that data privacy and ethics demand to enact thoughtful governance. These two disciplines seem to live on completely different timescales, yet they must find common ground in order to enable timely and responsible innovation.
One approach that senior technology leaders are exploring is setting up internal teams that bring together cross-functional expertise, from technological merit to responsible governance. These centers, such as AstraZeneca’s Accelerator and Estee Lauder’s AI Task Force, are pivotal in speeding up AI adoption within their organizations. Moreover, leading organizations are enacting their own responsible AI frameworks to codify the guardrails necessary for AI to deliver on its value promise without the inherent risks.
Companies are also turning their attention to synthetic data, artificially generated data that creates a digital twin of the real world, providing its analytical benefits without ties to real people. This data has the potential to speed up innovation while helping companies comply with data protection regulation.
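One simple flavor of this idea is to capture only the statistical profile of sensitive records and then sample new, fictitious records from that profile. A minimal sketch, assuming numeric fields and independent Gaussian distributions (real synthetic-data tools model far richer structure; all names and figures here are hypothetical):

```python
import random
import statistics

def fit_profile(records):
    """Capture per-field mean and standard deviation from real (sensitive) data."""
    fields = records[0].keys()
    return {f: (statistics.mean(r[f] for r in records),
                statistics.stdev(r[f] for r in records)) for f in fields}

def synthesize(profile, n, seed=42):
    """Sample new records that mimic the real distribution but map to no real person."""
    rng = random.Random(seed)
    return [{f: rng.gauss(mu, sigma) for f, (mu, sigma) in profile.items()}
            for _ in range(n)]

# Hypothetical customer records; only their summary statistics leave this scope.
real = [{"age": 34, "spend": 120.0}, {"age": 41, "spend": 95.0}, {"age": 29, "spend": 180.0}]
fake = synthesize(fit_profile(real), n=100)
```

Downstream teams can experiment on `fake` without ever touching the originals, which is the compliance benefit the paragraph above describes.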
We don’t want more. We want precision.
While most presentations and vendors were touting the promise of Generative AI, a few pointed to perhaps an even more important priority. “No one wants more, we want more precision,” a presenter said during a talk about the promise and perils of Generative AI. They were referring to the generative nature of GenAI, whereby more and more content is being created artificially.
The fantastic capabilities of the technology seem to cast a hypnotic effect, leaving people awed by the seemingly magical powers of AI. This spell often leads to an underestimation of critical issues: not only is more information being generated in already data-saturated organizations, but this new information frequently contains inaccuracies, known in the industry as “hallucinations”.
Companies are rushing to minimize these hallucinations using a combination of techniques, some with merit, some less so, like asking another AI, itself all too ready to hallucinate, whether the information is accurate. However, very few are focusing on using AI to synthesize existing, high-quality information, and fewer still are doing so through carefully considered methods. In highly regulated industries such as pharma, for example, this approach could help drugmakers better understand patients’ experiences with pain.
We have been hearing more about this shift from “GenAI” to “SyntAI” in the past few months and we are convinced we will be hearing more as the use of GenAI moves through the innovation lifecycle.
Prompt engineering: from science to art
Accompanying this monopoly of interest in GenAI and the large language models (LLMs) that power it is a strong shift towards prompt engineering—the art of writing instructions in plain language that steer models to produce a desired outcome. Unlike previous AI technology, which required expertise in data science, extracting value from LLMs requires writing skills and procedural thinking. This shift may have profound effects on the labor market, to the point where fine writers may have a better chance of getting the job than the code geeks who made up the traditional data science team.
Considering this shift towards “language as code,” it was not surprising to see vendors offering tools aimed at prompt engineering, such as the ability to track multiple prompt versions, evaluate prompts, and monitor their performance.
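The core of such prompt-versioning tools can be sketched simply: store each prompt under a content-derived identifier so that any output can be traced back to the exact instructions that produced it. A minimal illustration (the class, prompt names, and templates are all hypothetical, not any vendor’s actual API):

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class PromptRegistry:
    """Track prompt versions so results can be traced back to exact instructions."""
    versions: dict = field(default_factory=dict)

    def register(self, name: str, template: str) -> str:
        """Store a prompt template under a short content hash and return its version id."""
        version_id = hashlib.sha256(template.encode()).hexdigest()[:8]
        self.versions[(name, version_id)] = template
        return version_id

    def get(self, name: str, version_id: str) -> str:
        """Retrieve the exact wording that was used for a given run."""
        return self.versions[(name, version_id)]

registry = PromptRegistry()
v1 = registry.register("summarize", "Summarize the following text in one sentence: {text}")
v2 = registry.register("summarize", "In one sentence, summarize: {text}")
# Different wording yields a different version id, so A/B evaluation stays traceable.
```

Real offerings layer evaluation suites and live monitoring on top of this bookkeeping, but version traceability is the foundation they share.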
Hardware also showed up
The LLM craze reached the silicon layer. LLM acceleration chip integrators were present on the event floor with offerings promising performance improvements beyond what is possible with traditional graphics processing units (GPUs) and tensor processing units (TPUs). Will we be seeing more chipmakers trying to chip away (pun intended) at Nvidia’s dominance?
Buyers beware of the marketing fluff
The first this, the most robust that, the complete solution… extra, extra! Some companies on the expo floor are legitimate longtime innovators, having been in the industry since before the GenAI revolution and having deepened their expertise through years of R&D. Many other companies, however, rushed to reprint their booths and marketing collateral to create the perception that they have been at the forefront—and even perhaps the very pioneers—of the GenAI revolution. Slogans such as “the leaders in reducing LLM hallucination” seem a hallucination themselves—how can you be the leader in a space that is a few months old and crowded with vendors working hard to be a part of it?
Buyers will be better off engaging with companies with a proven track record. It is all too easy to make the wrong turn and botch a POC. In an industry where snake oil salesmen abound, buyers will need to exercise skepticism before signing on the dotted line. Oh, and please, do not print collateral that can’t be recycled.
As you navigate the rapidly evolving landscape of AI, consider how these trends might influence your own priorities.
GenAI is forcing organizations to take another look at the type of value AI can create. This value is being measured with POVs and milestones that weigh value against cost, not feasibility alone. GenAI’s hunger for massive amounts of data has pushed data governance to the top of the agenda, and it is making organizations think creatively about how to keep pace with innovation while acting responsibly and complying with mounting regulation.
GenAI is also crystallizing the mantra that more isn’t necessarily better. In a world of information overload, we need less generation, more understanding, and more precision. Organizations are still working to understand when to use and when to avoid GenAI, and it seems as though a shift towards accuracy is bound to take hold as the dust settles.
GenAI may be changing the skillsets that populate AI teams, prioritizing the hiring of writers over hardcore data scientists. This change is also giving rise to a host of new tools for these “prompt engineers” to manage the instructions they give LLMs.
The speed with which AI technology evolved in 2023 hasn’t been seen since the internet became mainstream 25 years ago. This pace is exposing the hollowness of some of the marketing claims made by companies trying to gain dominance in the emerging space. Organizations ought to be wary of bold assurances and ask how these companies are actually solving the problems they claim to have solved—this simple verification may help sort the real pros from those with lavish marketing budgets.
As we witness this unprecedented pace of AI evolution, one wonders what ground-breaking innovations the next AI Summit will reveal, shaping the future of technology and society.
Marcelo Bursztein is the Founder and CEO of NovaceneAI. Marcelo spent the last 20 years leading engineering and creative teams through countless implementations of web applications for clients of all sizes.