How Do We Make GenAI More Than Just Hype?

The Generative AI (GenAI) revolution has officially started: we’ve moved beyond the theoretical, and past its first halting applications, into a new era. Big companies like Amazon and Google are embracing it, with Amazon incorporating GenAI features into its virtual assistant Alexa and Google’s YouTube giving creators GenAI tools. But some implementations feel more thoughtful than others; witness the New York Times hiring a generative AI editor.

Gartner noted this summer that GenAI is at the peak of its hype cycle. And although GenAI has enormous potential, right now it is at high risk of dropping into the “trough of disillusionment,” like any AI before it. How do we skip that lull and go straight to the enlightenment and productivity that GenAI could provide? Is that even possible, given how hyped GenAI is at the moment?

The answer is yes. Here’s how we do it.

Start from good foundations

Approximately 80% of the time spent on any AI model goes to working with the data that powers it. And right now, only about 36% of ML models make it out of the pilot stage, suggesting that training datasets still need work. After all, ChatGPT is not a good lawyer, and model hallucination remains a real and pressing issue across the leading models.

Other cases of GenAI being far more confident than it should be when it is wrong may end up having more serious consequences than a reprimand from a judge, especially as GenAI is used in drug discovery and design. Working with the data now to address these issues will pay off in the long run.

Right now, GenAI as a whole is moving fast and scraping data from sources that are not happy about being used that way. The New York Times may be hiring a GenAI editor, but it would rather that models not learn from its articles without paying a licensing fee first. In addition, a Pulitzer Prize-winning author is among a group of authors suing because models have been trained on their books, and those books were likely pirated.

These shifts mean that this kind of data is going to become harder to access, and likely expensive if it needs to be licensed, making the quality of datasets more important than ever as training data is pulled from other sources. The better the data being used, the more likely it is that a model will perform to expectations.

One way to solve this would be for data brokering to become part of the AI supply chain, as it is in internet advertising. Leveraging the data brokerage structure developed in adtech for GenAI-scale data could see a fair rate paid to authors while also providing cleaner data for GenAI model developers. It would also incentivize the creation of new, monetizable datasets for specific use cases that could be reused across multiple models.

Make better prompts

The other key to making GenAI more usable lies in better prompts. There simply isn’t enough data in the world to guarantee 100% accuracy from a model, even before data sourcing issues come into play.

Although prompt engineering is now a full-time job, even non-experts using GenAI for the first time can improve their chances of getting workable results. As with most things, the more you put into it, the more you’ll get out of it. In the case of GenAI, this means adding more context to your prompt. What are you going to use the information for? Who will be reading it? 

For example, you can tell GenAI to write a poem about falling in love, and you’ll get something usable. Prompt it instead to write a poem about falling in love for the first time, and specify the meter (if any), whether it should be free verse, the form to use (a limerick, a sonnet, a haiku), even a specific type or era of poetry to emulate, and you’ll end up with something much closer to what you’re looking for. And, of course, if that love wanes, you could even feed GenAI your previously generated poems and prompt it to write a breakup ballad.
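To make the difference concrete, here is a minimal sketch in Python that sends a vague prompt and a context-rich prompt to the same model, using the OpenAI Python client; the model name and the prompts themselves are illustrative assumptions, not recommendations.

```python
# A minimal sketch contrasting a vague prompt with a context-rich one.
# Assumes the OpenAI Python client (openai>=1.0) and an API key in the
# environment; the model name below is an assumption.
from openai import OpenAI

client = OpenAI()

vague = "Write a poem about falling in love."

detailed = (
    "Write a sonnet about falling in love for the first time. "
    "Use iambic pentameter and a Shakespearean rhyme scheme "
    "(ABAB CDCD EFEF GG), emulate Romantic-era diction, and "
    "write it for the readers of a literary magazine."
)

for prompt in (vague, detailed):
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name; substitute your own
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

The second prompt encodes audience, form, and style: exactly the kind of context that steers the model toward what you actually want.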

This may be a more light-hearted example, but it’s clear that a lot of detail can go into even this kind of prompt. Think about how much detail could go into a prompt to solve a particular self-driving problem, or to identify a particular weed for precision agriculture, and how, even with as much detail as you could provide, you may never get exactly what you’re looking for. GenAI simply isn’t great for all use cases.

Recognize what GenAI is good for

Overusing GenAI will quickly drive it into the disillusionment phase. Resisting the urge to put it into everything will be key. However, GenAI does have use cases where it can be particularly effective. 

For example, synthetic data generation may be one of the best applications for GenAI. Many model failures come from edge cases that aren’t covered by the training data set. Synthetic data can help bridge gaps in representative data, for example by creating more data showing humans of different sizes, genders, and skin colors, humans using mobility aids, and other variations. For autonomous driving in particular, this will be essential to safe operation.
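As a hedged sketch of what that could look like in practice, the snippet below uses a text-to-image diffusion model to generate candidate edge-case images from prompt templates. The checkpoint name and the prompts are assumptions chosen for illustration, not a production recipe.

```python
# A minimal sketch of prompt-driven synthetic data generation for
# underrepresented cases, using Hugging Face diffusers. The checkpoint
# and prompt templates below are illustrative assumptions.
from itertools import product
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"  # assumed checkpoint
).to("cuda")

subjects = ["a child", "an elderly woman", "a person using a wheelchair"]
conditions = ["at dusk", "in heavy rain", "in bright sunlight"]

for i, (subject, condition) in enumerate(product(subjects, conditions)):
    prompt = f"dashcam photo of {subject} crossing a street {condition}"
    image = pipe(prompt).images[0]
    # Save as a candidate only: every generated image still needs
    # human review before it joins a training set.
    image.save(f"synthetic_{i:03d}.png")
```

Note that the output here is a pool of candidates, not a finished dataset; as discussed below, generated data still needs human checks before it is used for training.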

Coding can also take advantage of GenAI; after all, models are trained on large amounts of already-existing code. Code generation can thus be sped up, allowing for faster iteration based on testing. In the same way, GenAI is also being leveraged in multiple stages of drug development, including creating candidates for testing. When drug development can be a long, expensive process and real patients are searching for treatment, the speed and savings GenAI can offer may end up having some of the biggest positive impacts.
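One way to make “faster iteration based on testing” tangible is a generate-then-test loop: ask the model for code, then immediately validate it against known cases before accepting it. Below is a hedged sketch using the same assumed OpenAI client as above; the task and the checks are illustrative.

```python
# A hedged sketch of a generate-then-test loop: request code from a
# model, then validate it before accepting it. The client interface
# and model name are assumptions.
from openai import OpenAI

client = OpenAI()

task = (
    "Write a Python function is_leap_year(year) that returns True "
    "for leap years and False otherwise. Return only the code."
)
generated = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[{"role": "user", "content": task}],
).choices[0].message.content

# Models often wrap output in markdown code fences; drop such lines.
fence = chr(96) * 3  # the three-backtick fence marker
code = "\n".join(
    line for line in generated.splitlines()
    if not line.strip().startswith(fence)
)

namespace: dict = {}
exec(code, namespace)  # caution: only execute model output in a sandbox

# Reject the generation unless it passes known cases.
assert namespace["is_leap_year"](2000) is True   # divisible by 400
assert namespace["is_leap_year"](1900) is False  # divisible by 100 only
assert namespace["is_leap_year"](2024) is True   # divisible by 4
```

If an assertion fails, the failing case can be fed back into the next prompt, which is where the iteration speed comes from.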

All of the use cases described above require specific datasets and specific kinds of annotations for a GenAI model to be effective. While GenAI can create new data to better train models, that data still needs to be checked by humans to ensure it is a good fit for training. Yes, this kind of AI is currently on an accelerated timeline due to the levels of investment, but that doesn’t mean it’s suited to everything, or that it can be used without safeguards or validation processes in place. Take generating art for commercial use: on top of the ethical questions, it doesn’t have any copyright protections.

For GenAI to be effective, it needs to be used thoughtfully and with caution.

Be ready for it to take time

The first GPT model debuted in 2018, just five short years ago. Let’s be clear: we are still in a very young field, and there is room for (measured) excitement. 

Recently, Sama surveyed ML engineers, and nearly 74% said that GenAI for computer vision can live up to the hype. However, 61% of the overall pool said that we’re not there yet. The technology still needs to mature, and although that could take years, the wait means that its promises and uses will be better realized.

At the same time, though, nearly 70% of the ML engineers Sama surveyed felt that their work is making a difference. It’s no wonder that’s the case when some of the most visible use cases for computer vision have altruistic intent, ranging from precision agriculture applications that could one day feed the world more sustainably and efficiently, to automotive models that enable safer driving, to medical technology that improves the quality and speed of diagnoses.

One of the key areas where this process will take time is the ethics of generative AI development. The lawsuits are just one facet of this set of questions. Others include: what is good data to use for training? How do we make sure it doesn’t violate people’s privacy? How do we avoid running afoul of current and future regulations? All of these will need satisfactory answers, and there’s no shortcut to finding them.

Yes, GenAI is hyped up. And yes, it’s moving fast enough that disillusionment could come sooner rather than later if the steps above are not taken. Taking the time to assess the data going into training these models, and making sure that data is suited to the task we want each model to perform, may, perhaps paradoxically, result in even faster GenAI development and a faster realization of its promises.

Duncan Curtis, Sama

Duncan Curtis is the SVP of Product and Technology at Sama, a leader in de-risking ML models that delivers best-in-class data annotation solutions with enterprise-strength experience and expertise and an ethical AI approach. To this leadership role, he brings four years of autonomous vehicle experience as Head of Product at Zoox (now part of Amazon) and VP of Product at Aptiv, and four years of AI experience as a product manager at Google, where he delighted the 1B+ daily active users of the Play Store and Play Games. Before that, Duncan’s career focused on mobile gaming, most notably the Fruit Ninja and Jetpack Joyride franchises.

Duncan studied Computer Software Engineering at Queensland University of Technology. He is excited to bring his love of technology and impact together at Sama.