It can be hard to see the real potential of generative AI tools like ChatGPT through all the noise and hype.
Fueled by endless media speculation stoking fears of job loss and a world dominated by independently thinking bots, we are all primed for the vision of the future that Hollywood has been selling us for years.
However, it can be hard to connect that level of worry with something like ChatGPT.
You type something in and it responds, sometimes in curious ways that are a little uncomfortable, but never enough to cause abject fear.
But a recent project from a team at AI chip maker Nvidia can help us see both the potential and the risk of so-called “smart bots.”
The project seems – on the surface – to be innocent enough.
Called Voyager, the project took the underpinnings of GPT-4 and applied its interaction and learning capabilities to an agent trained to play the game Minecraft.
Minecraft is notable because it is an open-ended sandbox game that rewards exploration, resource gathering, and learning by trial and error.
It is, therefore, a perfect training ground for a smart bot.
The team initially trained the bot, or agent, on the thousands of fan-made videos available on YouTube.
Once set loose in the game, it outperformed all comparable prior agents, learning and improving with each decision and easily outpacing human players.
The team concluded in its final report: “Voyager serves as a starting point to develop powerful generalist agents without tuning the model parameters.”
In other words, this is a big step toward a world where AI thinks, reacts to real-world input, and learns and grows accordingly.