Move options include: move closer, move away, fireball, megapunch, hurricane, and megafireball.
Megafireball, Megafireball, Megafireball, Megafireball, Megafireball, Megafireball…
News flash: Fast twitch games go to players with the fastest twitch.
Yeah, it’d be more interesting to see this done with, for instance, an RTS: something where smarter decisions can beat out faster gameplay some percentage of the time. Obviously high APM matters in an RTS, but in this Street Fighter example, I’m pretty sure a 5-year-old who only knows how to Hadouken spam would beat any of these LLMs. From what we’re seeing here, it’s not so much about how good their decision-making is as about which one executes the most moves that have a chance to connect.
LLMs don’t make decisions or understand things at all; they just regurgitate text in a humanlike manner.
I say this as someone who sees a lot of potential in the technology, though; just not for things like this, or for most of the uses people are claiming.
What does a Large Language Model have to do with Street Fighter anyway? Random button presses might even score better.
As someone who sucks at fighting games, no, not really. :D
Last but not least, the question arises whether this is a useful benchmark for LLMs, or just an interesting distraction. More complex games could provide more rewarding insights, but results would probably be more difficult to interpret.
I’d love to see LLMs rated by the time it takes them to beat the Ender Dragon.
Could be a fun category extension. LLM Dragon% RSG: on a fixed system such as an AWS g5.xlarge (for fairness of frame rate), players may use an LLM of their choice, with a consistent screen parser generating a string description of the screen state that gets filled into the LLM prompt, and the model navigates the game from start to finish.
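Roughly, the harness would just be a loop: parse the screen to text, fill it into a fixed prompt template, ask the chosen LLM for one action, and push that action back into the game as input. A minimal sketch below; names like parse_screen, query_llm, and execute are hypothetical placeholders for whatever capture tool, model, and input layer a runner actually picks, not any real API.

    import time

    def parse_screen() -> str:
        """Placeholder: grab a frame and return a text description of the
        visible game state (health, inventory, surroundings, ...)."""
        raise NotImplementedError

    def query_llm(prompt: str) -> str:
        """Placeholder: send the prompt to the player's chosen LLM and
        return a single action string, e.g. 'mine block' or 'open inventory'."""
        raise NotImplementedError

    def execute(action: str) -> None:
        """Placeholder: translate the action string into key presses /
        mouse movements sent to the game."""
        raise NotImplementedError

    PROMPT_TEMPLATE = (
        "You are speedrunning Minecraft. Goal: defeat the Ender Dragon.\n"
        "Current screen state:\n{state}\n"
        "Reply with exactly one action."
    )

    def run_episode(max_steps: int = 100_000) -> None:
        for _ in range(max_steps):
            state = parse_screen()
            if "dragon defeated" in state.lower():  # hypothetical end-of-run marker from the parser
                break
            action = query_llm(PROMPT_TEMPLATE.format(state=state))
            execute(action)
            time.sleep(0.05)  # crude throttle so the fixed hardware, not the harness, sets the pace

Keeping the parser and prompt template identical across runs is what makes the times comparable; the only variable left is the model itself.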