DevelopingElk 11 minutes ago
The start of the article is good, but it starts to sound like an LLM starting at the "Why this maps to Genetic Algorithms?" section. Is that the case?
janalsncm 2 hours ago
There’s a ton of crossover between your method and RL. I guess instead of directly training on episodes and updating model weights, you just store episodes in RAM and sample from the most promising ones. It could be a neat way of getting out of the infamous RL cold start by getting some examples of rewards. Thanks for sharing.
Naulius 1 hour ago
Thanks! You're right that there's a resemblance to RL. The original approach was proposed by Antithesis, and in Part 1 we map it more directly to a mutation-based Genetic Algorithm: stored paths are the population, the x-position scoring is the fitness function, and bit-flip input generation is the mutation operator. There's no recombination and no learned policy, just evolutionary selection pressure on input sequences.
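
Roughly, the loop looks like this (a minimal Python sketch, not the repo's actual code; `run_path` stands in for replaying an input sequence in the emulator and returning the furthest x-position reached, and the button names and parameters are just illustrative):

    import random

    BUTTONS = ["A", "B", "LEFT", "RIGHT"]   # one boolean per button per frame
    EXTENSION_FRAMES = 60                   # new frames appended to each child path

    def random_frame():
        return {button: random.random() < 0.5 for button in BUTTONS}

    def mutate(path, rate=0.02):
        """Mutation operator: flip random button bits along the input sequence."""
        child = [dict(frame) for frame in path]
        for frame in child:
            for button in BUTTONS:
                if random.random() < rate:
                    frame[button] = not frame[button]
        return child

    def explore(run_path, generations=100, population_size=50, children_per_path=4):
        """Population = stored paths, fitness = x-position, selection only (no crossover)."""
        population = [[random_frame() for _ in range(EXTENSION_FRAMES)]]
        for _ in range(generations):
            scored = []
            for path in population:
                for _ in range(children_per_path):
                    # Mutate a stored path and extend it with fresh random frames.
                    child = mutate(path) + [random_frame() for _ in range(EXTENSION_FRAMES)]
                    scored.append((run_path(child), child))
            # Selection pressure: keep only the paths that reached the furthest x-position.
            scored.sort(key=lambda item: item[0], reverse=True)
            population = [path for _, path in scored[:population_size]]
        return population

Because only selection and mutation act on the stored input sequences, any path that happens to get further to the right automatically becomes a branching point for the next generation.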

Interesting point about the RL cold start: one could definitely use the paths discovered through the evolutionary exploration to seed an RL agent's initial experience, which could help skip the early random-flailing phase.

The key difference from RL is the goal. We're not trying to learn an optimal policy for playing the game; instead, we're trying to explore as much of the state space as possible to find bugs. In Part 2 we plug in a behavior model that validates correctness at every frame during exploration (velocity constraints, causal movement checks, collision invariants). The combination is where it gets interesting: autonomous exploration discovers the states, and the behavior model catches when the game violates its own rules. For testing, the main reason we even care about completing each level is that a completed path serves as the base for more extensive exploration at every point along it. If the exploration can't reach the end, by definition we miss a large part of the state space.
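
As a rough illustration of what those per-frame checks look like (the state fields, thresholds, and the `check_frame` name are assumptions for the sketch, not the actual behavior model from the repo):

    MAX_SPEED_X = 4.0   # assumed maximum horizontal pixels per frame

    def check_frame(prev, curr):
        """Validate a single frame transition; raise AssertionError on a violation."""
        dx = curr["x"] - prev["x"]
        dy = curr["y"] - prev["y"]
        # Velocity constraint: Mario can't move faster than the engine allows.
        assert abs(dx) <= MAX_SPEED_X, f"impossible horizontal speed {dx}"
        # Causal movement: the position change must match the previously recorded velocity.
        assert abs(dx - prev["vx"]) <= 1.0, "position changed without a matching velocity"
        # Collision invariant: standing on solid ground means no vertical movement.
        if prev["on_ground"] and curr["on_ground"]:
            assert dy == 0, "vertical position changed while on the ground"

During exploration checks like these run on every frame of every replayed path, so a rule violation is caught at the exact state that triggered it.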

wa008 2 hours ago
AI is much more powerful than humans in closed domains, like games and defense. AlphaGo was the first to prove that.
Naulius 1 hour ago
Agree. However, the described technique isn't really AI; there's no neural network or training. It's GA-driven exploration for testing: mutate inputs, keep what gets you further into the state space, discard what doesn't. AlphaGo optimizes for winning; testing optimizes for coverage. That said, what does carry over from the AI field to testing is the exploration done during the training phase, as well as the ability to beat the game, which gives you paths to branch off from and explore the test space further.
Naulius 2 hours ago
We built an autonomous testing example that plays Super Mario Bros. to explore how behavior models combine with autonomous testing. Instead of manually writing test cases, it systematically explores the game's massive state space while a behavior model validates correctness in real time: write your validation once, use it with any testing driver. A fun way to learn how it all works and find bugs along the way. All code is open source: https://github.com/testflows/Examples/tree/v2.0/SuperMario.