Generalized Entropy and Solution Information for Measuring Puzzle Difficulty by Shen and Sturtevant
Puzzle entropy uses information entropy with respect to player knowledge to measure the difficulty of a puzzle. The paper generalizes and refines this idea.
There and Back Again: Extracting Formal Domains for Controllable Neurosymbolic Story Authoring by Kelly et al.
Language models create fluent narratives, while planning-based story generators offer control. This paper explores how to combine the best of both!
Puck: A Slow and Personal Automated Game Designer by Michael Cook
The author outlines new goals for automated game design focused on users and communities. The result is Puck, an automated game design system with an exhaustive approach to content generation.
MappyLand: Fast, Accurate Mapping for Console Games by Osborn et al.
Game maps are useful for humans, game-playing agents, and content generation. MappyLand uses a variety of heuristics and algorithms to automatically generate accurate maps from example play.
🔗 sites.google.com/ualberta.ca/...
And make sure to register for AIIDE here:
🔗 sites.google.com/ualberta.ca/...
“It’s Unwieldy and It Takes a Lot of Time” —Challenges and Opportunities for Creating Agents in Commercial Games by Jacob et al.
Game research should consider industry best practices; this paper interviews creators to identify challenges in making AI agents.
Macro Action Selection with Deep Reinforcement Learning in StarCraft by Xu et al.
To narrow the gap between human players and StarCraft bots, Xu et al. propose a deep reinforcement learning framework that selects macro actions, replacing predefined rules.