If you’re getting into reinforcement learning, one of the best tools you can use is OpenAI Gym. It gives you a collection of environments where your AI agents can learn by trial and error. These environments range from simple tasks, like balancing a pole on a cart, to more fun ones, like playing Atari games such as Breakout, Pacman, and Seaquest.
But sometimes, the built-in environments just don’t fit what you’re trying to do. Maybe you have a unique idea or a specific task in mind that’s not already included. The good news is that OpenAI Gym makes it easy to create your own custom environment—and that’s exactly what we’ll be doing in this post.
We will build a simple environment where an agent controls a chopper (or helicopter) and has to fly it while dodging obstacles in the air. This is the second part of our OpenAI Gym series, so we’ll assume you’ve gone through Part 1. If not, you can check it out on our blog.
We first begin by installing some important dependencies and making the necessary imports.
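The exact dependency list from the original post isn't reproduced here, but for an environment like this, the typical installs would be the following (the package names are the usual PyPI ones and an assumption on my part):

```shell
# Typical dependencies for a gym environment with an image-based observation:
# gym itself, OpenCV and Pillow for drawing, and NumPy for the canvas array.
pip install gym opencv-python pillow numpy
```

The imports used throughout the post would then be along the lines of `import gym`, `import cv2`, `import numpy as np`, and `from PIL import Image`.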
The environment we’re building is kind of like a game, and it’s inspired by the classic Dino Run game—the one you see in Google Chrome when your internet goes down. In that game, there’s a little dinosaur that keeps running forward, and your job is to make it jump over cacti and avoid birds flying at it. The longer you survive and the more distance you cover, the higher your score—or in reinforcement learning terms, the more reward you get.
In our game, instead of a dinosaur, our agent will be a Chopper pilot.
Note that this is going to be just a proof of concept and not the most aesthetically pleasing game. However, this post will give you enough knowledge to improve on it!
The first consideration when designing an environment is to decide what sort of observation and action space we will use.
We begin by implementing the `__init__` function of our environment class, `ChopperScape`. In the `__init__` function, we will define the observation and action spaces. In addition to that, we will also implement a few other attributes:

- `canvas`: This represents our observation image.
- `x_min`, `y_min`, `x_max`, `y_max`: These define the legitimate area of our screen where various elements, such as the Chopper and birds, can be placed. Other areas are reserved for displaying information, such as fuel left, rewards, and padding.
- `elements`: This stores the active elements on the screen at any given time (e.g., chopper, bird, etc.).
- `max_fuel`: Maximum fuel that the chopper can hold.

Once we have determined the action space and the observation space, we need to finalize the elements of our environment. In our game, we have three distinct elements: the Chopper, Flying Birds, and Floating Fuel Stations. We will implement all of these as separate classes that inherit from a common base class called `Point`.
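As a rough sketch, the constructor could look like the following (the canvas dimensions, margins, and fuel limit here are assumptions, not the post's exact values; the real class would additionally subclass `gym.Env` and define `spaces.Box`/`spaces.Discrete` observation and action spaces):

```python
import numpy as np

class ChopperScape:
    def __init__(self):
        # Observation: an RGB image canvas of shape (height, width, channels).
        # These dimensions are assumed for illustration.
        self.observation_shape = (600, 800, 3)
        self.canvas = np.ones(self.observation_shape)  # blank white canvas

        # Legitimate area of the screen where elements can be placed;
        # the remaining margin is reserved for displaying information.
        self.y_min = int(self.observation_shape[0] * 0.1)
        self.x_min = 0
        self.y_max = int(self.observation_shape[0] * 0.9)
        self.x_max = self.observation_shape[1]

        self.elements = []    # active elements (chopper, birds, fuel tanks)
        self.max_fuel = 1000  # maximum fuel the chopper can hold (assumed)
```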
The `Point` class defines any arbitrary point on our observation image. We define this class with the following attributes:

- `(x, y)`: Position of the point on the image.
- `(x_min, x_max, y_min, y_max)`: Permissible coordinates for the point. If we try to set the position of the point outside these limits, the position values are clamped to these limits.
- `name`: Name of the point.

We define the following member functions for this class:

- `get_position`: Get the coordinates of the point.
- `set_position`: Set the coordinates of the point to a certain value.
- `move`: Move the point by a certain value.

Now we define the classes `Chopper`, `Bird`, and `Fuel`. These classes are derived from the `Point` class and introduce a set of new attributes:
- `icon`: Icon of the point that will be displayed on the observation image when the game is rendered.
- `(icon_w, icon_h)`: Dimensions of the icon.

Recall from Part 1 that any gym `Env` class has two important functions: `reset` and `step`.
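A minimal, self-contained sketch of `Point` and one derived class follows; the clamping helper and icon dimensions are assumptions, and the real classes would load actual icon images with OpenCV or Pillow:

```python
class Point:
    """An arbitrary point on the observation image."""
    def __init__(self, name, x_max, x_min, y_max, y_min):
        self.x = 0
        self.y = 0
        self.x_min, self.x_max = x_min, x_max
        self.y_min, self.y_max = y_min, y_max
        self.name = name

    def clamp(self, n, minn, maxn):
        # Keep n inside the closed interval [minn, maxn]
        return max(min(maxn, n), minn)

    def get_position(self):
        return (self.x, self.y)

    def set_position(self, x, y):
        # Positions outside the permissible range are clamped to the limits
        self.x = self.clamp(x, self.x_min, self.x_max)
        self.y = self.clamp(y, self.y_min, self.y_max)

    def move(self, del_x, del_y):
        self.set_position(self.x + del_x, self.y + del_y)


class Chopper(Point):
    """A Point with an icon; Bird and Fuel would be defined the same way."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Placeholder icon dimensions (the real class would load an image
        # and read its width/height)
        self.icon_w, self.icon_h = 64, 32
```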
In this section, we will implement our environment's `reset` and `step` functions, along with many other helper functions. We begin with the `reset` function.
When we reset our environment, we need to reset all the state-based variables. These include fuel consumed, episodic return, and the elements inside the environment.
In our case, when we reset our environment, we have nothing but the Chopper in its initial state. We initialize our chopper randomly in an area in the top left of our image. This area is 5-10 percent of the image width and 15-20 percent of the image height.
We also define a helper function called `draw_elements_on_canvas` that places the icon of each of the game's elements at its respective position in the observation image. If a position is beyond the permissible range, the icon is placed on the range boundary. We also print important information, such as the remaining fuel.
We finally return the canvas on which the elements have been placed as the observation.
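The random placement described above can be sketched as follows; the screen dimensions are assumed, and only the position logic is shown:

```python
import random

def reset_chopper_position(screen_w=800, screen_h=600):
    """Pick a random initial chopper position in the top-left region:
    5-10 percent of the image width, 15-20 percent of the image height."""
    x = random.randrange(int(screen_w * 0.05), int(screen_w * 0.10))
    y = random.randrange(int(screen_h * 0.15), int(screen_h * 0.20))
    return x, y
```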
Before we proceed further, let us now see what our initial observation looks like.
Since our observation is the same as the gameplay screen of the game, our render function shall return our observation, too. We build functionality for two modes: `human`, which renders the game in a pop-up window, and `rgb_array`, which returns it as a pixel array.
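A sketch of such a render function, assuming OpenCV is used for the pop-up window:

```python
import numpy as np

def render(canvas, mode="human"):
    """Render the observation. 'human' pops up a window (OpenCV assumed);
    'rgb_array' simply returns the canvas as a pixel array."""
    assert mode in ["human", "rgb_array"], "Invalid mode"
    if mode == "human":
        # Imported here so that rgb_array mode works without OpenCV installed
        import cv2
        cv2.imshow("Game", canvas)
        cv2.waitKey(10)
    elif mode == "rgb_array":
        return canvas
```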
Now that we have the `reset` function out of the way, we begin work on implementing the `step` function, which will contain the code to transition our environment from one state to the next, given an action. In many ways, this section is the proverbial meat of our environment, and this is where most of the planning goes.
We first need to list the things that need to happen in one transition step of the environment. This can be broken down into two parts:

1. Applying the chosen action to our agent, the chopper.
2. Updating everything else in the environment, such as the birds and fuel stations.
So, let’s first focus on (1). We provide actions to the game that will control what our chopper does. We have five actions: move right, left, down, up, or do nothing, denoted by 0, 1, 2, 3, and 4, respectively.
We define a member function called `get_action_meanings()` that will tell us which integer each action is mapped to, for our reference.
We also validate whether the action being passed is valid by checking whether it’s present in the action space. If not, we raise an assertion error.
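Sketched as standalone functions (in the real environment these would be methods, and the validity check would test membership in `self.action_space`):

```python
def get_action_meanings():
    # Mapping from action integers to their meanings, as described above
    return {0: "Right", 1: "Left", 2: "Down", 3: "Up", 4: "Do Nothing"}

def validate_action(action, n_actions=5):
    # Raise an assertion error for actions outside the discrete action space
    assert 0 <= action < n_actions, "Invalid Action"
```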
Once that is done, we accordingly change the position of the chopper using the `move` function we defined earlier. Each action results in movement by five coordinates in the respective direction.
Now that we have taken care of applying the action to the chopper, we focus on the other elements of the environment:
To implement the features outlined above, we need a helper function that determines whether two `Point` objects (such as a Chopper and a Bird, or a Chopper and a Fuel Tank) have collided. How do we define a collision? We say that two points have collided when the distance between the coordinates of their centers is less than half of the sum of their dimensions. We call this function `has_collided`.
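A sketch of this collision check, using a minimal stand-in for a `Point`-derived element (the `Elem` class and its fields are placeholders for illustration):

```python
from dataclasses import dataclass

@dataclass
class Elem:
    # Minimal stand-in for a Point-derived element: top-left position
    # plus icon dimensions
    x: int
    y: int
    icon_w: int
    icon_h: int

    def get_position(self):
        return (self.x, self.y)

def has_collided(elem1, elem2):
    """Two elements collide when, on each axis, the distance between their
    centers is less than half the sum of their icon dimensions."""
    x1, y1 = elem1.get_position()
    x2, y2 = elem2.get_position()
    # Centers of the two icons
    cx1, cy1 = x1 + elem1.icon_w / 2, y1 + elem1.icon_h / 2
    cx2, cy2 = x2 + elem2.icon_w / 2, y2 + elem2.icon_h / 2
    x_col = abs(cx1 - cx2) < (elem1.icon_w + elem2.icon_w) / 2
    y_col = abs(cy1 - cy2) < (elem1.icon_h + elem2.icon_h) / 2
    return x_col and y_col
```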
Apart from this, we have to do some bookkeeping. The reward for each step is 1, so the episodic return counter is incremented by one at every step. If there is a collision, the reward is -10, and the episode terminates. The fuel counter is reduced by one at every step.
Finally, we implement our `step` function. We have written extensive comments to guide you through the process.
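The full `step` function is long; here is a compressed, self-contained sketch of just the movement and bookkeeping described above (collision handling and the spawning of birds and fuel tanks are omitted, and the state is passed explicitly rather than stored on the class):

```python
def step_chopper(x, y, action, fuel_left, ep_return, move_step=5):
    """One simplified transition: apply the action (0 right, 1 left, 2 down,
    3 up, 4 do nothing), burn one unit of fuel, and collect a reward of 1.
    In the full environment, a collision would instead give -10 and end
    the episode."""
    assert action in (0, 1, 2, 3, 4), "Invalid Action"
    # Movement deltas for each action; y grows downward on the image
    dx, dy = [(move_step, 0), (-move_step, 0), (0, move_step),
              (0, -move_step), (0, 0)][action]
    x, y = x + dx, y + dy
    fuel_left -= 1                # fuel drops by one every step
    reward = 1                    # reward of 1 per surviving step
    ep_return += reward
    done = fuel_left == 0         # the episode also ends on a collision
    return (x, y), reward, done, fuel_left, ep_return
```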
This concludes the code for our environment. Now execute some steps in the environment using an agent that takes random actions!
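Such a rollout follows the classic Gym loop. The sketch below uses a tiny stand-in environment so that it runs on its own; the real loop would call `ChopperScape`'s `reset`, `step`, and `render` in exactly the same way:

```python
import random

class RandomEnvDemo:
    """A tiny stand-in with the classic Gym interface, used only to
    demonstrate the random-agent loop."""
    def __init__(self, max_steps=10):
        self.max_steps = max_steps

    def reset(self):
        self.t = 0
        return 0                       # dummy observation

    def step(self, action):
        self.t += 1
        done = self.t >= self.max_steps
        return 0, 1, done, {}          # obs, reward, done, info

env = RandomEnvDemo()
obs = env.reset()
total_reward = 0
while True:
    action = random.randrange(5)       # take a random action in {0..4}
    obs, reward, done, info = env.step(action)
    total_reward += reward
    # With the real environment, we would call env.render(mode="human") here
    if done:
        break
```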
That’s a wrap for this part, folks! I hope this tutorial gave you a clear idea of what goes into building a custom environment with OpenAI Gym—from the design decisions to the little details that make your game-like setup fun and challenging. Now that you’ve got the basics down, feel free to get creative and build your own environment from scratch—or improve the one we just made. Here are a few ideas to level it up:
Add some logic for what happens when a fuel tank and a bird collide—maybe a big explosion? And if you want to run your training faster or scale your experiments, consider spinning up a GPU Droplet on DigitalOcean. It’s a great way to offload the heavy lifting and train your agents more efficiently. Until next time—happy coding!
Thanks for learning with the DigitalOcean Community. Check out our offerings for compute, storage, networking, and managed databases.