DareFightingICE Competition (formerly Fighting Game AI Competition)

Intelligent Computer Entertainment lab., Ritsumeikan University

Get started ------ First Step

You can take your first step by following this document. For Linux users, please see our Linux setup scripts. (available on Mar 9, 2017; not yet verified for Version 4.00 or later)

--------------------------------------------------------------------------------------------------------------------

Preparation

Install JDK 8 (or later). For Eclipse, if you use Java SE 9 or later, please delete module-info.java. (updated on January 7, 2023)

--------------------------------------------------------------------------------------------------------------------

AIToolKit installation

Run Eclipse, show the Package Explorer on the left, and set the workspace directory to any folder.
In Eclipse, click File->New->Java Project.
Enter your AI's project name and click Next->Finish.

Project name

Copy AIToolKit.jar to the workspace/your AI's project folder.
Right click your project in Package Explorer and click Refresh.
Right click your project again and select Properties.
Click Java Build Path and then Libraries.
Click Add JARs.
Select AIToolKit.jar under your AI's project and keep on clicking OK.
Right click the src folder under your project in Package Explorer and select New->Class.
Set the class name (please use the same name as your AI's project name here).

Assign the same name as the project's name

Next, add the interface AIInterface to this class by clicking Add on the right side.

Add button

Then, type AIInterface, select the matched item, click OK and Finish.

Name of interface

With this, the setup is done.

--------------------------------------------------------------------------------------------------------------------

Role of each important method

 

 

-- void close()
This method finalizes the AI. It runs only once, at the end of each game.

-- void getInformation(FrameData fd)
This method gets information about the game status of each frame. Such information is stored in the parameter fd. If fd.getRemainingTime() returns a negative value, the current round has not started yet. For more information on the data type, please check the Javadoc of AIToolKit.

-- int initialize(GameData gd, boolean player)
This method initializes the AI and is executed only once at the beginning of a game. It receives the data that never change during a game (GameData gd) and the flag of the player's side (boolean player, true for P1 and false for P2).
If there is anything that needs to be initialized, do it in this method. It should return 0 when such initialization finishes correctly, and an error code otherwise.

When you use the frameData received from getInformation(), you must always check that the condition "!frameData.emptyFlag && frameData.getRemainingTime() > 0" holds; otherwise, a NullPointerException can occur. You must also check the same condition when you use the CommandCenter class.
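For example, a per-frame guard might look like the following (a minimal sketch; frameData is assumed to be a field of your AI class updated in getInformation(), and in some versions the empty flag may be exposed through a getter instead, so check the Javadoc):

//----------------------//
@Override
public void processing() {
    // Do nothing while the round has not started or the frame data is still empty;
    // touching frameData without this guard can cause a NullPointerException.
    if (!frameData.emptyFlag && frameData.getRemainingTime() > 0) {
        // ... decide the next action here ...
    }
}
//----------------------//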

-- Key input()
The input method returns the key input decided by the AI. It is executed in each frame and returns a value of the Key type. Key has the following instance fields:

  • boolean U
  • boolean D
  • boolean L
  • boolean R
  • boolean A
  • boolean B
  • boolean C

The instance fields U, D, L, and R represent the direction keys input by the player, following the numeric-keypad notation. They are also used in combination with the instance fields A, B, and C for generating a skill. Their values are boolean, either true or false; when true, the corresponding key is being pressed.
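For example, an input() implementation that keeps holding the right direction key together with the A button might look like this (a minimal sketch written inside your AI class; it assumes Key has a public no-argument constructor, as in the sample AIs):

//----------------------//
@Override
public Key input() {
    Key key = new Key(); // all fields are false (no key pressed) by default
    key.R = true;        // hold the right direction key
    key.A = true;        // press the A button together with the direction
    return key;
}
//----------------------//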

-- void processing()
This method processes the AI's data; this is typically where the AI decides its next action. It is executed in each frame.

-- void roundEnd(int p1Hp, int p2Hp, int frames)
This method informs the AI of the result of each round. It is called when each round ends.

CHARACTER_ZEN, CHARACTER_GARNET, and CHARACTER_LUD

Note that ZEN is the only official character in the 2022 competition. Until 2021, ZEN, GARNET, and LUD were the only official characters in the competition; however, only the motion data of the first (ZEN, in 2021) and of the first two (ZEN and GARNET, in 2020 and earlier) were official, while the other characters' motion data were non-official and were changed in the final competition.

After initialization, the process will continue as
1. getInformation
2. processing
3. input
for each frame.

The method close will be executed when the game is over.

Please check the Javadoc for details.
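For reference, a bare-bones AI class that follows this flow might look like the sketch below. This is only a sketch: the package names (aiinterface, struct) follow recent versions of AIToolKit, the class name MyFirstAI is a placeholder (use the same name as your project), and AIInterface may declare additional methods in your version, so always check the Javadoc.

//----------------------//
import aiinterface.AIInterface;
import struct.FrameData;
import struct.GameData;
import struct.Key;

public class MyFirstAI implements AIInterface {

    private boolean player;      // true for P1, false for P2
    private GameData gameData;   // data that never change during a game
    private FrameData frameData; // game state of the latest frame
    private Key inputKey;        // key to be returned by input()

    @Override
    public int initialize(GameData gd, boolean player) {
        this.gameData = gd;
        this.player = player;
        this.inputKey = new Key();
        return 0; // 0 means the initialization finished correctly
    }

    @Override
    public void getInformation(FrameData fd) {
        this.frameData = fd;
    }

    @Override
    public void processing() {
        inputKey = new Key(); // release all keys by default
        // Decide an action only when valid frame data is available (see the note above).
        if (!frameData.emptyFlag && frameData.getRemainingTime() > 0) {
            inputKey.B = true; // e.g., keep pressing the B button
        }
    }

    @Override
    public Key input() {
        return inputKey;
    }

    @Override
    public void roundEnd(int p1Hp, int p2Hp, int frames) {
        // Called at the end of every round with the remaining HPs and the elapsed frames.
    }

    @Override
    public void close() {
        // Called once when the game is over; release resources here if needed.
    }
}
//----------------------//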

After you finish coding your AI:
1. Right click the project and select Export.
2. Select JAR file under the Java category and click Next.

Export

3. Then select the directory and file name of the exported file as shown in the following image. Please keep the jar file name the same as the AI class name.

Finish

4. After clicking Finish, your AI is ready.

--------------------------------------------------------------------------------------------------------------------

Test the AI code

Follow the instructions below to install DareFightingICE.
Place the downloaded file DareFightingICE.zip into any folder you want.
Right click anywhere in the empty space of the Eclipse Package Explorer and select New->Java Project.

Enter the project name, e.g., DareFightingICE, and click Finish.

 

Move the following files, available after uncompressing DareFightingICE.zip, to the project folder under the Eclipse workspace. Then right click the project and select Refresh.

 

Click the triangle on the left of the project and you will see something like the following image.

 

Then configure the Java Build Path as follows:
Right click the project and select Properties.

 

Select Java Build Path->Libraries->Add JARs.

 

Select the jar files that are underlined with red lines in the image below and add them to the build path.

 

After this, it will look like the following image.

Similarly, select the folder for your OS under /lib/native/ and add the jar files in that folder.

After this, if your OS is Windows, it will look like the following image.

Next, set up the run configuration according to the following instructions.
Right click the project and select Run As->Run Configurations.

 

Select New_configuration under Java Application and click Search.

Select Main - (default package) and click OK.

 

After placing the jar file of each of the AIs you want to run in the data/ai/ directory (some AIs may require another folder under data/aiData/; for this, please see the description of the Switch sample AI below), you can boot the game by clicking Run.
After booting, you can move the arrow cursor by pressing arrow keys.
When you press Z while the arrow cursor is at FIGHT, the game enters the fight-preparation screen.

The image below shows the fight-preparation screen.
In this screen, using arrow keys, you can choose input device (keyboard or AI) and a character for each player accordingly.
The name shown after CHARACTER 1 or 2 is the character for player 1 or 2. At present, such a character can be chosen among ZEN, GARNET, LUD, and KFM.

 

After the selection is done, to start the game move the cursor to PLAY and press Z.

 

After 3 rounds, the system will automatically output the score log of this match into log/point/ (viewable with any text editor) and the replay file into log/replay/. If you want to review any of the matches conducted in the current run, exit, boot the game again, use the arrow keys to select the corresponding replay file among REPLAY...., and finally press Z.

Here are some tips you might find useful. You can directly specify the program arguments in Eclipse as follows:
[-n number_of_games] [--c1 character_name_of_player_1] [--c2 character_name_of_player_2] [--a1 ai_name_of_player_1] [--a2 ai_name_of_player_2]
For example,
-n 10 --c1 ZEN --c2 ZEN --a1 AI1 --a2 AI2

You can also create a batch file in the folder where DareFightingICE has been installed, as shown below.

::
setlocal ENABLEDELAYEDEXPANSION
set PATH=%PATH%;./lib/native/windows
java -cp FightingICE.jar;./lib/lwjgl.jar;./lib/lwjgl_util.jar;./lib/gameLib.jar;./lib/fileLib.jar;./lib/jinput.jar;./lib/commons_csv.jar;./lib/javatuples-1.2.jar;./lib/py4j0.10.4.jar Main -n 10 --c1 ZEN --c2 ZEN --a1 AI_NAME_1 --a2 AI_NAME_2
endlocal
exit
::

Other useful options are as follows (a combined example is given after this list):
"-a" or "--all": Execute games for all AIs in \data\ai in a round-robin fashion
"-df": Output all information of P1 and P2 available in FrameData for each frame (for debugging your AI)
"-t": Start a round with Energy of both players set to 1000 (for testing the use of skills)
"-off": Disable logging to \log\point and \log\replay
"-del": Remove all old files in \log\point and \log\replay
"--limithp [P1HP] [P2HP]": Limit-HP Mode => Launch DareFightingICE with the HP mode used in the competition for both Standard and Speedrunning Leagues, where P1HP and P2HP are the initial HPs of P1 and P2, respectively.
"--grey-bg": Grey-background mode => This mode runs DareFightingICE with a grey background (without a background image), recommended when you use a visual-based AI, controlled by deep neural networks, etc. In the competition from 2017 to 2021, all games were run in this mode.
"--black-bg": Black-background mode => This mode runs DareFightingICE with a black background (without a background image),
"--fastmode": Fast mode => In this mode, the frame speed is not fixed to 60 FPS; the game proceeds to a next frame once the system gets inputs from both AIs. This mode might help you train your AI faster, but it will be not be used in the competition.
"--disable-window": No-window Mode => This mode runs DareFightingICE without showing the game screen. This mode might help you train your AI faster, but it will be not be used in the competition. Note that -disable-window doesn't actually disable the window, but rather just opens the window and doesn't display anything. In case you want to get the platform running on a Linux based server, please also check https://en.wikipedia.org/wiki/Xvfb. In addition, in this command, sounds will not be played through speakers, but audio data are still provided for your AI.
"--py4j": Python mode => If this argument is specified, a message "Waiting python to launch a game" will be shown in the game screen. You can then run a game or multiple games, each with different port number (see below), using a launching Python script. You can find sample launching Main~.py and sample Python AIs in the folder Python.
"--port [portNumber]": This is for setting the port number when you use Python. The default port number is 4242.
"--inverted-player [playerNumber]": Inverted-color mode => playerNumber is 1 and 2 for P1 and P2, respectively; if playerNumber is a number besides 1 or 2 (e.g., 0), the original character colors are used. This mode internally uses -- the colors shown on the game screen are not affected -- the inverted colors for a specified character and is recommended when you use getDisplayByteBufferAsBytes. This mode enables both characters to be distinguishable by their color differences even though they are the same character type, which should be helpful when you use a visual-based AI, controlled by deep neural networks, etc. In the competitions from 2017 to 2021, all games were run in "--inverted-player 1".
"--mute": Mute mode => In this mode, BGM and sound effects are muted. (added on June 20, 2017)
"--json": JSON mode => In this mode, game logs are output in JSON format. (added on June 20, 2017)
"--err-log": Err-log mode => This mode outputs the system's errors and the AI's logs to text files. (added on June 20, 2017)
"--slow": In this mode, a slow motion effect is shown at the round end.(added on March 19, 2020
"-f x": In this mode, the number of total frames is set to x for a round.(added on March 19, 2020)
"-r x": In this mode, the number of total rounds is set to x for a game.(added on March 19, 2020)

New useful options in version 5.0 or later are as follows:
"--blind-player 1|2|0": Limits the AI of player 1, player 2, or both (when 0 is given) to accessing only sound data. In the AI track of the 2022 competition, all games will be run in this mode. (added on February 26, 2022)

Here is a sample script to run DareFightingICE from Linux shell (not yet verified for Version 4.00 or later).
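In case that script is not at hand, the sketch below shows roughly what such a shell script might look like, assuming the same jar layout as the Windows batch file above; jar names and the native-library handling may need adjusting for your system.

#!/bin/sh
# Rough sketch of a Linux launch script (not verified; adjust paths to your installation).
java -cp FightingICE.jar:./lib/lwjgl.jar:./lib/lwjgl_util.jar:./lib/gameLib.jar:./lib/fileLib.jar:./lib/jinput.jar:./lib/commons_csv.jar:./lib/javatuples-1.2.jar:./lib/py4j0.10.4.jar \
  -Djava.library.path=./lib/native/linux \
  Main -n 10 --c1 ZEN --c2 ZEN --a1 AI_NAME_1 --a2 AI_NAME_2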

You can quit the current game and return to the game menu by pressing the Esc key. It might also be worth mentioning some manual controls (from P1's perspective):
-> + -> is DASH
<- + <- is BACK_STEP
<- is STAND_GUARD

--------------------------------------------------------------------------------------------------------------------

All sample AIs below that were made available before Jan 2018 do not operate on Version 4.00 or later. Please refer to the folders at the end of this page containing modified sample AIs for Version 4.00 or later, as well as the 2016 and 2017 competition entries modified for Version 4.00 or later.


The three sample AIs below can also be helpful for test runs. RandomAI performs motions or attacks randomly. CopyAI performs a motion or an attack previously conducted by its opponent AI. Switch switches between Random and Copy based on its performance in the previous game, whose information is stored in data/aiData/Switch/signal.txt. Please create the folder Switch below aiData and place signal.txt therein. Switch is also a good example of how to use file I/O as described in The rules-Competition.
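As an illustration of the file-I/O pattern Switch relies on, the sketch below reads and writes data/aiData/Switch/signal.txt from Java. This is a hypothetical helper for explanation only, not Switch's actual source; the content you store in signal.txt is up to you.

//----------------------//
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SignalFile {
    private static final Path SIGNAL = Paths.get("data", "aiData", "Switch", "signal.txt");

    // Read the signal written after the previous game (e.g., which strategy to use next).
    public static String readSignal() {
        try {
            return new String(Files.readAllBytes(SIGNAL), StandardCharsets.UTF_8).trim();
        } catch (IOException e) {
            return "Random"; // fall back to a default strategy if the file is missing
        }
    }

    // Record the signal for the next game; the folder data/aiData/Switch must already exist.
    public static void writeSignal(String signal) {
        try {
            Files.write(SIGNAL, signal.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            // Ignore write failures in this sketch.
        }
    }
}
//----------------------//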

------ Random action sample AI (Not workable for Versions 4.00 or later.) ------
------ Copy action sample AI (Not workable for Versions 4.00 or later.) ------

------ Switch sample AI (Not workable for Versions 4.00 or later.) ------

--------------------------------------------------------------------------------------------------------------------

To understand the usage of MotionData, please see MotionDataSample AI, which gives a very simple example of how to use the method CancelAbleFrame in the class MotionData. This AI does nothing but display the return value of CancelAbleFrame for its opponent's character.

------ MotionData sample AI (Not workable for Versions 4.00 or later.) ------

--------------------------------------------------------------------------------------------------------------------

MizunoAI predicts the next action of its opponent AI from the opponent's previous actions and the relative positions between the two AIs, using k-NN, simulates all its possible counter-actions, and then selects and performs the most effective one. A technical paper on MizunoAI, a competition paper at CIG 2014, is available here. Note that MizunoAI was originally designed for use in the case where both sides use the character KFM.

------ Mizuno Sample AI (ZEN version) (Not workable for Versions 4.00 or later.) ------

--------------------------------------------------------------------------------------------------------------------

JerryMizunoAI (a.k.a. ChuMizunoAI) combines fuzzy control with kNN prediction and simulation (forward model) to tackle the problem of "cold start" in MizunoAI. Its paper at the 77th National Convention of IPSJ (2015) is available here. Please note that the maximum number of frames (int simulationLimit) that can be simulated by the method simulate in class Simulator (used in JerryMizunoAI) is 60.
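As a rough illustration of how such a simulation is typically invoked (following the usage in the MCTS sample below; the getSimulator() accessor, the package names, and the action names here are assumptions to verify against the Javadoc of your AIToolKit version):

//----------------------//
// Inside your AI class; gameData, frameData, and player are assumed fields set up in
// initialize() and getInformation().
// (imports: java.util.Deque, java.util.LinkedList, enumerate.Action, struct.FrameData)
private FrameData predictNextFrames() {
    Deque<Action> myActions = new LinkedList<Action>();
    myActions.add(Action.STAND_B);      // the action my character will try
    Deque<Action> oppActions = new LinkedList<Action>();
    oppActions.add(Action.STAND_GUARD); // the action assumed for the opponent
    // The last argument is the number of frames to simulate (at most 60).
    return gameData.getSimulator().simulate(frameData, player, myActions, oppActions, 60);
}
//----------------------//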

------ JerryMizuno Sample AI (Not workable for Versions 4.00 or later.) ------

The presentation slides are available at the slideshare site below.
Applying fuzzy control in fighting game ai from ftgaic

--------------------------------------------------------------------------------------------------------------------

------ Simulator Package (Not workable for Versions 4.00 or later.) ------

This package, used in JerryMizunoAI, is based on the mechanism we use in the game for advancing the game states. However, it was made available in 2015; we recommend you instead use the Simulator class provided in the latest version of DareFightingICE.

--------------------------------------------------------------------------------------------------------------------

------ MctsAi (Not workable for Versions 4.00 or later.) ------

This is our sample AI, which we recommend you guys check. It implements Monte Carlo Tree Search. The source code in the above zip file has comments in Japanese, but you can get an English version here -> MctsAi.java and Node.java.

The presentation slides at GCCE 2016 are available at the slideshare site below.
MctsAi @ GCCE 2016 from ftgaic. Its paper is available here. We also recommend you check these slides: MctsAi from ftgaic, a more advanced paper, and its poster at ACE 2016.

--------------------------------------------------------------------------------------------------------------------

------ DisplayInfoAI (Not workable for Versions 4.00 or later.) ------(available on Feb 4, 2017)

This is another sample AI that we recommend you guys check. It implements a simple AI using visual information from the game screen, which is not delayed! In particular, this AI uses a method called getDisplayByteBufferAsBytes. For this method, we recommend specifying the arguments as 96, 64, and 1 (true), respectively, with which the response time to acquire this 96x64 grayscale image's byte information is less than 4 ms (confirmed on Windows).
Below is how to use this function in Java and Python.
//----------------------//
- In Java

@Override
public void getInformation(FrameData fd) {
    this.frameData = fd; // keep the latest frame data (frameData is assumed to be a field)
    // Obtain the screen pixel data in the form of byte[]
    // (a 96x64 grayscale image when the third argument is true)
    byte[] buffer = fd.getDisplayByteBufferAsBytes(96, 64, true);
}
//----------------------//
- In Python

buffer = self.fd.getDisplayByteBufferAsBytes(96, 64, True)
//----------------------//

--------------------------------------------------------------------------------------------------------------------

------ LoadTorchWeightAI (Not workable for Versions 4.00 or later.) ------(available on Mar 9, 2017)

This is another sample AI that we recommend you guys check. It implements a deep learning AI based on delayed game states. In particular, the weights of this AI were trained using Torch.

--------------------------------------------------------------------------------------------------------------------

------ BasicBot (Not workable for Versions 4.00 or later.) ------(available on Mar 30, 2017)

This is yet another sample AI, in Python, that we recommend you guys check. It implements a visual-based deep learning AI and is competition compatible. You can find another version, which is not competition compatible and was released on Mar 22, 2017, here. Both AIs were provided to us courtesy of the Cognition & Intelligence Lab at the Dept. of Computer Engineering, Sejong University, Korea.

--------------------------------------------------------------------------------------------------------------------

------ AnalysisTool (Not workable for Versions 4.00 or later.) ------(updated on August 30, 2017)

This is a simple tool for analyzing replay files. The presentation slides are available at the slideshare site below.
https://www.slideshare.net/ftgaic/introduction-to-the-replay-file-analysis-tool from ftgaic

--------------------------------------------------------------------------------------------------------------------

------ Sample AIs for Version 4.00 or Later ------(available on March 6, 2018)

------ 2016 Entries Modified for Version 4.00 or Later ------(available on March 6, 2018)

------ 2017 Entries Modified for Version 4.00 or Later ------(available on March 6, 2018)

--------------------------------------------------------------------------------------------------------------------

------ MultiHead AI ------(available on January 9, 2019)

This is a deep-learning AI, including source code, presented at CIG 2018. For a related video clip and the paper, please check this page.

--------------------------------------------------------------------------------------------------------------------