prgrmc committed
Commit fdcbf65 · 0 Parent(s)

First commit project

Files changed (13)
  1. .gitignore +259 -0
  2. README.md +200 -0
  3. assets/ascii_art.py +3 -0
  4. game/__init__.py +0 -0
  5. game/combat.py +26 -0
  6. game/dungeon.py +80 -0
  7. game/items.py +11 -0
  8. game/npc.py +11 -0
  9. game/player.py +59 -0
  10. helper.py +1066 -0
  11. main.py +183 -0
  12. requirements.txt +72 -0
  13. shared_data/Ethoria.json +1 -0
.gitignore ADDED
@@ -0,0 +1,259 @@
1
+ ### Python ###
2
+ # Byte-compiled / optimized / DLL files
3
+ __pycache__/
4
+ *.py[cod]
5
+ *$py.class
6
+
7
+ # C extensions
8
+ *.so
9
+
10
+ # Gradio
11
+ .gradio/
12
+
13
+ # deployment
14
+ DeploymentGuide.md
15
+
16
+ # Distribution / packaging
17
+ .Python
18
+ build/
19
+ develop-eggs/
20
+ dist/
21
+ downloads/
22
+ eggs/
23
+ .eggs/
24
+ lib/
25
+ lib64/
26
+ parts/
27
+ sdist/
28
+ var/
29
+ wheels/
30
+ share/python-wheels/
31
+ *.egg-info/
32
+ .installed.cfg
33
+ *.egg
34
+ MANIFEST
35
+
36
+ # PyInstaller
37
+ # Usually these files are written by a python script from a template
38
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
39
+ *.manifest
40
+ *.spec
41
+
42
+ # Installer logs
43
+ pip-log.txt
44
+ pip-delete-this-directory.txt
45
+
46
+ # Unit test / coverage reports
47
+ htmlcov/
48
+ .tox/
49
+ .nox/
50
+ .coverage
51
+ .coverage.*
52
+ .cache
53
+ nosetests.xml
54
+ coverage.xml
55
+ *.cover
56
+ *.py,cover
57
+ .hypothesis/
58
+ .pytest_cache/
59
+ cover/
60
+
61
+ # Translations
62
+ *.mo
63
+ *.pot
64
+
65
+ # Django stuff:
66
+ *.log
67
+ local_settings.py
68
+ db.sqlite3
69
+ db.sqlite3-journal
70
+
71
+ # Flask stuff:
72
+ instance/
73
+ .webassets-cache
74
+
75
+ # Scrapy stuff:
76
+ .scrapy
77
+
78
+ # Sphinx documentation
79
+ docs/_build/
80
+
81
+ # PyBuilder
82
+ .pybuilder/
83
+ target/
84
+
85
+ # Jupyter Notebook
86
+ .ipynb_checkpoints
87
+
88
+ # IPython
89
+ profile_default/
90
+ ipython_config.py
91
+
92
+ # pyenv
93
+ # For a library or package, you might want to ignore these files since the code is
94
+ # intended to run in multiple environments; otherwise, check them in:
95
+ # .python-version
96
+
97
+ # pipenv
98
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
99
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
100
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
101
+ # install all needed dependencies.
102
+ #Pipfile.lock
103
+
104
+ # poetry
105
+ # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
106
+ # This is especially recommended for binary packages to ensure reproducibility, and is more
107
+ # commonly ignored for libraries.
108
+ # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
109
+ #poetry.lock
110
+
111
+ # pdm
112
+ # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
113
+ #pdm.lock
114
+ # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
115
+ # in version control.
116
+ # https://pdm.fming.dev/#use-with-ide
117
+ .pdm.toml
118
+
119
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
120
+ __pypackages__/
121
+
122
+ # Celery stuff
123
+ celerybeat-schedule
124
+ celerybeat.pid
125
+
126
+ # SageMath parsed files
127
+ *.sage.py
128
+
129
+ # Environments
130
+ .env
131
+ .venv
132
+ env/
133
+ venv/
134
+ ENV/
135
+ env.bak/
136
+ venv.bak/
137
+ dung-env/
138
+
139
+ # Spyder project settings
140
+ .spyderproject
141
+ .spyproject
142
+
143
+ # Rope project settings
144
+ .ropeproject
145
+
146
+ # mkdocs documentation
147
+ /site
148
+
149
+ # mypy
150
+ .mypy_cache/
151
+ .dmypy.json
152
+ dmypy.json
153
+
154
+ # Pyre type checker
155
+ .pyre/
156
+
157
+ # pytype static type analyzer
158
+ .pytype/
159
+
160
+ # Cython debug symbols
161
+ cython_debug/
162
+
163
+ # PyCharm
164
+ # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
165
+ # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
166
+ # and can be added to the global gitignore or merged into this file. For a more nuclear
167
+ # option (not recommended) you can uncomment the following to ignore the entire idea folder.
168
+ #.idea/
169
+
170
+ ### Python Patch ###
171
+ # Poetry local configuration file - https://python-poetry.org/docs/configuration/#local-configuration
172
+ poetry.toml
173
+
174
+ # ruff
175
+ .ruff_cache/
176
+
177
+ # LSP config files
178
+ pyrightconfig.json
179
+
180
+ ### PythonVanilla ###
181
+ # Byte-compiled / optimized / DLL files
182
+
183
+ # C extensions
184
+
185
+ # Distribution / packaging
186
+
187
+ # Installer logs
188
+
189
+ # Unit test / coverage reports
190
+
191
+ # Translations
192
+
193
+ # pyenv
194
+ # For a library or package, you might want to ignore these files since the code is
195
+ # intended to run in multiple environments; otherwise, check them in:
196
+ # .python-version
197
+
198
+ # pipenv
199
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
200
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
201
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
202
+ # install all needed dependencies.
203
+
204
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow
205
+
206
+
207
+ ### venv ###
208
+ # Virtualenv
209
+ # http://iamzed.com/2009/05/07/a-primer-on-virtualenv/
210
+ [Bb]in
211
+ [Ii]nclude
212
+ [Ll]ib
213
+ [Ll]ib64
214
+ [Ll]ocal
215
+ [Ss]cripts
216
+ pyvenv.cfg
217
+ pip-selfcheck.json
218
+
219
+ ### Video ###
220
+ *.3g2
221
+ *.3gp
222
+ *.asf
223
+ *.asx
224
+ *.avi
225
+ *.flv
226
+ *.mkv
227
+ *.mov
228
+ *.mp4
229
+ *.mpg
230
+ *.ogv
231
+ *.rm
232
+ *.swf
233
+ *.vob
234
+ *.wmv
235
+ *.webm
236
+
237
+ ### VisualStudioCode ###
238
+ .vscode/*
239
+ !.vscode/settings.json
240
+ !.vscode/tasks.json
241
+ !.vscode/launch.json
242
+ !.vscode/extensions.json
243
+ !.vscode/*.code-snippets
244
+
245
+ # Local History for Visual Studio Code
246
+ .history/
247
+
248
+ # Built Visual Studio Code Extensions
249
+ *.vsix
250
+
251
+ ### VisualStudioCode Patch ###
252
+ # Ignore all local history of files
253
+ .history
254
+ .ionide
255
+
256
+ # End of https://www.toptal.com/developers/gitignore/api/visualstudiocode,python,pythonvanilla,venv,video
257
+
258
+ # Custom rules (everything added below won't be overriden by 'Generate .gitignore File' if you use 'Update' option)
259
+
README.md ADDED
@@ -0,0 +1,200 @@
1
+ # AI-Powered Dungeon Adventure Game
2
+
3
+ ## Table of Contents
4
+ - [Overview](#overview)
5
+ - [Technical Architecture](#technical-architecture)
6
+ - [Key Features](#key-features)
7
+ - [Game Mechanics](#game-mechanics)
8
+ - [AI/ML Implementation](#aiml-implementation)
9
+ - [Installation](#installation)
10
+ - [Usage](#usage)
11
+ - [Project Structure](#project-structure)
12
+ - [Technologies Used](#technologies-used)
13
+ - [Future Enhancements](#future-enhancements)
14
+
15
+ ## Overview
16
+ An advanced text-based adventure game powered by Large Language Models (LLMs) that demonstrates the practical application of AI/ML in interactive entertainment. The game features dynamic quest generation, intelligent NPC interactions, and content safety validation using state-of-the-art language models.
17
+
18
+ ## Technical Architecture
19
+ - **Core Engine**: Python-based game engine with modular architecture
20
+ - **AI Integration**: Hugging Face Transformers pipeline for text generation
21
+ - **UI Framework**: Gradio for interactive web interface
22
+ - **Safety Layer**: LLaMA Guard for content moderation (a cached-check sketch follows this list)
23
+ - **State Management**: Dynamic game state handling with quest progression
24
+ - **Memory Management**: Optimized for GPU utilization with 8-bit quantization
25
+
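The safety layer above wraps every generated response in a Llama Guard check before it reaches the player. A minimal sketch of the caching idea, mirroring `get_safety_response`/`is_safe` in `helper.py` — the `moderation_verdict` stand-in below is hypothetical; the real code calls Llama-Guard-3-1B through `transformers`:

```python
from functools import lru_cache

def moderation_verdict(text: str) -> str:
    """Hypothetical stand-in for the Llama Guard call made in helper.py."""
    flagged = ("gore", "slur")  # illustrative keywords only
    return "unsafe" if any(word in text.lower() for word in flagged) else "safe"

@lru_cache(maxsize=1000)
def check_safety(text: str) -> bool:
    # Cache verdicts so a repeated response is only scored once,
    # as helper.get_safety_response does with functools.lru_cache.
    return "safe" in moderation_verdict(text).lower().split()

print(check_safety("You see a quiet forest trail."))  # True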
26
+ ## Key Features
27
+ 1. **Dynamic Quest System**
28
+ - Procedurally generated quests based on player progress
29
+ - Multi-chain quest progression
30
+ - Experience-based leveling system
31
+
32
+ 2. **Intelligent Response Generation**
33
+ - Context-aware narrative responses
34
+ - Dynamic world state adaptation
35
+ - Natural language understanding (NLU)
36
+
37
+ 3. **Advanced Safety System**
38
+ - Real-time content moderation
39
+ - Multi-category safety checks
40
+ - Cached response validation
41
+
42
+ 4. **Inventory Management**
43
+ - Dynamic item tracking
44
+ - Automated inventory updates
45
+ - Natural language parsing for item detection
46
+
47
+ ## Game Mechanics
48
+ - **Dungeon Generation:** Randomly generated dungeons with obstacles.
49
+ - **Player and NPCs:** Players can move, fight NPCs, and use items.
50
+ - **Combat System:** Turn-based combat with simple AI decision-making (a standalone sketch follows this list).
51
+ - **Inventory Management:** Collect and use items to aid in your adventure.
52
+ - **Quest System:** Complete quests to earn rewards and progress through the game.
53
+
54
+
55
+ ## AI/ML Implementation
56
+ 1. **Language Models**
57
+ - Primary: LLaMA-3.2-3B-Instruct
58
+ - Safety: LLaMA-Guard-3-1B
59
+ - Optimized with 8-bit quantization (see the loading sketch after this section)
60
+
61
+ 2. **Natural Language Processing**
62
+ - Context embedding
63
+ - Response generation
64
+ - Content safety validation
65
+
66
+ 3. **Memory Optimization**
67
+ - GPU memory management
68
+ - Response caching
69
+ - Efficient token handling
70
+
71
+ ## Installation
72
+ ```bash
73
+ # Clone repository
74
+ git clone https://github.com/prgrmcode/ai-dungeon-game.git
75
+ cd ai-dungeon-game
76
+
77
+ # Create virtual environment
78
+ python -m venv dungeon-env
79
+ source dungeon-env/Scripts/activate # Windows (Git Bash); use dungeon-env\Scripts\activate in cmd/PowerShell
80
+ source dungeon-env/bin/activate # Linux/Mac
81
+
82
+ # Install dependencies (requirements.txt was generated with `pip freeze > requirements.txt`)
83
+ pip install -r requirements.txt
84
+ # Or install libraries directly:
85
+ pip install numpy matplotlib pygame
86
+ pip install python-dotenv
87
+ pip install transformers
88
+ pip install gradio
89
+ pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu118
90
+ pip install psutil
91
+ pip install 'accelerate>=0.26.0'
92
+
93
+ # Create a .env file in the root directory of the project and add your environment variables:
94
+ HUGGINGFACE_API_KEY=your_api_key_here
95
+ ```
96
+
97
+ ## Usage
98
+ ```bash
99
+ # Start the game
100
+ python main.py
101
+
102
+ # Access via web browser
103
+ http://localhost:7860
104
+ # or:
105
+ http://127.0.0.1:7860
106
+ ```
107
+
108
+ ## Project Structure
109
+ ```
110
+ ai_dungeon_game/
111
+ ├── assets/
112
+ │ └── ascii_art.py
113
+ ├── game/
114
+ │ ├── combat.py
115
+ │ ├── dungeon.py
116
+ │ ├── items.py
117
+ │ ├── npc.py
118
+ │ └── player.py
119
+ ├── shared_data/
120
+ │ └── Ethoria.json
121
+ ├── helper.py
122
+ └── main.py
123
+ ```
124
+
125
+ ## Technologies Used
126
+ - **Python 3.10+**
127
+ - **PyTorch**: Deep learning framework
128
+ - **Transformers**: Hugging Face's transformer models
129
+ - **Gradio**: Web interface framework
130
+ - **CUDA**: GPU acceleration
131
+ - **JSON**: Data storage
132
+ - **Logging**: Advanced error tracking
133
+
134
+ ## Skills Demonstrated
135
+ 1. **AI/ML Engineering**
136
+ - Large Language Model implementation
137
+ - Model optimization
138
+ - Prompt engineering
139
+ - Content safety systems
140
+
141
+ 2. **Software Engineering**
142
+ - Clean architecture
143
+ - Object-oriented design
144
+ - Error handling
145
+ - Memory optimization
146
+
147
+ 3. **Data Science**
148
+ - Natural language processing
149
+ - State management
150
+ - Data validation
151
+ - Pattern recognition
152
+
153
+ 4. **System Design**
154
+ - Modular architecture
155
+ - Scalable systems
156
+ - Memory management
157
+ - Performance optimization
158
+
159
+ ## Future Enhancements
160
+ 1. **Advanced AI Features**
161
+ - Multi-modal content generation
162
+ - Improved context understanding
163
+ - Dynamic difficulty adjustment
164
+
165
+ 2. **Technical Improvements**
166
+ - Distributed computing support
167
+ - Advanced caching mechanisms
168
+ - Real-time model updating
169
+
170
+ 3. **Gameplay Features**
171
+ - Multiplayer support
172
+ - Advanced combat system
173
+ - Dynamic world generation
174
+
175
+ 4. **Visual Enhancements**
176
+ - **Graphical User Interface (GUI):** Implement a GUI using Pygame to provide a more interactive and visually appealing experience.
177
+
178
+ - **2D/3D Graphics:** Use libraries like Pygame or Pyglet for 2D graphics.
179
+
180
+ - **Animations:** Add animations for player and NPC movements, combat actions, and other in-game events.
181
+
182
+ - **Visual Effects:** Implement visual effects such as particle systems for magic spells, explosions, and other dynamic events.
183
+
184
+ - **Map Visualization:** Create a visual representation of the dungeon map that updates as the player explores (the existing ASCII renderer, sketched after this list, is a starting point).
185
+
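A starting point for the map idea already exists: `Dungeon.display` returns an ASCII grid and `assets/ascii_art.display_dungeon` wraps it. A small illustrative call (note that importing `game.dungeon` also imports `helper`, which loads the language models, so this needs the full game environment):

```python
from game.dungeon import Dungeon
from assets.ascii_art import display_dungeon

dungeon = Dungeon(width=20, height=10)
print(display_dungeon(dungeon, player_position=(2, 2)))  # '@' marks the player
```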
186
+
187
+ ## Requirements
188
+ - Python 3.10+
189
+ - CUDA-capable GPU (recommended)
190
+ - 8GB+ RAM
191
+ - Hugging Face API key
192
+
193
+
194
+
195
+
196
+ This project showcases practical implementation of AI/ML in interactive entertainment,
197
+ demonstrating skills in LLM implementation, system design, and performance optimization.
198
+
199
+
200
+
assets/ascii_art.py ADDED
@@ -0,0 +1,3 @@
1
+ # Add ASCII art for visual representation of the dungeon and characters.
2
+ def display_dungeon(dungeon, player_position=None, level=0):
3
+ return dungeon.display(level, player_position)  # return the ASCII map string from Dungeon.display
game/__init__.py ADDED
File without changes
game/combat.py ADDED
@@ -0,0 +1,26 @@
1
+ # Create a combat system where the player and NPCs can engage in battles.
2
+ def combat(player, npc):
3
+ while player.health > 0 and npc.health > 0:
4
+ player_action = input("Choose action (attack/flee): ")
5
+ if player_action == "attack":
6
+ npc.health -= 10
7
+ print(f"You attacked {npc.name}. {npc.name} health: {npc.health}")
8
+ elif player_action == "flee":
9
+ print("You fled the battle.")
10
+ break
11
+ if npc.health > 0:
12
+ npc_action = npc.decide_action(player)
13
+ if npc_action == "attack":
14
+ player.health -= 10
15
+ print(f"{npc.name} attacked you. Your health: {player.health}")
16
+ elif npc_action == "flee":
17
+ print(f"{npc.name} fled the battle.")
18
+ break
19
+ adjust_difficulty(player, npc)
20
+
21
+
22
+ def adjust_difficulty(player, npc):
23
+ if player.health < 30:
24
+ npc.health += 10 # Increase NPC health to make it more challenging
25
+ elif player.health > 70:
26
+ npc.health -= 10 # Decrease NPC health to make it easier
game/dungeon.py ADDED
@@ -0,0 +1,80 @@
1
+ import random
2
+ from helper import run_action
3
+
4
+
5
+ class Dungeon:
6
+ def __init__(self, width, height):
7
+ self.width = width
8
+ self.height = height
9
+ self.grid = self.generate_dungeon()
10
+ self.items = []
11
+ self.npcs = []
12
+ self.story = ""
13
+ self.current_room = "entrance"
14
+
15
+ def generate_dungeon(self):
16
+ # Enhanced procedural generation
17
+ grid = [["#" for _ in range(self.width)] for _ in range(self.height)]  # start from solid walls; rooms and corridors are carved out below
18
+
19
+ # Create rooms
20
+ rooms = []
21
+ for _ in range(3):
22
+ room_w = random.randint(3, 5)
23
+ room_h = random.randint(3, 5)
24
+ x = random.randint(1, self.width - room_w - 1)
25
+ y = random.randint(1, self.height - room_h - 1)
26
+ rooms.append((x, y, room_w, room_h))
27
+
28
+ # Carve out room
29
+ for i in range(y, y + room_h):
30
+ for j in range(x, x + room_w):
31
+ grid[i][j] = "."
32
+
33
+ # Connect rooms with corridors
34
+ for i in range(len(rooms) - 1):
35
+ x1, y1 = rooms[i][0] + rooms[i][2] // 2, rooms[i][1] + rooms[i][3] // 2
36
+ x2, y2 = (
37
+ rooms[i + 1][0] + rooms[i + 1][2] // 2,
38
+ rooms[i + 1][1] + rooms[i + 1][3] // 2,
39
+ )
40
+
41
+ # Horizontal corridor
42
+ for x in range(min(x1, x2), max(x1, x2) + 1):
43
+ grid[y1][x] = "."
44
+
45
+ # Vertical corridor
46
+ for y in range(min(y1, y2), max(y1, y2) + 1):
47
+ grid[y][x2] = "."
48
+
49
+ return grid
50
+
51
+ def display(self, level=0, player_pos=None):
52
+ """Return ASCII representation of dungeon"""
53
+ grid = [row.copy() for row in self.grid]  # copy each row so the markers below don't modify the stored grid
54
+
55
+ # Mark player position
56
+ if player_pos:
57
+ x, y = player_pos
58
+ if 0 <= x < self.width and 0 <= y < self.height:
59
+ grid[y][x] = "@"
60
+
61
+ # Mark NPCs
62
+ for npc in self.npcs:
63
+ x, y = npc.position
64
+ if 0 <= x < self.width and 0 <= y < self.height:
65
+ grid[y][x] = "N"
66
+
67
+ # Mark items
68
+ for item in self.items:
69
+ x, y = item.position
70
+ if 0 <= x < self.width and 0 <= y < self.height:
71
+ grid[y][x] = "i"
72
+
73
+ # Convert to string
74
+ return "\n".join("".join(row) for row in grid)
75
+
76
+ def get_room_description(self, room_name, game_state):
77
+ """Get AI-generated room description based on game state"""
78
+ prompt = f"Describe the {room_name} of the dungeon in the kingdom of {game_state['kingdom']}"
79
+ # Use helper.run_action() to get AI description
80
+ return run_action(prompt, [], game_state)
game/items.py ADDED
@@ -0,0 +1,11 @@
1
+ # Implement an inventory system for the player to collect and use items.
2
+ class Item:
3
+ def __init__(self, name, effect):
4
+ self.name = name
5
+ self.effect = effect
6
+
7
+
8
+ def use_item(player, item):
9
+ if item.effect == "heal":
10
+ player.health = min(player.health + 20, getattr(player, "max_health", 100))  # heal, capped at max health
11
+ print(f"{player.name} used {item.name}. Health is now {player.health}")
game/npc.py ADDED
@@ -0,0 +1,11 @@
1
+ # Create NPCs with simple AI behaviors.
2
+ class NPC:
3
+ def __init__(self, name):
4
+ self.name = name
5
+ self.health = 100
6
+
7
+ def decide_action(self, player):
8
+ if self.health < 30:
9
+ return "flee"
10
+ else:
11
+ return "attack"
game/player.py ADDED
@@ -0,0 +1,59 @@
1
+ from helper import run_action
2
+ from typing import Dict, List
3
+
4
+
5
+ # Implement basic movement logic for the player.
6
+ class Player:
7
+ def __init__(self, name):
8
+ self.name = name
9
+ self.health = 100
10
+ self.max_health = 100
11
+ self.level = 1
12
+ self.exp = 0
13
+ self.exp_to_level = 100
14
+ self.position = [0, 0]
15
+ self.inventory = []
16
+ self.current_quest = None
17
+ self.completed_quests = []
18
+ self.skills = {"combat": 1, "stealth": 1, "magic": 1}
19
+
20
+ def gain_exp(self, amount: int) -> bool:
21
+ """Award experience and handle leveling"""
22
+ self.exp += amount
23
+ if self.exp >= self.exp_to_level:
24
+ self.level_up()
25
+ return True
26
+ return False
27
+
28
+ def level_up(self):
29
+ """Handle level up effects"""
30
+ self.level += 1
31
+ self.exp -= self.exp_to_level
32
+ self.exp_to_level = int(self.exp_to_level * 1.5)
33
+ self.max_health += 10
34
+ self.health = self.max_health
35
+
36
+ def add_quest(self, quest: Dict):
37
+ """Add a new quest to track"""
38
+ self.current_quest = quest
39
+
40
+ def complete_quest(self, quest: Dict):
41
+ """Complete a quest and gain rewards"""
42
+ if quest in self.completed_quests:
43
+ return False
44
+
45
+ self.completed_quests.append(quest)
46
+ self.gain_exp(quest.get("exp_reward", 50))
47
+ return True
48
+
49
+ def interact(self, npc, game_state):
50
+ """Get AI-generated dialogue with NPCs"""
51
+ prompt = f"You talk to {npc.name}. What do they say?"
52
+ dialogue = run_action(prompt, [], game_state)
53
+ return dialogue
54
+
55
+ def examine(self, item, game_state):
56
+ """Get AI-generated item description"""
57
+ prompt = f"You examine the {item.name} closely. What do you discover?"
58
+ description = run_action(prompt, [], game_state)
59
+ return description
helper.py ADDED
@@ -0,0 +1,1066 @@
1
+ import os
2
+ from dotenv import load_dotenv, find_dotenv
3
+ import json
+ import re  # used by parse_items_from_story below
4
+ import gradio as gr
5
+ import torch # first import torch then transformers
6
+
7
+ from transformers import pipeline
8
+ from transformers import AutoTokenizer, AutoModelForCausalLM
9
+ import logging
10
+ import psutil
11
+ from typing import Dict, Any, Optional, Tuple
12
+
13
+ # Add model caching and optimization
14
+ from functools import lru_cache
15
+ import torch.nn as nn
16
+
17
+ # Configure logging
18
+ logging.basicConfig(level=logging.INFO)
19
+ logger = logging.getLogger(__name__)
20
+
21
+
22
+ def get_available_memory():
23
+ """Get available GPU and system memory"""
24
+ gpu_memory = None
25
+ if torch.cuda.is_available():
26
+ gpu_memory = torch.cuda.get_device_properties(0).total_memory
27
+ system_memory = psutil.virtual_memory().available
28
+ return gpu_memory, system_memory
29
+
30
+
31
+ def load_env():
32
+ _ = load_dotenv(find_dotenv())
33
+
34
+
35
+ def get_huggingface_api_key():
36
+ load_env()
37
+ huggingface_api_key = os.getenv("HUGGINGFACE_API_KEY")
38
+ if not huggingface_api_key:
39
+ logging.error("HUGGINGFACE_API_KEY not found in environment variables")
40
+ raise ValueError("HUGGINGFACE_API_KEY not found in environment variables")
41
+ return huggingface_api_key
42
+
43
+
44
+ # Model configuration
45
+ MODEL_CONFIG = {
46
+ "main_model": {
47
+ "name": "meta-llama/Llama-3.2-3B-Instruct",
48
+ "dtype": torch.bfloat16,
49
+ "max_length": 512,
50
+ "device": "cuda" if torch.cuda.is_available() else "cpu",
51
+ },
52
+ "safety_model": {
53
+ "name": "meta-llama/Llama-Guard-3-1B",
54
+ "dtype": torch.bfloat16,
55
+ "max_length": 256,
56
+ "device": "cuda" if torch.cuda.is_available() else "cpu",
57
+ },
58
+ }
59
+
60
+
61
+ def initialize_model_pipeline(model_name, force_cpu=False):
62
+ """Initialize pipeline with memory management"""
63
+ try:
64
+ if force_cpu:
65
+ device = -1
66
+ else:
67
+ device = MODEL_CONFIG["main_model"]["device"]
68
+
69
+ # Use 8-bit quantization for memory efficiency
70
+ model = AutoModelForCausalLM.from_pretrained(
71
+ model_name,
72
+ load_in_8bit=True,
73
+ torch_dtype=MODEL_CONFIG["main_model"]["dtype"],
74
+ use_cache=True,
75
+ device_map="auto",
76
+ low_cpu_mem_usage=True,
77
+ )
78
+
79
+ model.config.use_cache = True
80
+
81
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
82
+
83
+ # Initialize pipeline
84
+ logger.info(f"Initializing pipeline with device: {device}")
85
+ generator = pipeline(
86
+ "text-generation",
87
+ model=model,
88
+ tokenizer=tokenizer,
89
+ # device=device,
90
+ # temperature=0.7,
91
+ model_kwargs={"low_cpu_mem_usage": True},
92
+ )
93
+
94
+ logger.info("Model Pipeline initialized successfully")
95
+ return generator, tokenizer
96
+
97
+ except ImportError as e:
98
+ logger.error(f"Missing required package: {str(e)}")
99
+ raise
100
+ except Exception as e:
101
+ logger.error(f"Failed to initialize pipeline: {str(e)}")
102
+ raise
103
+
104
+
105
+ # Initialize model pipeline
106
+ try:
107
+ # Use a smaller model for testing
108
+ # model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
109
+ # model_name = "google/gemma-2-2b" # Start with a smaller model
110
+ # model_name = "microsoft/phi-2"
111
+ # model_name = "meta-llama/Llama-3.2-1B-Instruct"
112
+ # model_name = "meta-llama/Llama-3.2-3B-Instruct"
113
+
114
+ model_name = MODEL_CONFIG["main_model"]["name"]
115
+
116
+ api_key = get_huggingface_api_key()
117
+
118
+ # Initialize the pipeline with memory management
119
+ generator, tokenizer = initialize_model_pipeline(model_name)
120
+
121
+ except Exception as e:
122
+ logger.error(f"Failed to initialize model: {str(e)}")
123
+ # Fallback to CPU if GPU initialization fails
124
+ try:
125
+ logger.info("Attempting CPU fallback...")
126
+ generator, tokenizer = initialize_model_pipeline(model_name, force_cpu=True)
127
+ except Exception as e:
128
+ logger.error(f"CPU fallback failed: {str(e)}")
129
+ raise
130
+
131
+
132
+ def load_world(filename):
133
+ with open(filename, "r") as f:
134
+ return json.load(f)
135
+
136
+
137
+ # Define system_prompt and model
138
+ system_prompt = """You are an AI Game Master. Write ONE response describing what the player sees/experiences.
139
+ CRITICAL Rules:
140
+ - Write EXACTLY 3 sentences maximum
141
+ - Use daily English language
142
+ - Start with "You see", "You hear", or "You feel"
143
+ - Don't use 'Elara' or 'she/he', only use 'you'
144
+ - Use only second person ("you")
145
+ - Never include dialogue after the response
146
+ - Never continue with additional actions or responses
147
+ - Never add follow-up questions or choices
148
+ - Never include 'User:' or 'Assistant:' in response
149
+ - Never include any note or these kinds of sentences: 'Note from the game master'
150
+ - Never use ellipsis (...)
151
+ - Never include 'What would you like to do?' or similar prompts
152
+ - Always finish with one real response
153
+ - End the response with a period"""
154
+
155
+
156
+ def get_game_state(inventory: Dict = None) -> Dict[str, Any]:
157
+ """Initialize game state with safe defaults and quest system"""
158
+ try:
159
+ # Load world data
160
+ world = load_world("shared_data/Ethoria.json")
161
+ character = world["kingdoms"]["Valdor"]["towns"]["Ravenhurst"]["npcs"][
162
+ "Elara Brightshield"
163
+ ]
164
+ print(f"character in get_game_state: {character}")
165
+
166
+ game_state = {
167
+ "name": world["name"],
168
+ "world": world["description"],
169
+ "kingdom": world["kingdoms"]["Valdor"]["description"],
170
+ "town_name": world["kingdoms"]["Valdor"]["towns"]["Ravenhurst"]["name"],
171
+ "town": world["kingdoms"]["Valdor"]["towns"]["Ravenhurst"]["description"],
172
+ "character_name": character["name"],
173
+ "character_description": character["description"],
174
+ "start": world["start"],
175
+ "inventory": inventory
176
+ or {
177
+ "cloth pants": 1,
178
+ "cloth shirt": 1,
179
+ "goggles": 1,
180
+ "leather bound journal": 1,
181
+ "gold": 5,
182
+ },
183
+ "player": None,
184
+ "dungeon": None,
185
+ "current_quest": None,
186
+ "completed_quests": [],
187
+ "exp": 0,
188
+ "level": 1,
189
+ "reputation": {"Valdor": 0, "Ravenhurst": 0},
190
+ }
191
+
192
+ # print(f"game_state in get_game_state: {game_state}")
193
+
194
+ # Extract required data with fallbacks
195
+ return game_state
196
+ except (FileNotFoundError, KeyError, json.JSONDecodeError) as e:
197
+ logger.error(f"Error loading world data: {e}")
198
+ # Provide default values if world loading fails
199
+ return {
200
+ "world": "Ethoria is a realm of seven kingdoms, each founded on distinct moral principles.",
201
+ "kingdom": "Valdor, the Kingdom of Courage",
202
+ "town": "Ravenhurst, a town of skilled hunters and trappers",
203
+ "character_name": "Elara Brightshield",
204
+ "character_description": "A sturdy warrior with shining silver armor",
205
+ "start": "Your journey begins in the mystical realm of Ethoria...",
206
+ "inventory": inventory
207
+ or {
208
+ "cloth pants": 1,
209
+ "cloth shirt": 1,
210
+ "goggles": 1,
211
+ "leather bound journal": 1,
212
+ "gold": 5,
213
+ },
214
+ "player": None,
215
+ "dungeon": None,
216
+ "current_quest": None,
217
+ "completed_quests": [],
218
+ "exp": 0,
219
+ "level": 1,
220
+ "reputation": {"Valdor": 0, "Ravenhurst": 0},
221
+ }
222
+
223
+
224
+ def generate_dynamic_quest(game_state: Dict) -> Dict:
225
+ """Generate varied quests based on progress and level"""
226
+ completed = len(game_state.get("completed_quests", []))
227
+ level = game_state.get("level", 1)
228
+
229
+ # Quest templates by type
230
+ quest_types = {
231
+ "combat": [
232
+ {
233
+ "title": "The Beast's Lair",
234
+ "description": "A fearsome {creature} has been terrorizing the outskirts of Ravenhurst.",
235
+ "objective": "Hunt down and defeat the {creature}.",
236
+ "creatures": [
237
+ "shadow wolf",
238
+ "frost bear",
239
+ "ancient wyrm",
240
+ "spectral tiger",
241
+ ],
242
+ },
243
+ ],
244
+ "exploration": [
245
+ {
246
+ "title": "Lost Secrets",
247
+ "description": "Rumors speak of an ancient {location} containing powerful artifacts.",
248
+ "objective": "Explore the {location} and uncover its secrets.",
249
+ "locations": [
250
+ "crypt",
251
+ "temple ruins",
252
+ "abandoned mine",
253
+ "forgotten library",
254
+ ],
255
+ },
256
+ ],
257
+ "mystery": [
258
+ {
259
+ "title": "Dark Omens",
260
+ "description": "The {sign} has appeared, marking the rise of an ancient power.",
261
+ "objective": "Investigate the meaning of the {sign}.",
262
+ "signs": [
263
+ "blood moon",
264
+ "mysterious runes",
265
+ "spectral lights",
266
+ "corrupted wildlife",
267
+ ],
268
+ },
269
+ ],
270
+ }
271
+
272
+ # Select quest type and template
273
+ quest_type = list(quest_types.keys())[completed % len(quest_types)]
274
+ template = quest_types[quest_type][0] # Could add more templates per type
275
+
276
+ # Fill in dynamic elements
277
+ if quest_type == "combat":
278
+ creature = template["creatures"][level % len(template["creatures"])]
279
+ title = template["title"]
280
+ description = template["description"].format(creature=creature)
281
+ objective = template["objective"].format(creature=creature)
282
+ elif quest_type == "exploration":
283
+ location = template["locations"][level % len(template["locations"])]
284
+ title = template["title"]
285
+ description = template["description"].format(location=location)
286
+ objective = template["objective"].format(location=location)
287
+ else: # mystery
288
+ sign = template["signs"][level % len(template["signs"])]
289
+ title = template["title"]
290
+ description = template["description"].format(sign=sign)
291
+ objective = template["objective"].format(sign=sign)
292
+
293
+ return {
294
+ "id": f"quest_{quest_type}_{completed}",
295
+ "title": title,
296
+ "description": f"{description} {objective}",
297
+ "exp_reward": 150 + (level * 50),
298
+ "status": "active",
299
+ "triggers": ["investigate", "explore", quest_type, "search"],
300
+ "completion_text": f"You've made progress in understanding the growing darkness.",
301
+ "next_quest_hint": "More mysteries await in the shadows of Ravenhurst.",
302
+ }
303
+
304
+
305
+ def generate_next_quest(game_state: Dict) -> Dict:
306
+ """Generate next quest based on progress"""
307
+ completed = len(game_state.get("completed_quests", []))
308
+ level = game_state.get("level", 1)
309
+
310
+ quest_chain = [
311
+ {
312
+ "id": "mist_investigation",
313
+ "title": "Investigate the Mist",
314
+ "description": "Strange mists have been gathering around Ravenhurst. Investigate their source.",
315
+ "exp_reward": 100,
316
+ "status": "active",
317
+ "triggers": ["mist", "investigate", "explore"],
318
+ "completion_text": "As you investigate the mist, you discover ancient runes etched into nearby stones.",
319
+ "next_quest_hint": "The runes seem to point to an old hunting trail.",
320
+ },
321
+ {
322
+ "id": "hunters_trail",
323
+ "title": "The Hunter's Trail",
324
+ "description": "Local hunters have discovered strange tracks in the forest. Follow them to their source.",
325
+ "exp_reward": 150,
326
+ "status": "active",
327
+ "triggers": ["tracks", "follow", "trail"],
328
+ "completion_text": "The tracks lead to an ancient well, where you hear strange whispers.",
329
+ "next_quest_hint": "The whispers seem to be coming from deep within the well.",
330
+ },
331
+ {
332
+ "id": "dark_whispers",
333
+ "title": "Whispers in the Dark",
334
+ "description": "Mysterious whispers echo from the old well. Investigate their source.",
335
+ "exp_reward": 200,
336
+ "status": "active",
337
+ "triggers": ["well", "whispers", "listen"],
338
+ "completion_text": "You discover an ancient seal at the bottom of the well.",
339
+ "next_quest_hint": "The seal bears markings of an ancient evil.",
340
+ },
341
+ ]
342
+
343
+ # Generate dynamic quests after initial chain
344
+ if completed >= len(quest_chain):
345
+ return generate_dynamic_quest(game_state)
346
+
347
+ # current_quest_index = min(completed, len(quest_chain) - 1)
348
+ # return quest_chain[current_quest_index]
349
+ return quest_chain[completed]
350
+
351
+
352
+ def check_quest_completion(message: str, game_state: Dict) -> Tuple[bool, str]:
353
+ """Check quest completion and handle progression"""
354
+ if not game_state.get("current_quest"):
355
+ return False, ""
356
+
357
+ quest = game_state["current_quest"]
358
+ triggers = quest.get("triggers", [])
359
+
360
+ if any(trigger in message.lower() for trigger in triggers):
361
+ # Award experience
362
+ exp_reward = quest.get("exp_reward", 100)
363
+ game_state["exp"] += exp_reward
364
+
365
+ # Update player level if needed
366
+ while game_state["exp"] >= 100 * game_state["level"]:
367
+ game_state["level"] += 1
368
+ game_state["player"].level = (
369
+ game_state["level"] if game_state.get("player") else game_state["level"]
370
+ )
371
+
372
+ level_up_text = (
373
+ f"\nLevel Up! You are now level {game_state['level']}!"
374
+ if game_state["exp"] >= 100 * (game_state["level"] - 1)
375
+ else ""
376
+ )
377
+
378
+ # Store completed quest
379
+ game_state["completed_quests"].append(quest)
380
+
381
+ # Generate next quest
382
+ next_quest = generate_next_quest(game_state)
383
+ game_state["current_quest"] = next_quest
384
+
385
+ # Update status display
386
+ if game_state.get("player"):
387
+ game_state["player"].exp = game_state["exp"]
388
+ game_state["player"].level = game_state["level"]
389
+
390
+ # Build completion message
391
+ completion_msg = f"""
392
+ Quest Complete: {quest['title']}! (+{exp_reward} exp){level_up_text}
393
+ {quest.get('completion_text', '')}
394
+
395
+ New Quest: {next_quest['title']}
396
+ {next_quest['description']}
397
+ {next_quest.get('next_quest_hint', '')}"""
398
+
399
+ return True, completion_msg
400
+
401
+ return False, ""
402
+
403
+
404
+ def parse_items_from_story(text: str) -> Dict[str, int]:
405
+ """Extract item changes from story text"""
406
+ items = {}
407
+
408
+ # Common item keywords and patterns
409
+ gold_pattern = r"(\d+)\s*gold"
410
+ items_pattern = (
411
+ r"(?:receive|find|given|hand|containing)\s+(?:a|an|the)?\s*(\d+)?\s*([\w\s]+)"
412
+ )
413
+
414
+ # Find gold amounts
415
+ gold_matches = re.findall(gold_pattern, text.lower())
416
+ if gold_matches:
417
+ items["gold"] = sum(int(x) for x in gold_matches)
418
+
419
+ # Find other items
420
+ item_matches = re.findall(items_pattern, text.lower())
421
+ for count, item in item_matches:
422
+ count = int(count) if count else 1
423
+ item = item.strip()
424
+ if item in items:
425
+ items[item] += count
426
+ else:
427
+ items[item] = count
428
+
429
+ return items
430
+
431
+
432
+ def update_game_inventory(game_state: Dict, story_text: str) -> str:
433
+ """Update inventory based on story and return update message"""
434
+ try:
435
+ items = parse_items_from_story(story_text)
436
+ update_msg = ""
437
+
438
+ for item, count in items.items():
439
+ if item in game_state["inventory"]:
440
+ game_state["inventory"][item] += count
441
+ else:
442
+ game_state["inventory"][item] = count
443
+ update_msg += f"\nReceived: {count} {item}"
444
+
445
+ return update_msg
446
+ except Exception as e:
447
+ logger.error(f"Error updating inventory: {e}")
448
+ return ""
449
+
450
+
451
+ def extract_response_after_action(full_text: str, action: str) -> str:
452
+ """Extract response text that comes after the user action line"""
453
+ try:
454
+ # Split into lines
455
+ lines = full_text.split("\n")
456
+
457
+ # Find index of line containing user action
458
+ action_line_index = -1
459
+ for i, line in enumerate(lines):
460
+ if f"user: {action}" in line:
461
+ action_line_index = i
462
+ break
463
+
464
+ if action_line_index >= 0:
465
+ # Get all lines after the action line
466
+ response_lines = lines[action_line_index + 1 :]
467
+ response = " ".join(line.strip() for line in response_lines if line.strip())
468
+
469
+ # Clean up any remaining markers
470
+ response = response.split("user:")[0].strip()
471
+ response = response.split("system:")[0].strip()
472
+
473
+ return response
474
+
475
+ return ""
476
+
477
+ except Exception as e:
478
+ logger.error(f"Error extracting response: {e}")
479
+ return ""
480
+
481
+
482
+ def run_action(message: str, history: list, game_state: Dict) -> str:
483
+ """Process game actions and generate responses with quest handling"""
484
+ try:
485
+ # Handle start game command
486
+ if message.lower() == "start game":
487
+ # # Generate initial quest
488
+ # initial_quest = {
489
+ # "title": "Investigate the Mist",
490
+ # "description": "Strange mists have been gathering around Ravenhurst. Investigate their source.",
491
+ # "exp_reward": 100,
492
+ # "status": "active",
493
+ # }
494
+ # game_state["current_quest"] = initial_quest
495
+ # Initialize first quest
496
+ initial_quest = generate_next_quest(game_state)
497
+ game_state["current_quest"] = initial_quest
498
+
499
+ start_response = f"""Welcome to {game_state['name']}. {game_state['world']}
500
+
501
+ {game_state['start']}
502
+
503
+ Currently in {game_state['town_name']}, in the kingdom of {game_state['kingdom']}.
504
+ {game_state['town']}
505
+
506
+
507
+ Current Quest: {initial_quest['title']}
508
+ {initial_quest['description']}
509
+
510
+ What would you like to do?"""
511
+ return start_response
512
+
513
+ # Verify game state
514
+ if not isinstance(game_state, dict):
515
+ logger.error(f"Invalid game state type: {type(game_state)}")
516
+ return "Error: Invalid game state"
517
+
518
+ # logger.info(f"Processing action with game state: {game_state}")
519
+ logger.info(f"Processing action with game state")
520
+
521
+ world_info = f"""World: {game_state['world']}
522
+ Kingdom: {game_state['kingdom']}
523
+ Town: {game_state['town']}
524
+ Character: {game_state['character_name']}
525
+ Current Quest: {game_state["current_quest"]['title']}
526
+ Quest Objective: {game_state["current_quest"]['description']}
527
+ Inventory: {json.dumps(game_state['inventory'])}"""
528
+
529
+ # # Enhanced system prompt for better response formatting
530
+ # enhanced_prompt = f"""{system_prompt}
531
+ # Additional Rules:
532
+ # - Always start responses with 'You ', 'You see' or 'You hear' or 'You feel'
533
+ # - Use ONLY second person perspective ('you', not 'Elara' or 'she/he')
534
+ # - Describe immediate surroundings and sensations
535
+ # - Keep responses focused on the player's direct experience"""
536
+
537
+ messages = [
538
+ {"role": "system", "content": system_prompt},
539
+ {"role": "user", "content": world_info},
540
+ ]
541
+
542
+ # Format chat history
543
+ if history:
544
+ for h in history:
545
+ if isinstance(h, tuple):
546
+ messages.append({"role": "assistant", "content": h[0]})
547
+ messages.append({"role": "user", "content": h[1]})
548
+
549
+ messages.append({"role": "user", "content": message})
550
+
551
+ # Convert messages to string format for pipeline
552
+ prompt = "\n".join([f"{msg['role']}: {msg['content']}" for msg in messages])
553
+
554
+ # Generate response
555
+ model_output = generator(
556
+ prompt,
557
+ max_new_tokens=len(tokenizer.encode(message))
558
+ + 120, # Set max_new_tokens based on input length
559
+ num_return_sequences=1,
560
+ # temperature=0.7, # More creative but still focused
561
+ repetition_penalty=1.2,
562
+ pad_token_id=tokenizer.eos_token_id,
563
+ )
564
+
565
+ # Extract and clean response
566
+ full_response = model_output[0]["generated_text"]
567
+ print(f"full_response in run_action: {full_response}")
568
+
569
+ response = extract_response_after_action(full_response, message)
570
+ print(f"response in run_action: {response}")
571
+
572
+ # Convert to second person
573
+ response = response.replace("Elara", "You")
574
+
575
+ # # Format response
576
+ # if not response.startswith("You"):
577
+ # response = "You see " + response
578
+
579
+ # Validate no cut-off sentences
580
+ if response.rstrip().endswith(("you also", "meanwhile", "suddenly", "...")):
581
+ response = response.rsplit(" ", 1)[0] # Remove last word
582
+
583
+ # Ensure proper formatting
584
+ response = response.rstrip("?").rstrip(".") + "."
585
+ response = response.replace("...", ".")
586
+
587
+ # Perform safety check before returning
588
+ safe = is_safe(response)
589
+ print(f"\nSafety Check Result: {'SAFE' if safe else 'UNSAFE'}")
590
+ logger.info(f"Safety check result: {'SAFE' if safe else 'UNSAFE'}")
591
+
592
+ if not safe:
593
+ logging.warning("Unsafe content detected - blocking response")
594
+ print("Unsafe content detected - Response blocked")
595
+ return "This response was blocked for safety reasons."
596
+
597
+ # # Add quest progress checks
598
+ # if game_state["current_quest"]:
599
+ # quest = game_state["current_quest"]
600
+ # # Check for quest completion keywords
601
+ # if any(
602
+ # word in message.lower() for word in ["investigate", "explore", "search"]
603
+ # ):
604
+ # if (
605
+ # "mist" in message.lower()
606
+ # and quest["title"] == "Investigate the Mist"
607
+ # ):
608
+ # game_state["player"].complete_quest(quest)
609
+ # response += "\n\nQuest Complete: Investigate the Mist! (+100 exp)"
610
+
611
+ if safe:
612
+ # Check for quest completion
613
+ quest_completed, quest_message = check_quest_completion(message, game_state)
614
+ if quest_completed:
615
+ response += quest_message
616
+
617
+ # Check for item updates
618
+ inventory_update = update_game_inventory(game_state, response)
619
+ if inventory_update:
620
+ response += inventory_update
621
+
622
+ # Validate response
623
+ return response if response else "You look around carefully."
624
+
625
+ except KeyError as e:
626
+ logger.error(f"Missing required game state key: {e}")
627
+ return "Error: Game state is missing required information"
628
+ except Exception as e:
629
+ logger.error(f"Error generating response: {e}")
630
+ return (
631
+ "I apologize, but I had trouble processing that command. Please try again."
632
+ )
633
+
634
+
635
+ def update_game_status(game_state: Dict) -> Tuple[str, str]:
636
+ """Generate updated status and quest display text"""
637
+ # Status text
638
+ status_text = (
639
+ f"Health: {game_state.get('player').health if game_state.get('player') else 100}/100\n"
640
+ f"Level: {game_state.get('level', 1)}\n"
641
+ f"Exp: {game_state.get('exp', 0)}/{100 * game_state.get('level', 1)}"
642
+ )
643
+
644
+ # Quest text
645
+ quest_text = "No active quest"
646
+ if game_state.get("current_quest"):
647
+ quest = game_state["current_quest"]
648
+ quest_text = f"{quest['title']}\n{quest['description']}"
649
+ if quest.get("next_quest_hint"):
650
+ quest_text += f"\n{quest['next_quest_hint']}"
651
+
652
+ return status_text, quest_text
653
+
654
+
655
+ def chat_response(message: str, chat_history: list, current_state: dict) -> tuple:
656
+ """Process chat input and return response with updates"""
657
+ try:
658
+ if not message.strip():
659
+ return chat_history, current_state, "", ""
660
+
661
+ # Get AI response
662
+ output = run_action(message, chat_history, current_state)
663
+
664
+ # Update chat history without status info
665
+ chat_history = chat_history or []
666
+ chat_history.append((message, output))
667
+
668
+ # # Create status text
669
+ # status_text = "Health: 100/100\nLevel: 1\nExp: 0/100"
670
+ # if current_state.get("player"):
671
+ # status_text = (
672
+ # f"Health: {current_state['player'].health}/{current_state['player'].max_health}\n"
673
+ # f"Level: {current_state['player'].level}\n"
674
+ # f"Exp: {current_state['player'].exp}/{current_state['player'].exp_to_level}"
675
+ # )
676
+
677
+ # quest_text = "No active quest"
678
+ # if current_state.get("current_quest"):
679
+ # quest = current_state["current_quest"]
680
+ # quest_text = f"{quest['title']}\n{quest['description']}"
681
+
682
+ # Update status displays
683
+ status_text, quest_text = update_game_status(current_state)
684
+
685
+ # Return tuple includes empty string to clear input
686
+ return chat_history, current_state, status_text, quest_text
687
+
688
+ except Exception as e:
689
+ logger.error(f"Error in chat response: {e}")
690
+ return chat_history, current_state, "", ""
691
+
692
+
693
+ def start_game(main_loop, game_state, share=False):
694
+ """Initialize and launch game interface"""
695
+ with gr.Blocks(theme=gr.themes.Soft()) as demo:
696
+ gr.Markdown("# AI Dungeon Adventure")
697
+
698
+ # Game state storage
699
+ state = gr.State(game_state)
700
+ history = gr.State([])
701
+
702
+ with gr.Row():
703
+ # Game display
704
+ with gr.Column(scale=3):
705
+ chatbot = gr.Chatbot(
706
+ height=550,
707
+ placeholder="Type 'start game' to begin",
708
+ )
709
+
710
+ # Input area with submit button
711
+ with gr.Row():
712
+ txt = gr.Textbox(
713
+ show_label=False,
714
+ placeholder="What do you want to do?",
715
+ container=False,
716
+ )
717
+ submit_btn = gr.Button("Submit", variant="primary")
718
+ clear = gr.ClearButton([txt, chatbot])
719
+
720
+ # Enhanced Status panel
721
+ with gr.Column(scale=1):
722
+ with gr.Group():
723
+ gr.Markdown("### Character Status")
724
+ status = gr.Textbox(
725
+ label="Status",
726
+ value="Health: 100/100\nLevel: 1\nExp: 0/100",
727
+ interactive=False,
728
+ )
729
+
730
+ quest_display = gr.Textbox(
731
+ label="Current Quest",
732
+ value="No active quest",
733
+ interactive=False,
734
+ )
735
+
736
+ inventory_data = [
737
+ [item, count]
738
+ for item, count in game_state.get("inventory", {}).items()
739
+ ]
740
+ inventory = gr.Dataframe(
741
+ value=inventory_data,
742
+ headers=["Item", "Quantity"],
743
+ label="Inventory",
744
+ interactive=False,
745
+ )
746
+
747
+ # Command suggestions
748
+ gr.Examples(
749
+ examples=[
750
+ "look around",
751
+ "continue the story",
752
+ "take sword",
753
+ "go to the forest",
754
+ ],
755
+ inputs=txt,
756
+ )
757
+
758
+ # def chat_response(
759
+ # message: str, chat_history: list, current_state: dict
760
+ # ) -> tuple:
761
+ # """Process chat input and return response with updates"""
762
+ # try:
763
+ # if not message.strip():
764
+ # return chat_history, current_state, "" # Only clear input
765
+
766
+ # # Get AI response
767
+ # output = run_action(message, chat_history, current_state)
768
+
769
+ # # Update chat history
770
+ # chat_history = chat_history or []
771
+ # chat_history.append((message, output))
772
+
773
+ # # Update status if player exists
774
+ # # Update displays
775
+ # status_text = (
776
+ # f"Health: {current_state['player'].health}/{current_state['player'].max_health}\n"
777
+ # f"Level: {current_state['player'].level}\n"
778
+ # f"Exp: {current_state['player'].exp}/{current_state['player'].exp_to_level}"
779
+ # )
780
+
781
+ # quest_text = "No active quest"
782
+ # if current_state["current_quest"]:
783
+ # quest = current_state["current_quest"]
784
+ # quest_text = f"{quest['title']}\n{quest['description']}"
785
+
786
+ # # Update inventory display
787
+ # inv_data = [
788
+ # [item, count]
789
+ # for item, count in current_state.get("inventory", {}).items()
790
+ # ]
791
+
792
+ # return (chat_history, current_state, status_text, quest_text)
793
+
794
+ # except Exception as e:
795
+ # logger.error(f"Error in chat response: {e}")
796
+ # return (
797
+ # chat_history + [(message, "Error processing command")],
798
+ # current_state,
799
+ # status.value,
800
+ # quest_display.value,
801
+ # )
802
+
803
+ # # Connect components
804
+ # submit_btn.click(
805
+ # chat_response,
806
+ # inputs=[txt, chatbot, state],
807
+ # # outputs=[chatbot, state, txt],
808
+ # outputs=[chatbot, state, status, quest_display],
809
+ # show_progress=True,
810
+ # )
811
+
812
+ # # txt.submit(
813
+ # # chat_response,
814
+ # # [txt, chatbot, state],
815
+ # # [chatbot, state, txt],
816
+ # # show_progress=True,
817
+ # # )
818
+
819
+ # txt.submit(
820
+ # chat_response,
821
+ # inputs=[txt, chatbot, state],
822
+ # outputs=[chatbot, state, status, quest_display], # Keep status separate
823
+ # show_progress=True,
824
+ # )
825
+
826
+ def submit_action(message, history, state):
827
+ # Process response
828
+ new_history, new_state, status_text, quest_text = chat_response(
829
+ message, history, state
830
+ )
831
+ # Clear input
832
+ return "", new_history, new_state, status_text, quest_text
833
+
834
+ submit_btn.click(
835
+ submit_action,
836
+ inputs=[txt, chatbot, state],
837
+ outputs=[txt, chatbot, state, status, quest_display],
838
+ )
839
+
840
+ txt.submit(
841
+ submit_action,
842
+ inputs=[txt, chatbot, state],
843
+ outputs=[txt, chatbot, state, status, quest_display],
844
+ )
845
+
846
+ demo.launch(share=share)
847
+
848
+
849
+ everyone_content_policy = {
850
+ "policy": """O1: Violence and Hate.
851
+ Should not
852
+ - Express hateful, derogatory, or demeaning sentiments against people based on sensitive personal characteristics.
853
+ - Use slurs based on personal characteristics
854
+ - Write gruesome / graphic violence.
855
+ Can
856
+ - Write tame fantasy violence
857
+ O2: Sexual Content.
858
+ Should not
859
+ - Write any somewhat sexual content that mentions sexual functions or sexual body parts.
860
+ - Write any content that would be inappropriate for kids.
861
+ Can
862
+ - Write romantic content (including kissing, flirting etc...)
863
+ O3: Self-Harm.
864
+ Should not
865
+ - Encourage people to harm themselves.
866
+ - Romanticize or condone self-harm or suicide.
867
+ - Write story depicting suicide.
868
+ O4: Profanity.
869
+ Should not
870
+ - Write with any profane language that wouldn't be appropriate for kids.
871
+ Can
872
+ - Write with kid appropriate language
873
+ O5: Regulated or Controlled Substances.
874
+ Should not
875
+ - Write stories that depicts, glorifies or encourages drug use."""
876
+ }
877
+
878
+
879
+ def init_safety_model(model_name, force_cpu=False):
880
+ """Initialize safety checking model with optimized memory usage"""
881
+ try:
882
+ if force_cpu:
883
+ device = -1
884
+ else:
885
+ device = MODEL_CONFIG["safety_model"]["device"]
886
+
887
+ # model_id = "meta-llama/Llama-Guard-3-8B"
888
+ # model_id = "meta-llama/Llama-Guard-3-1B"
889
+
890
+ safety_model = AutoModelForCausalLM.from_pretrained(
891
+ model_name,
892
+ torch_dtype=MODEL_CONFIG["safety_model"]["dtype"],
893
+ use_cache=True,
894
+ device_map="auto",
895
+ )
896
+ safety_model.config.use_cache = True
897
+
898
+ safety_tokenizer = AutoTokenizer.from_pretrained(model_name)
899
+ # Set pad token explicitly
900
+ safety_tokenizer.pad_token = safety_tokenizer.eos_token
901
+
902
+ logger.info(f"Safety model initialized successfully on {device}")
903
+ return safety_model, safety_tokenizer
904
+
905
+ except Exception as e:
906
+ logger.error(f"Failed to initialize safety model: {e}")
907
+ raise
908
+
909
+
910
+ # Initialize safety model pipeline
911
+ try:
912
+ safety_model_name = MODEL_CONFIG["safety_model"]["name"]
913
+
914
+ api_key = get_huggingface_api_key()
915
+
916
+ # Initialize the pipeline with memory management
917
+ safety_model, safety_tokenizer = init_safety_model(safety_model_name)
918
+
919
+ except Exception as e:
920
+ logger.error(f"Failed to initialize model: {str(e)}")
921
+ # Fallback to CPU if GPU initialization fails
922
+ try:
923
+ logger.info("Attempting CPU fallback...")
924
+ safety_model, safety_tokenizer = init_safety_model(
925
+ safety_model_name, force_cpu=True
926
+ )
927
+ except Exception as e:
928
+ logger.error(f"CPU fallback failed: {str(e)}")
929
+ raise
930
+
931
+
932
+ @lru_cache(maxsize=1000)
933
+ def get_safety_response(prompt: str) -> str:
934
+ """Cached safety check responses"""
935
+ with torch.no_grad():
936
+ inputs = safety_tokenizer(
937
+ prompt,
938
+ return_tensors="pt",
939
+ padding=True,
940
+ truncation=True,
941
+ max_length=MODEL_CONFIG["safety_model"]["max_length"],
942
+ ).to(safety_model.device)
943
+
944
+ output = safety_model.generate(
945
+ **inputs,
946
+ max_new_tokens=5,
947
+ temperature=0.1, # Lower temperature for more consistent output
948
+ pad_token_id=safety_tokenizer.pad_token_id,
949
+ eos_token_id=safety_tokenizer.eos_token_id,
950
+ )
951
+
952
+ return safety_tokenizer.decode(output[0], skip_special_tokens=True)
953
+
954
+
955
+ def is_safe(message: str) -> bool:
956
+ """Check if message content is safe"""
957
+ try:
958
+ logger.info("Performing safety check...")
959
+ # Build the prompt with embedded values
960
+ # prompt = f"""[INST] Task: Check if this content is safe according to our safety policy with the below categories.
961
+
962
+ # <BEGIN UNSAFE CONTENT CATEGORIES>
963
+ # {everyone_content_policy['policy']}
964
+ # <END UNSAFE CONTENT CATEGORIES>
965
+
966
+ # <CONTENT TO CHECK>
967
+ # {message}
968
+ # <END CONTENT TO CHECK>
969
+
970
+ # Provide your safety assessment for user in the above CONTENT TO CHECK section:
971
+ # - Reply ONLY with single word 'safe' or 'unsafe' [/INST]
972
+ # """
973
+
974
+ prompt = f"""[INST]Check if safe:
975
+ {message}
976
+ Reply 'safe' or 'unsafe'[/INST]
977
+ """
978
+
979
+ # Generate safety check on cuda
980
+
981
+ # with torch.no_grad():
982
+ # inputs = safety_tokenizer(
983
+ # prompt,
984
+ # return_tensors="pt",
985
+ # padding=True,
986
+ # truncation=True,
987
+ # )
988
+
989
+ # # Move inputs to correct device
990
+ # inputs = {k: v.to(device) for k, v in inputs.items()}
991
+
992
+ # output = safety_model.generate(
993
+ # **inputs,
994
+ # max_new_tokens=10,
995
+ # temperature=0.1, # Lower temperature for more consistent output
996
+ # pad_token_id=safety_tokenizer.pad_token_id, # Use configured pad token
997
+ # eos_token_id=safety_tokenizer.eos_token_id,
998
+ # do_sample=False,
999
+ # )
1000
+
1001
+ # result = safety_tokenizer.decode(output[0], skip_special_tokens=True)
1002
+ result = get_safety_response(prompt)
1003
+ print(f"Raw safety check result: {result}")
1004
+
1005
+ # # Extract response after prompt
1006
+ # if "[/INST]" in result:
1007
+ # result = result.split("[/INST]")[-1]
1008
+
1009
+ # # Clean response
1010
+ # result = result.lower().strip()
1011
+ # print(f"Cleaned safety check result: {result}")
1012
+ # words = [word for word in result.split() if word in ["safe", "unsafe"]]
1013
+
1014
+ # # Take first valid response word
1015
+ # is_safe = words[0] == "safe" if words else False
1016
+
1017
+ # print("Final Safety check result:", is_safe)
1018
+
1019
+ # The decoded response still contains the prompt, so require an exact "safe"
+ # token rather than a substring match; renamed to avoid shadowing the function name
+ verdict_is_safe = "safe" in result.lower().split()
+
+ logger.info(
+ f"Safety check completed - Result: {'SAFE' if verdict_is_safe else 'UNSAFE'}"
+ )
+ return verdict_is_safe
1025
+
1026
+ except Exception as e:
1027
+ logger.error(f"Safety check failed: {e}")
1028
+ return False
1029
+
1030
+
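The active path decodes the full sequence, so the string handed to the token check still contains the prompt. If that ever causes false matches, one possible alternative (a sketch only, not used by this commit; `get_safety_verdict` is a hypothetical helper relying on the same `safety_model`/`safety_tokenizer` globals) is to decode just the newly generated tokens:

```python
def get_safety_verdict(prompt: str) -> str:
    # Hypothetical variant of get_safety_response that strips the prompt tokens
    inputs = safety_tokenizer(prompt, return_tensors="pt").to(safety_model.device)
    with torch.no_grad():
        output = safety_model.generate(
            **inputs,
            max_new_tokens=5,
            pad_token_id=safety_tokenizer.pad_token_id,
        )
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]  # keep only the generated part
    return safety_tokenizer.decode(new_tokens, skip_special_tokens=True).strip().lower()
```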
1031
+ def detect_inventory_changes(game_state, output):
+ """Ask the story model which inventory items changed in the latest output."""
+ inventory = game_state["inventory"]
+ messages = [
+ {"role": "system", "content": system_prompt},
+ {"role": "user", "content": f"Current Inventory: {str(inventory)}"},
+ {"role": "user", "content": f"Recent Story: {output}"},
+ {"role": "user", "content": "Inventory Updates"},
+ ]
+
+ input_text = "\n".join([f"{msg['role']}: {msg['content']}" for msg in messages])
+ model_output = generator(input_text, num_return_sequences=1, temperature=0.0)
+ response = model_output[0]["generated_text"]
+ # Expects strict JSON with an "itemUpdates" list; json.loads raises otherwise
+ result = json.loads(response)
+ return result["itemUpdates"]
1045
+
1046
+
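`detect_inventory_changes` assumes the generator returns strict JSON carrying an `itemUpdates` list of `{name, change_amount}` entries. A hypothetical response of the expected shape:

```python
import json

example_response = (
    '{"itemUpdates": ['
    '{"name": "gold", "change_amount": -2}, '
    '{"name": "torch", "change_amount": 1}]}'
)
item_updates = json.loads(example_response)["itemUpdates"]
# -> [{'name': 'gold', 'change_amount': -2}, {'name': 'torch', 'change_amount': 1}]
```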
1047
+ def update_inventory(inventory, item_updates):
+ """Apply item count deltas to the inventory and build a readable summary."""
+ update_msg = ""
+ for update in item_updates:
+ name = update["name"]
+ change_amount = update["change_amount"]
+ if change_amount > 0:
+ if name not in inventory:
+ inventory[name] = change_amount
+ else:
+ inventory[name] += change_amount
+ update_msg += f"\nInventory: {name} +{change_amount}"
+ elif name in inventory and change_amount < 0:
+ inventory[name] += change_amount
+ update_msg += f"\nInventory: {name} {change_amount}"
+ # Remove items whose count has dropped to zero or below
+ if name in inventory and inventory[name] <= 0:
+ del inventory[name]
+ return update_msg
1064
+
1065
+
1066
+ logging.info("Finished loading helper module")
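A rough worked example of `update_inventory` (values invented; assumes zero-count items are dropped, as in the `<= 0` check above):

```python
inventory = {"gold": 5, "torch": 1}
msg = update_inventory(
    inventory,
    [{"name": "gold", "change_amount": -2}, {"name": "torch", "change_amount": -1}],
)
# inventory is now {"gold": 3}; the torch count reached zero and was removed
# msg == "\nInventory: gold -2\nInventory: torch -1"
```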
main.py ADDED
@@ -0,0 +1,183 @@
1
+ from game.dungeon import Dungeon
2
+ from game.player import Player
3
+ from game.npc import NPC
4
+ from game.combat import combat, adjust_difficulty
5
+ from game.items import Item, use_item
6
+ from assets.ascii_art import display_dungeon
7
+ from helper import (
8
+ get_game_state,
9
+ run_action,
10
+ start_game,
11
+ is_safe,
12
+ detect_inventory_changes,
13
+ update_inventory,
14
+ )
15
+
16
+ import pdb
17
+ import logging
18
+
19
+ logging.basicConfig(level=logging.INFO)
20
+ logger = logging.getLogger(__name__)
21
+
22
+
23
+ def main_loop(message, history, game_state):
24
+ """Main game loop that processes commands and returns responses"""
25
+
26
+ try:
27
+ print(f"\nProcessing command: {message}") # Debug print
28
+
29
+ # Get AI response
30
+ output = run_action(message, history, game_state)
31
+ logger.info(f"Generated response: {output}")
32
+ print(f"\nGenerated output: {output}") # Debug print
33
+
34
+ # Safety check
35
+ safe = is_safe(output)
36
+ print(f"\nSafety Check Result: {'SAFE' if safe else 'UNSAFE'}")
37
+ logger.info(f"Safety check result: {'SAFE' if safe else 'UNSAFE'}")
38
+
39
+ if not safe:
40
+ logger.warning("Unsafe content detected - blocking response")
+ print("Unsafe content detected - Response blocked")
43
+ return "This response was blocked for safety reasons.", history
44
+
45
+ # Update history with safe response
46
+ if not history:
47
+ history = []
48
+ history.append((message, output))
49
+
50
+ return output, history
51
+
52
+ except Exception as e:
53
+ logger.error(f"Error in main_loop: {str(e)}")
54
+ return "Error processing command", history
55
+
56
+
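Since `main_loop` only needs a message, a history list, and a game_state dict, it can be smoke-tested from a REPL without launching the interface. A sketch (the starting inventory is arbitrary):

```python
state = get_game_state(inventory={"gold": 5})
reply, history = main_loop("look around", [], state)
print(reply)  # either the generated scene or the safety-block message
if history:
    print(history[-1])  # ("look around", reply) when the response passed the safety check
```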
57
+ def main():
58
+ """Initialize game and start interface"""
59
+ try:
60
+ logger.info("Starting main function")
61
+ print("\nInitializing game...")
62
+ game_state = get_game_state(
63
+ inventory={
64
+ "cloth pants": 1,
65
+ "cloth shirt": 1,
66
+ "goggles": 1,
67
+ "leather bound journal": 1,
68
+ "gold": 5,
69
+ }
70
+ )
71
+ logger.debug(f"Initialized game state: {game_state}")
72
+
73
+ # Create and add game objects
74
+ dungeon = Dungeon(10, 10)
75
+ player = Player("Hero")
76
+
77
+ # Update game state with objects
78
+ game_state["dungeon"] = dungeon
79
+ game_state["player"] = player
80
+
81
+ # logger.info(f"Game state in main(): {game_state}")
82
+
83
+ # Start game interface
84
+ print("Starting game interface...")
85
+ start_game(main_loop, game_state, True)
86
+
87
+ except Exception as e:
88
+ logger.error(f"Error in main: {str(e)}")
89
+ raise
90
+
91
+
92
+ if __name__ == "__main__":
93
+ main()
94
+
95
+
96
+ # def main_loop(message, history):
97
+ # logging.info(f"main_loop called with message: {message}")
98
+
99
+ # # Initialize history if None
100
+ # history = history or []
101
+
102
+ # # Get AI response
103
+ # output = run_action(message, history, game_state)
104
+
105
+ # # Safety check
106
+ # safe = is_safe(output)
107
+ # if not safe:
108
+ # logging.error("Unsafe output detected")
109
+ # return "Invalid Output"
110
+
111
+ # # Format the output nicely
112
+ # output_lines = [output]
113
+
114
+ # # Handle movement and exploration
115
+ # if message.lower().startswith(("go", "move", "walk")):
116
+ # direction = message.split()[1]
117
+ # game_state["player"].move(direction, game_state["dungeon"])
118
+ # room_desc = game_state["dungeon"].get_room_description(
119
+ # game_state["dungeon"].current_room, game_state
120
+ # )
121
+ # output_lines.append(f"\n{room_desc}")
122
+
123
+ # # Handle NPC interactions
124
+ # elif message.lower().startswith("talk"):
125
+ # npc_name = message.split()[2]
126
+ # for npc in game_state["dungeon"].npcs:
127
+ # if npc.name.lower() == npc_name.lower():
128
+ # dialogue = game_state["player"].interact(npc, game_state)
129
+ # output_lines += f"\n{dialogue}"
130
+
131
+ # # Handle item interactions and inventory
132
+ # elif message.lower().startswith(("take", "pick up")):
133
+ # item_name = " ".join(message.split()[1:])
134
+ # for item in game_state["dungeon"].items:
135
+ # if item.name.lower() == item_name.lower():
136
+ # game_state["player"].inventory.append(item)
137
+ # game_state["dungeon"].items.remove(item)
138
+ # output += f"\nYou picked up {item.name}"
139
+ # item_desc = game_state["player"].examine(item, game_state)
140
+ # output_lines += f"\n{item_desc}"
141
+
142
+ # # Format final output
143
+ # final_output = "\n".join(output_lines)
144
+ # history.append((message, final_output))
145
+ # logging.info(f"main_loop output: {final_output}")
146
+
147
+ # return final_output, history
148
+
149
+
150
+ # def main():
151
+ # logging.info("Starting main function")
152
+
153
+ # try:
154
+ # # Initialize game state with error handling
155
+ # global game_state
156
+ # game_state = get_game_state(
157
+ # inventory={
158
+ # "cloth pants": 1,
159
+ # "cloth shirt": 1,
160
+ # "goggles": 1,
161
+ # "leather bound journal": 1,
162
+ # "gold": 5,
163
+ # }
164
+ # )
165
+
166
+ # # Verify game state initialization
167
+ # if not game_state:
168
+ # raise ValueError("Failed to initialize game state")
169
+
170
+ # # Create dungeon and populate with AI-generated content
171
+ # dungeon = Dungeon(10, 10)
172
+ # game_state["dungeon"] = dungeon
173
+
174
+ # # Create player and add to game state
175
+ # player = Player("Hero")
176
+ # game_state["player"] = player
177
+
178
+ # # Start game interface
179
+ # start_game(main_loop, True)
180
+
181
+ # except Exception as e:
182
+ # logging.error(f"Error in main: {str(e)}")
183
+ # raise
requirements.txt ADDED
@@ -0,0 +1,72 @@
1
+ accelerate==1.1.1
2
+ aiofiles==23.2.1
3
+ annotated-types==0.7.0
4
+ anyio==4.6.2.post1
5
+ bitsandbytes==0.45.0
6
+ certifi==2024.8.30
7
+ charset-normalizer==3.4.0
8
+ click==8.1.7
9
+ colorama==0.4.6
10
+ contourpy==1.3.1
11
+ cycler==0.12.1
12
+ exceptiongroup==1.2.2
13
+ fastapi==0.115.5
14
+ ffmpy==0.4.0
15
+ filelock==3.16.1
16
+ fonttools==4.55.0
17
+ fsspec==2024.10.0
18
+ gradio==5.7.1
19
+ gradio_client==1.5.0
20
+ h11==0.14.0
21
+ httpcore==1.0.7
22
+ httpx==0.28.0
23
+ huggingface-hub==0.26.3
24
+ idna==3.10
25
+ Jinja2==3.1.4
26
+ kiwisolver==1.4.7
27
+ markdown-it-py==3.0.0
28
+ MarkupSafe==2.1.5
29
+ matplotlib==3.9.3
30
+ mdurl==0.1.2
31
+ mpmath==1.3.0
32
+ networkx==3.2.1
33
+ numpy==2.1.3
34
+ orjson==3.10.12
35
+ packaging==24.2
36
+ pandas==2.2.3
37
+ pillow==11.0.0
38
+ psutil==6.1.0
39
+ pydantic==2.10.2
40
+ pydantic_core==2.27.1
41
+ pydub==0.25.1
42
+ Pygments==2.18.0
43
+ pyparsing==3.2.0
44
+ python-dateutil==2.9.0.post0
45
+ python-dotenv==1.0.1
46
+ python-multipart==0.0.12
47
+ pytz==2024.2
48
+ PyYAML==6.0.2
49
+ regex==2024.11.6
50
+ requests==2.32.3
51
+ rich==13.9.4
52
+ ruff==0.8.1
53
+ safehttpx==0.1.1
54
+ safetensors==0.4.5
55
+ semantic-version==2.10.0
56
+ shellingham==1.5.4
57
+ six==1.16.0
58
+ sniffio==1.3.1
59
+ starlette==0.41.3
60
+ sympy==1.13.1
61
+ tokenizers==0.20.3
62
+ tomlkit==0.12.0
63
+ torch==2.5.1+cu118
64
+ torchvision==0.20.1+cu118
65
+ tqdm==4.67.1
66
+ transformers==4.46.3
67
+ typer==0.14.0
68
+ typing_extensions==4.12.2
69
+ tzdata==2024.2
70
+ urllib3==2.2.3
71
+ uvicorn==0.32.1
72
+ websockets==12.0
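Note: `torch==2.5.1+cu118` and `torchvision==0.20.1+cu118` are CUDA 11.8 builds, which are typically installed from PyTorch's wheel index (e.g. with `--extra-index-url https://download.pytorch.org/whl/cu118`) rather than from plain PyPI; on CPU-only machines the plain `2.5.1`/`0.20.1` wheels should presumably be substituted.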
shared_data/Ethoria.json ADDED
@@ -0,0 +1 @@
1
+ {"name": "Ethoria", "description": "Ethoria is a realm of seven kingdoms, each founded on a distinct moral principle: Courage, Wisdom, Compassion, Justice, Honesty, Perseverance, and Forgiveness. These virtues are woven into the fabric of society, guiding the actions of rulers and citizens alike. The kingdoms' architecture, laws, and even magical energies are attuned to their respective principles, creating a world where morality is a tangible force that shapes the destiny of its inhabitants.", "kingdoms": {"Valdor": {"name": "Valdor", "description": "Valdor, the Kingdom of Courage, is a realm of fearless warriors and daring explorers. Led by the fearless Queen Lyra, Valdor's people are bred for battle, with a history of conquering untamed lands and taming the unknown. Their capital, the fortress city of Krael, is a marvel of defensive architecture, with walls that shimmer with a magical aura of bravery.", "world": "Ethoria", "towns": {"Brindlemark": {"name": "Brindlemark", "description": "Located in the rolling hills of Valdor's countryside, Brindlemark is a charming town of skilled horse breeders and trainers. The town's central square features a grand statue of a proud stallion, and its stables are renowned for producing the finest warhorses in the kingdom. Brindlemark's history is marked by its role in supplying the royal cavalry, earning it the nickname \"The Cradle of Valdor's Valor.\"", "world": "Ethoria", "kingdom": "Valdor", "npcs": {"Eira Shadowglow": {"name": "Eira Shadowglow", "description": "Eira is a mysterious, raven-haired huntress from Brindlemark. She wears leather armor adorned with small, intricate bones and carries an enchanted longbow that glows with a faint, Courage-infused light. By day, she tracks and trains the finest warhorses for the royal cavalry, but by night, she ventures into the surrounding woods, seeking to avenge her family's brutal slaughter at the hands of dark forces. Her deepest pain is the loss of her loved ones, and her greatest desire is to protect the innocent and bring justice to those who have escaped it.", "world": "Ethoria", "kingdom": "Valdor", "town": "Brindlemark"}, "Arin the Unyielding": {"name": "Arin the Unyielding", "description": "Arin is a towering, blond-haired blacksmith from Krael, the capital of Valdor. His imposing physique and fearless demeanor make him a natural fit for the kingdom's elite guard. He wears a suit of plate armor emblazoned with the emblem of Courage and wields a mighty warhammer that can shatter even the strongest defenses. Despite his unwavering bravery, Arin struggles with the weight of his own expectations, feeling that he can never live up to the legend of his heroic ancestors. His deepest desire is to forge a new legacy, one that will make his family proud.", "world": "Ethoria", "kingdom": "Valdor", "town": "Brindlemark"}, "Lysander Moonwhisper": {"name": "Lysander Moonwhisper", "description": "Lysander is a soft-spoken, silver-haired bard from the kingdom's traveling performer's guild. He wears elegant, embroidered robes and carries a lute that shines with a subtle, Courage-infused glow. With his enchanting music and captivating tales, he inspires the people of Valdor to embody the virtue of Courage. However, Lysander's charming facade hides a deep sense of insecurity, as he feels that his art is not truly valued in a kingdom that prizes martial prowess above all else. 
His greatest desire is to prove that even the gentlest of arts can be a powerful force for good.", "world": "Ethoria", "kingdom": "Valdor", "town": "Brindlemark"}}}, "Ravenhurst": {"name": "Ravenhurst", "description": "Nestled in the dark, misty forests of Valdor's north, Ravenhurst is a foreboding town of skilled hunters and trappers. The town's wooden buildings seem to blend seamlessly into the surrounding trees, and its people are masters of stealth and tracking. Ravenhurst's history is shrouded in mystery, with whispers of ancient pacts with the forest's mysterious creatures.", "world": "Ethoria", "kingdom": "Valdor", "npcs": {"Kael Darkhunter": {"name": "Kael Darkhunter", "description": "Kael is a tall, lean figure with piercing green eyes and jet-black hair, often dressed in dark leather armor and carrying an enchanted longbow. As a skilled hunter and tracker from Ravenhurst, Kael has earned a reputation for being able to find and eliminate any target. However, beneath their stoic exterior lies a deep-seated fear of failure and a burning desire to prove themselves as more than just a tool for the kingdom's bidding.", "world": "Ethoria", "kingdom": "Valdor", "town": "Ravenhurst"}, "Elara Brightshield": {"name": "Elara Brightshield", "description": "Elara is a sturdy, blonde-haired warrior with a warm smile and a suit of shining silver armor adorned with the emblem of Valdor. As a member of the Queen's personal guard, Elara is sworn to defend the kingdom and its people with her life. Yet, she secretly struggles with the weight of her responsibilities and the pressure to live up to her family's legacy of bravery, all while longing for a sense of freedom and autonomy.", "world": "Ethoria", "kingdom": "Valdor", "town": "Ravenhurst"}, "Cormac Stonefist": {"name": "Cormac Stonefist", "description": "Cormac is a gruff, rugged man with a thick beard and a missing eye, often clad in worn, earth-toned robes. As a master stonemason and architect, Cormac has spent years honing his craft in the service of Valdor, designing and building structures that embody the kingdom's values of courage and strength. However, Cormac's rough exterior hides a deep sense of loss and regret, stemming from a tragic accident that cost him his eye and his sense of purpose.", "world": "Ethoria", "kingdom": "Valdor", "town": "Ravenhurst"}}}, "Emberhaven": {"name": "Emberhaven", "description": "Perched on the edge of a volcanic region, Emberhaven is a town of fiery passion and innovation. Its people are master craftsmen, harnessing the region's geothermal energy to forge powerful magical artifacts. The town's central forge is a marvel of engineering, with flames that burn bright blue and seem to imbue the air with an aura of courage. Emberhaven's history is marked by its role in supplying the kingdom's warriors with enchanted weapons and armor.", "world": "Ethoria", "kingdom": "Valdor", "npcs": {"Thrain Blackiron": {"name": "Thrain Blackiron", "description": "Thrain is a stout dwarf with a thick beard and a missing eye, replaced by a gleaming silver prosthetic. He's a master blacksmith in Emberhaven, renowned for crafting weapons that seem to hold a spark of the town's fiery passion. 
Despite his gruff exterior, Thrain harbors a deep pain - the loss of his family in a tragic accident - and desires to create a masterpiece that will bring him solace and redemption.", "world": "Ethoria", "kingdom": "Valdor", "town": "Emberhaven"}, "Eluned Starweaver": {"name": "Eluned Starweaver", "description": "Eluned is an ethereal, silver-haired woman with eyes that shimmer like the stars. She's a skilled astronomer and cartographer in Valdor's capital, Krael, tasked with charting the movements of celestial bodies and their influence on the kingdom's magical energies. Eluned's quiet confidence hides a deeper pain - the weight of her family's expectations - and a desire to uncover a hidden truth about the ancient magic that governs the realm.", "world": "Ethoria", "kingdom": "Valdor", "town": "Emberhaven"}, "Kieran Emberwing": {"name": "Kieran Emberwing", "description": "Kieran is a lithe, agile young man with wings of pure flame tattooed on his arms. He's a messenger and courier in Emberhaven, using his incredible speed and agility to deliver vital information across the kingdom. Kieran's carefree exterior belies a deeper pain - the burden of his family's dark past - and a desire to prove himself as more than just a messenger, to become a true hero of Valdor.", "world": "Ethoria", "kingdom": "Valdor", "town": "Emberhaven"}}}}}, "Elyria": {"name": "Elyria", "description": "Elyria, the Kingdom of Wisdom, is a land of ancient knowledge and mystical discovery. Ruled by the enigmatic Sage-King Arin, Elyria's scholars and mages delve deep into the mysteries of the cosmos, seeking to unlock the secrets of the universe. Their capital, the city of Luminaria, is a labyrinth of libraries, observatories, and laboratories, where the pursuit of wisdom is paramount.", "world": "Ethoria", "towns": {"Crystalbrook": {"name": "Crystalbrook", "description": "Located in the heart of Elyria's Crystal Mountains, Crystalbrook is a town of skilled crystal craftsmen and gemstone miners. The town's architecture is a marvel, with buildings and bridges made from intricately carved crystal that refract and reflect light. The town's history is marked by a centuries-old rivalry with the neighboring dwarven clans, who also seek to claim the mountains' precious resources.", "world": "Ethoria", "kingdom": "Elyria"}, "Mirabel": {"name": "Mirabel", "description": "Situated on the banks of Elyria's largest lake, Mirabel is a town of master shipwrights and aquatic mages. The town's wooden buildings seem to grow organically from the lake's shore, with curved roofs that evoke the shape of seashells. Mirabel's history is tied to the mysterious Lake Guardians, ancient beings who are said to have granted the town's founders the secrets of the lake's depths.", "world": "Ethoria", "kingdom": "Elyria"}, "Argentum": {"name": "Argentum", "description": "Argentum is a town of master silversmiths and inventors, nestled in the rolling hills of Elyria's Silvermist region. The town's buildings are a mix of gleaming silver spires and clockwork contraptions, with intricate gears and pendulums that seem to come alive. Argentum's history is marked by a series of brilliant innovators who have pushed the boundaries of science and magic, earning the town the nickname \"The Clockwork Capital.\"", "world": "Ethoria", "kingdom": "Elyria"}}}, "Calonia": {"name": "Calonia", "description": "Calonia, the Kingdom of Compassion, is a realm of healers and caregivers, where empathy and kindness are the guiding principles. 
Led by the benevolent King Maric, Calonia's people are dedicated to the welfare of all living beings, with a focus on medicine, charity, and social justice. Their capital, the city of Seren, is a haven of peace and tranquility, with gardens and hospitals that radiate a soothing aura of compassion.", "world": "Ethoria", "towns": {"Luminaria": {"name": "Luminaria", "description": "Located in the rolling hills of Calonia's countryside, Luminaria is a town of skilled artisans and inventors, where compassion meets innovation. The town is famous for its intricate clockwork devices and luminescent glasswork, which illuminate the streets and homes. The central square features a grand clock tower, said to have been designed by the kingdom's founder, King Maric himself.", "world": "Ethoria", "kingdom": "Calonia"}, "Verdanza": {"name": "Verdanza", "description": "Nestled between the Great River Calon and the Whispering Woods, Verdanza is a thriving agricultural town, where the people of Calonia cultivate the land with love and care. The town is known for its vibrant marketplaces, overflowing with fresh produce, exotic spices, and rare herbs. The ancient Tree of Life, a massive oak said to hold the secrets of the forest, stands at the town's heart, its branches sheltering the town hall and the sacred Grove of Healing.", "world": "Ethoria", "kingdom": "Calonia"}, "Corvallis": {"name": "Corvallis", "description": "Perched on the rugged coast of Calonia, Corvallis is a seaside town of fishermen, sailors, and shipwrights, where the kingdom's compassion extends to the creatures of the sea. The town's bustling harbor is home to a variety of marine life, and its people are skilled in the art of aquamancy, using their magical abilities to protect and preserve the ocean's wonders. The iconic Lighthouse of the Ancients, said to have been built by the sea gods themselves, guides ships safely into the harbor.", "world": "Ethoria", "kingdom": "Calonia"}}}}, "start": "You are Elara Brightshield, a sturdy, blonde-haired warrior with a warm smile and a suit of shining silver armor adorned with the emblem of Valdor. You stand tall, your armor gleaming in the dim light of Ravenhurst's misty forest surroundings. You feel the weight of your responsibilities as a member of the Queen's personal guard, sworn to defend the kingdom and its people with your life. Yet, beneath your confident exterior, you struggle with the pressure to live up to your family's legacy of bravery, all while longing for a sense of freedom and autonomy. You gaze out at the dark, misty forest, wondering what secrets lie hidden in the shadows."}