srbmihaicode committed
Commit 40d04ab · 1 Parent(s): ec391b4
Dockerfile ADDED
@@ -0,0 +1,13 @@
+ FROM python:3.8
+
+ WORKDIR /workspace
+
+ ADD . /workspace
+
+ RUN pip install -r requirements.txt
+
+ CMD [ "python" , "/workspace/app.py" ]
+
+ RUN chown -R 42420:42420 /workspace
+
+ ENV HOME=/workspace
README.md CHANGED
@@ -1,13 +1,231 @@
- ---
- title: Journal
- emoji: 💬
- colorFrom: yellow
- colorTo: purple
- sdk: gradio
- sdk_version: 5.0.1
- app_file: app.py
- pinned: false
- short_description: Journal App
- ---
-
- An example chatbot using [Gradio](https://gradio.app), [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/v0.22.2/en/index), and the [Hugging Face Inference API](https://huggingface.co/docs/api-inference/index).
+ # AI Deploy - Tutorial - Deploy an app for sentiment analysis with Hugging Face and Flask
+
+ > **Note** The full documentation is available [here](https://docs.ovh.com/gb/en/publiccloud/ai/deploy/tuto-flask-hugging-face-sentiment-analysis/).
+
+ **Last updated 3rd November, 2022.**
+
+ > **Note**
+ > AI Deploy is in `beta`. During the beta-testing phase, the infrastructure’s availability and data longevity are not guaranteed. Please do not use this service for applications that are in production, as this phase is not complete.
+ >
+ > AI Deploy is covered by **[OVHcloud Public Cloud Special Conditions](https://storage.gra.cloud.ovh.net/v1/AUTH_325716a587c64897acbef9a4a4726e38/contracts/d2a208c-Conditions_particulieres_OVH_Stack-WE-9.0.pdf)**.
+ >
+
+ ## Objective
+
+ The purpose of this tutorial is to show you how to deploy a web service for sentiment analysis on text using Hugging Face pretrained models.<br>
+ In order to do this, you will use Flask, an open-source micro framework for web development in Python. You will also learn how to build and use a custom Docker image for a Flask application.
+
+ Overview of the app:
+
+ ![Hugging Face Overview](images/flask-hugging-face-overview.png){.thumbnail}
+
+ For more information about Hugging Face, please visit <https://huggingface.co/>.
+
+ ## Requirements
+
+ - Access to the [OVHcloud Control Panel](https://www.ovh.com/auth/?action=gotomanager&from=https://www.ovh.co.uk/&ovhSubsidiary=GB);
+ - An AI Deploy project created inside a [Public Cloud project](https://www.ovhcloud.com/en-gb/public-cloud/) in your OVHcloud account;
+ - A [user for AI Deploy](https://docs.ovh.com/gb/en/publiccloud/ai/users/);
+ - [Docker](https://www.docker.com/get-started) installed on your local computer;
+ - Some knowledge about building images and writing a [Dockerfile](https://docs.docker.com/engine/reference/builder/).
+
+ We also suggest you do some tests to find out which [Hugging Face model](https://huggingface.co/models) is right for your use case. Find examples in our [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/notebooks/natural-language-processing/text-classification/hugging-face/sentiment-analysis-twitter).
+
+ ## Instructions
+
+ First, the tree structure of your folder should be as follows:
+
+ ![Flask tree structure](images/tree-flask-app.png)
+
+ Find more information about building a minimal Flask application [here](https://flask.palletsprojects.com/en/2.0.x/quickstart/#a-minimal-application).
+
+ ### Write the Flask application
+
+ Create a Python file named `app.py`.
+
+ Inside that file, import the required modules:
+
+ ```python
+ from flask import Flask, jsonify, render_template, request, make_response
+ import transformers
+ ```
+
+ Create the Flask app:
+
+ ```python
+ app = Flask(__name__)
+ ```
+
+ Load the Hugging Face models:
+
+ ```python
+ # create a python dictionary for your models: d = {<key>: <value>, <key>: <value>, ..., <key>: <value>}
+ dictOfModels = {"RoBERTa" : transformers.pipeline("sentiment-analysis", model="siebert/sentiment-roberta-large-english"), "BERT" : transformers.pipeline('sentiment-analysis', model="nlptown/bert-base-multilingual-uncased-sentiment")}
+ # create a list of keys to use them in the select part of the html code
+ listOfKeys = []
+ for key in dictOfModels:
+     listOfKeys.append(key)
+ ```
+
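As a side note, the key-collection loop above can be written more compactly, since iterating a dict yields its keys in insertion order; a minimal sketch using stand-in values (real transformers pipelines are not needed just to collect keys):

```python
# Stand-in values instead of loaded transformers pipelines; only the keys matter here.
dictOfModels = {"RoBERTa": "pipeline-1", "BERT": "pipeline-2"}

# Equivalent to the explicit for-loop: dict iteration preserves insertion order.
listOfKeys = list(dictOfModels)
print(listOfKeys)  # ['RoBERTa', 'BERT']
```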
+ Write the inference function:
+
+ ```python
+ def get_prediction(message, model):
+     # inference
+     results = model(message)
+     return results
+ ```
+
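Since `get_prediction` simply calls whatever model it is given, you can exercise it without downloading any weights; a minimal sketch with a hypothetical stand-in callable that mimics the `[{'label': ..., 'score': ...}]` shape a `sentiment-analysis` pipeline returns:

```python
def get_prediction(message, model):
    # inference
    results = model(message)
    return results

# Hypothetical stand-in for a transformers pipeline: any callable with the
# same input/output shape works.
def fake_model(text):
    return [{"label": "POSITIVE", "score": 0.75}]

print(get_prediction("great service", fake_model))  # [{'label': 'POSITIVE', 'score': 0.75}]
```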
+ Define the GET method:
+
+ ```python
+ @app.route('/', methods=['GET'])
+ def get():
+     # in the select we will have each key of the list in option
+     return render_template("home.html", len = len(listOfKeys), listOfKeys = listOfKeys)
+ ```
+
+ Define the POST method:
+
+ ```python
+ @app.route('/', methods=['POST'])
+ def predict():
+     message = request.form['message']
+     # choice of the model
+     results = get_prediction(message, dictOfModels[request.form.get("model_choice")])
+     print(f'User selected model : {request.form.get("model_choice")}')
+     my_prediction = f'The feeling of this text is {results[0]["label"]} with probability of {results[0]["score"]*100}%.'
+     return render_template('result.html', text = f'{message}', prediction = my_prediction)
+ ```
+
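For reference, a `sentiment-analysis` pipeline returns a list with one dict per input, each holding a `label` and a `score` between 0 and 1. A minimal sketch of the formatting step used in the POST handler, with a hard-coded result standing in for a real model call (the label text depends on the chosen model):

```python
# Illustrative result; a real call would be: results = model(message)
results = [{"label": "POSITIVE", "score": 0.75}]

my_prediction = f'The feeling of this text is {results[0]["label"]} with probability of {results[0]["score"]*100}%.'
print(my_prediction)  # The feeling of this text is POSITIVE with probability of 75.0%.
```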
+ Start your app:
+
+ ```python
+ if __name__ == '__main__':
+     # starting app
+     app.run(debug=True, host='0.0.0.0')
+ ```
+
+ ### Write the requirements.txt file for the application
+
+ The `requirements.txt` file lists all the modules needed to make the application work. It will be used when writing the `Dockerfile`.
+
+ ```console
+ Flask==1.1.2
+ transformers==4.4.2
+ torch==1.6.0
+ ```
+
+ Here we mainly discuss how to write the `app.py` code, the `requirements.txt` file and the `Dockerfile`. If you want to see the whole code, please refer to the [GitHub repository](https://github.com/ovh/ai-training-examples/tree/main/apps/flask/sentiment-analysis-hugging-face-app).
+
+ ### Write the Dockerfile for the application
+
+ Your `Dockerfile` should start with the `FROM` instruction, indicating the parent image to use. In our case we choose to start from a Python image:
+
+ ```console
+ FROM python:3.8
+ ```
+
+ Create the home directory and add your files to it:
+
+ ```console
+ WORKDIR /workspace
+ ADD . /workspace
+ ```
+
+ Install the Python modules listed in the `requirements.txt` file using a `pip install ...` command:
+
+ ```console
+ RUN pip install -r requirements.txt
+ ```
+
+ Define your default launch command to start the application:
+
+ ```console
+ CMD [ "python" , "/workspace/app.py" ]
+ ```
+
+ Give correct access rights to the **ovhcloud user** (`42420:42420`):
+
+ ```console
+ RUN chown -R 42420:42420 /workspace
+ ENV HOME=/workspace
+ ```
+
+ ### Build the Docker image from the Dockerfile
+
+ Launch the following command from the **Dockerfile** directory to build your application image:
+
+ ```console
+ docker build . -t sentiment_analysis_app:latest
+ ```
+
+ > **Note**
+ > The dot `.` argument indicates that your build context (the location of the **Dockerfile** and other needed files) is the current directory.
+ >
+ > The `-t` argument allows you to choose the identifier to give to your image. Usually image identifiers are composed of a **name** and a **version tag**, `<name>:<version>`. For this example we chose **sentiment_analysis_app:latest**.
+ >
+
+ ### Test it locally (optional)
+
+ Launch the following **Docker command** to run your application locally on your computer:
+
+ ```console
+ docker run --rm -it -p 5000:5000 --user=42420:42420 sentiment_analysis_app:latest
+ ```
+
+ > **Note**
+ > The `-p 5000:5000` argument indicates that you want to execute a port redirection from port **5000** of your local machine to port **5000** of the Docker container. Port **5000** is the default port used by **Flask** applications.
+ >
+
+ > **Warning**
+ > Don't forget the `--user=42420:42420` argument if you want to simulate the exact same behaviour that will occur on **AI Deploy apps**. It executes the Docker container as the specific OVHcloud user (user **42420:42420**).
+ >
+
+ Once started, your application should be available on `http://localhost:5000`.
+
+ ### Push the image into the shared registry
+
+ > **Warning**
+ > The shared registry of AI Deploy should only be used for testing purposes. Please consider attaching your own Docker registry. More information about this can be found [here](https://docs.ovh.com/gb/en/publiccloud/ai/training/add-private-registry/).
+ >
+
+ Find the address of your shared registry by launching this command:
+
+ ```console
+ ovhai registry list
+ ```
+
+ Log in to the shared registry with your usual OpenStack credentials:
+
+ ```console
+ docker login -u <user> -p <password> <shared-registry-address>
+ ```
+
+ Push the built image into the shared registry:
+
+ ```console
+ docker tag sentiment_analysis_app:latest <shared-registry-address>/sentiment_analysis_app:latest
+ docker push <shared-registry-address>/sentiment_analysis_app:latest
+ ```
+
+ ### Launch the AI Deploy app
+
+ The following command starts a new app running your Flask application:
+
+ ```console
+ ovhai app run --default-http-port 5000 --cpu 4 <shared-registry-address>/sentiment_analysis_app:latest
+ ```
+
+ > **Note**
+ > `--default-http-port 5000` indicates that the port to reach on the app URL is `5000`.
+ >
+ > `--cpu 4` indicates that we request 4 CPUs for the app.
+ >
+ > Consider adding the `--unsecure-http` attribute if you want your application to be reachable without any authentication.
+ >
+
+ ## Go further
+
+ - You can also deploy an Object Detection model with **Flask** by following this [tutorial](https://docs.ovh.com/gb/en/publiccloud/ai/deploy/web-service-yolov5/).
+ - Discover another tool for deploying AI models easily: **Gradio**. Refer to this [documentation](https://docs.ovh.com/gb/en/publiccloud/ai/deploy/tuto-gradio-sketch-recognition/).
app.py CHANGED
@@ -1,61 +1,42 @@
- from flask import Flask, request, jsonify
- from huggingface_hub import InferenceClient
-
- # Initialize Flask app and Hugging Face client
  app = Flask(__name__)
- client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")
-
- # Helper function to generate a response from the AI model
- def generate_response(message, history, system_message, max_tokens, temperature, top_p):
-     messages = [{"role": "system", "content": system_message}]
-
-     for val in history:
-         if val[0]:
-             messages.append({"role": "user", "content": val[0]})
-         if val[1]:
-             messages.append({"role": "assistant", "content": val[1]})
-
-     messages.append({"role": "user", "content": message})
-     response = ""
-
-     # Streaming response from the Hugging Face model
-     for message in client.chat_completion(
-         messages,
-         max_tokens=max_tokens,
-         stream=True,
-         temperature=temperature,
-         top_p=top_p,
-     ):
-         token = message.choices[0].delta.content
-         response += token
-
-     return response
-
- @app.route("/chat", methods=["POST"])
- def home():
-     return "Hi!"
- # API endpoint to handle requests
- @app.route("/chat", methods=["POST"])
- def chat():
-     try:
-         data = request.json
-         message = data.get("message", "")
-         history = data.get("history", [])
-         system_message = data.get("system_message", "You are a friendly chatbot.")
-         max_tokens = data.get("max_tokens", 512)
-         temperature = data.get("temperature", 0.7)
-         top_p = data.get("top_p", 0.95)
-
-         # Validate inputs
-         if not isinstance(history, list) or not all(isinstance(pair, list) for pair in history):
-             return jsonify({"error": "Invalid history format. It should be a list of [message, response] pairs."}), 400
-
-         # Generate AI response
-         response = generate_response(message, history, system_message, max_tokens, temperature, top_p)
-
-         return jsonify({"response": response})
-     except Exception as e:
-         return jsonify({"error": str(e)}), 500
-
- if __name__ == "__main__":
-     app.run(debug=True)

+ # import objects from the Flask module
+ from flask import Flask, jsonify, render_template, request, make_response
+ import transformers
+
+ # creating flask app
  app = Flask(__name__)
+
+ # create a python dictionary for your models: d = {<key>: <value>, <key>: <value>, ..., <key>: <value>}
+ dictOfModels = {"BERT" : transformers.pipeline('sentiment-analysis', model="nlptown/bert-base-multilingual-uncased-sentiment")} # feel free to add several models
+
+ listOfKeys = []
+ for key in dictOfModels:
+     listOfKeys.append(key)
+
+ # inference function
+ def get_prediction(message, model):
+     # inference
+     results = model(message)
+     return results
+
+ # get method
+ @app.route('/', methods=['GET'])
+ def get():
+     # in the select we will have each key of the list in option
+     return render_template("home.html", len = len(listOfKeys), listOfKeys = listOfKeys)
+
+
+ # post method
+ @app.route('/', methods=['POST'])
+ def predict():
+     message = request.form['message']
+
+     # choice of the model
+     results = get_prediction(message, dictOfModels[request.form.get("model_choice")])
+     print(f'User selected model : {request.form.get("model_choice")}')
+     my_prediction = f'The feeling of this text is {results[0]["label"]} with probability of {results[0]["score"]*100}%.'
+
+     return render_template('result.html', text = f'{message}', prediction = my_prediction)
+
+ if __name__ == '__main__':
+     # starting app
+     app.run(debug=True, host='0.0.0.0')
images/flask-hugging-face-overview.png ADDED
images/tree-flask-app.png ADDED
requirements.txt CHANGED
@@ -1,2 +1,5 @@
- huggingface_hub==0.25.2
- flask==3.1.0

+ Flask==2.1.0
+
+ transformers==4.24.0
+
+ torch==1.6.0
static/flag.png ADDED
static/style.css ADDED
@@ -0,0 +1,33 @@
+ html,
+ body {
+     height: 100%;
+ }
+
+ body {
+     text-align: left;
+     margin: 0px;
+     padding: 0px;
+     background-color: #ffffff;
+ }
+
+ form {
+     text-align: left;
+     height: 100%;
+ }
+
+ .container {
+     padding: 10px;
+ }
+
+ .text_bottom {
+     bottom: 0;
+ }
+
+ #demo {
+     color: #111;
+     font-family: 'Helvetica Neue', sans-serif;
+     font-size: 22px;
+     line-height: 24px;
+     margin: 0 0 24px;
+     text-justify: inter-word;
+ }
templates/home.html ADDED
@@ -0,0 +1,49 @@
+ <!doctype html>
+ <html lang="en">
+ <head>
+     <meta charset="utf-8">
+     <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
+     <link rel="stylesheet" href="//stackpath.bootstrapcdn.com/bootstrap/4.2.1/css/bootstrap.min.css" integrity="sha384-GJzZqFGwb1QTTN6wy59ffF1BuGJpLSa9DkKMp0DgiMDm4iYMj70gZWKYbI706tWS" crossorigin="anonymous">
+     <style>
+         .bd-placeholder-img {
+             font-size: 1.125rem;
+             text-anchor: start;
+         }
+
+         @media (min-width: 768px) {
+             .bd-placeholder-img-lg {
+                 font-size: 150%;
+             }
+         }
+     </style>
+     <link rel="stylesheet" href="/static/style.css">
+     <title>Sentiment analysis on Tweets using Hugging Face</title>
+ </head>
+ <body>
+     <form method=post enctype=multipart/form-data>
+         <img src="static/flag.png" width="100%">
+         <div style="height:80px"></div>
+         <div class="container">
+             <h3 style="color:#030d9b"><i>OVHcloud - Sentiment analysis on Tweets using Hugging Face</i></h3>
+             <hr>
+             <div style="height:50px"></div>
+             <h5>Write your text (in English): </h5>
+             <textarea name="message" rows="4" cols="50"></textarea>
+             <br/>
+             <div style="height:40px"></div>
+             <h5>Select a Hugging Face model: </h5>
+             <select name="model_choice">
+                 {%for i in range(0, len)%}
+                 <option>{{listOfKeys[i]}}</option>
+                 {%endfor%}
+             </select>
+             <div style="height:50px"></div>
+             <button class="btn btn-lg btn-primary btn-block" type="submit">Submit</button>
+             <div style="height:100px"></div>
+         </div>
+         <p align="center">If you want to know more about AI Training, go <a href="https://www.ovhcloud.com/fr/public-cloud/ai-training/">here</a>.</p>
+     </form>
+     <script src="//code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
+     <script src="//cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.6/umd/popper.min.js" integrity="sha384-wHAiFfRlMFy6i5SRaxvfOCifBUQy1xHdJ/yoi7FRNXMRBu5WHdZYu1hA6ZOblgut" crossorigin="anonymous"></script>
+     <script src="//stackpath.bootstrapcdn.com/bootstrap/4.2.1/js/bootstrap.min.js" integrity="sha384-B0UglyR+jN6CkvvICOB2joaf5I4l3gm9GU6Hc1og6Ls7i6U/mkkaduKaBhlAXv9k" crossorigin="anonymous"></script>
+ </body>
+ </html>
templates/result.html ADDED
@@ -0,0 +1,43 @@
+ <!doctype html>
+ <html lang="en">
+ <head>
+     <meta charset="utf-8">
+     <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
+     <link rel="stylesheet" href="//stackpath.bootstrapcdn.com/bootstrap/4.2.1/css/bootstrap.min.css" integrity="sha384-GJzZqFGwb1QTTN6wy59ffF1BuGJpLSa9DkKMp0DgiMDm4iYMj70gZWKYbI706tWS" crossorigin="anonymous">
+     <style>
+         .bd-placeholder-img {
+             font-size: 1.125rem;
+             text-anchor: start;
+         }
+
+         @media (min-width: 768px) {
+             .bd-placeholder-img-lg {
+                 font-size: 150%;
+             }
+         }
+     </style>
+     <link rel="stylesheet" href="/static/style.css">
+     <title>Sentiment analysis on Tweets using Hugging Face</title>
+ </head>
+ <body>
+     <form method=post enctype=multipart/form-data>
+         <img src="static/flag.png" width="100%">
+         <div style="height:80px"></div>
+         <div class="container">
+             <h3 style="color:#030d9b"><i>OVHcloud - Sentiment analysis on Tweets using Hugging Face</i></h3>
+             <hr>
+             <div style="height:50px"></div>
+             <h4 style="color:#030d9b">Your text:</h4>
+             <p id="demo">{{ text }}</p>
+             <div style="height:50px"></div>
+             <h4 style="color:#030d9b">Result:</h4>
+             <p id="demo">{{ prediction }}</p>
+         </div>
+         <div style="height:200px"></div>
+         <p align="center">If you want to know more about AI Training, go <a href="https://www.ovhcloud.com/fr/public-cloud/ai-training/">here</a>.</p>
+     </form>
+     <script src="//code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
+     <script src="//cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.6/umd/popper.min.js" integrity="sha384-wHAiFfRlMFy6i5SRaxvfOCifBUQy1xHdJ/yoi7FRNXMRBu5WHdZYu1hA6ZOblgut" crossorigin="anonymous"></script>
+     <script src="//stackpath.bootstrapcdn.com/bootstrap/4.2.1/js/bootstrap.min.js" integrity="sha384-B0UglyR+jN6CkvvICOB2joaf5I4l3gm9GU6Hc1og6Ls7i6U/mkkaduKaBhlAXv9k" crossorigin="anonymous"></script>
+ </body>
+ </html>
tree-flask-app.png ADDED