Update README.md
---
license: cc0-1.0
task_categories:
- sentence-similarity
language:
- en
pretty_name: Movie descriptors for Semantic Search
size_categories:
- 10K<n<100K
tags:
- movies
- embeddings
- semantic search
- films
- hpi
- workshop
---

# Dataset Card

This dataset is a subset of Kaggle's The Movies Dataset. It contains only the name, release year, and overview of a selection of movies from the original dataset.
It is intended as a toy dataset for learning about embeddings in a workshop run by the AI Service Center Berlin-Brandenburg at the Hasso Plattner Institute.

A larger version of this dataset is available [here](https://huggingface.co/datasets/mt0rm0/movie_descriptors).

## Dataset Details

### Dataset Description

The dataset has 28655 rows and 3 columns:

- 'name': the title of the movie
- 'release_year': the year the movie was released
- 'overview': a brief description of the movie, used for promotion

The source dataset was filtered to keep only movies with complete metadata in the required fields, a vote average of at least 6, more than 100 votes, and revenue over 2 million dollars.
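The filtering criteria (vote average of at least 6, more than 100 votes, revenue over 2 million dollars) can be sketched in pandas; the rows below are synthetic, purely for illustration:

```python
import pandas as pd

# Synthetic stand-in for the source metadata (illustration only)
df = pd.DataFrame({
    "title": ["Kept Movie", "Low Rating", "Few Votes", "Low Revenue"],
    "vote_average": [7.2, 5.1, 8.0, 6.5],
    "vote_count": [500, 900, 50, 300],
    "revenue": [5e7, 8e7, 3e7, 1e6],
})

# Keep only rows that satisfy all three criteria
kept = df.loc[(df.vote_average >= 6) & (df.vote_count > 100) & (df.revenue > 2e6)]
print(kept.title.tolist())  # → ['Kept Movie']
```

Only the first row passes all three checks; each of the others fails exactly one criterion.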

**Curated by:** [Mario Tormo Romero](https://huggingface.co/mt0rm0)

**Language(s) (NLP):** English

**License:** cc0-1.0

### Dataset Sources

This dataset is a subset of Kaggle's [The Movies Dataset](https://www.kaggle.com/datasets/rounakbanik/the-movies-dataset).
We used only the <kbd>movies_metadata.csv</kbd> file, extracted some of its features (see Dataset Description), and dropped the rows that were not complete.

The original dataset has a cc0-1.0 license, which we have kept.

## Uses

This is a toy dataset created for pedagogical purposes. It is used in the **Working with embeddings** workshop created and organized by the [AI Service Center Berlin-Brandenburg](https://hpi.de/kisz/) at the [Hasso Plattner Institute](https://hpi.de/).
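To illustrate the kind of semantic search the overview texts enable, here is a minimal sketch; it uses bag-of-words counts and cosine similarity as a stand-in for a real embedding model, and all titles and texts are made up for the example:

```python
import math
from collections import Counter

# Toy overviews standing in for the dataset's 'overview' column
overviews = {
    "Space Rescue": "astronaut stranded in space tries to return home",
    "Ocean Story": "a fish searches the ocean for his missing son",
    "Heist Night": "thieves plan a daring robbery of a city bank",
}

def vectorize(text):
    # Bag-of-words term counts as a crude stand-in for an embedding
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def search(query):
    # Return the movie whose overview is most similar to the query
    qv = vectorize(query)
    return max(overviews, key=lambda name: cosine(qv, vectorize(overviews[name])))

print(search("lost in space"))  # → Space Rescue
```

In the workshop, the count vectors would be replaced by dense embeddings of each overview, but the retrieval logic (rank by similarity to the query vector) stays the same.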

## Dataset Creation

### Curation Rationale

With this dataset we want to provide a fast way of obtaining the data required for our workshops, without having to download huge datasets that contain far more information than needed.

### Source Data

Our source is Kaggle's The Movies Dataset, so the information ultimately comes from the MovieLens Dataset. It covers movies released on or before July 2017.

#### Data Collection and Processing

The data was downloaded from [Kaggle](https://www.kaggle.com/datasets/rounakbanik/the-movies-dataset) as a zip file, from which the <kbd>movies_metadata.csv</kbd> file was extracted.
The data was processed with the following code:

```python
import pandas as pd

# load the csv file
df = pd.read_csv("movies_metadata.csv", low_memory=False)

# filter movies according to:
# - vote average of at least 6
# - more than 100 votes
# - revenue over 2 million dollars
df = df.loc[(df.vote_average >= 6) & (df.vote_count > 100) & (df.revenue > 2e6)]

# select the required columns, drop rows with missing values and
# reset the index
df = df.loc[:, ['title', 'release_date', 'overview']]
df = df.dropna(axis=0).reset_index(drop=True)

# make a new column with the release year
df.loc[:, 'release_year'] = pd.to_datetime(df.release_date).dt.year

# select the columns in the desired order
df = df.loc[:, ['title', 'release_year', 'overview']]

# save the data to parquet
df.to_parquet('descriptors_data.parquet')
```
#### Who are the source data producers?

The source dataset is an ensemble of data collected by [Rounak Banik](https://www.kaggle.com/rounakbanik) from TMDB and GroupLens.
In particular, the movie metadata was collected from the TMDB Open API, but the source dataset is not endorsed or certified by TMDb.