|
--- |
|
license: apache-2.0 |
|
task_categories: |
|
- question-answering |
|
- text-generation |
|
language: |
|
- en |
|
tags: |
|
- vector search |
|
- semantic search |
|
- retrieval augmented generation |
|
pretty_name: HackerNoon Tech News with Embeddings
|
size_categories: |
|
- 100K<n<1M |
|
--- |
|
|
|
## Overview |
|
[HackerNoon](https://huggingface.co/datasets/HackerNoon/tech-company-news-data-dump/tree/main) curated more than 7 million of the internet's most-cited tech company news articles and blog posts, covering the 3,000+ most valuable tech companies in 2022 and 2023.
|
|
|
To further enhance the dataset's utility, an `embedding` field has been added to every data point: a vector embedding generated with the OpenAI `text-embedding-3-small` model at an embedding dimension of 256.
|
|
|
**Note that this vector-embedding extension covers only a portion of the original dataset: 1,576,528 data points, a selected subset enriched for vector search and other advanced analytical workloads.**
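
Query vectors must be generated with the same model and dimension for query and document embeddings to be comparable. Below is a minimal sketch, assuming the official `openai` Python package (v1+) and an `OPENAI_API_KEY` environment variable; the query string is a hypothetical example:

```python
import os
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment
openai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

def get_embedding(text: str) -> list[float]:
    """Embed text with the same model and dimension used for this dataset."""
    response = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=text,
        dimensions=256,  # must match the dataset's embedding dimension
    )
    return response.data[0].embedding

query_vector = get_embedding("semiconductor partnerships in sustainable energy")
```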
|
|
|
## Dataset Structure |
|
Each record in the dataset represents a news article about technology companies and includes the following fields: |
|
|
|
- `_id`: A unique identifier for the news article.

- `companyName`: The name of the company the article is about.

- `companyUrl`: A URL to the HackerNoon company profile page for the company.

- `published_at`: The date and time when the article was published.

- `url`: A URL to the original news article.

- `title`: The title of the news article.

- `main_image`: A URL to the article's main image.

- `description`: A brief summary of the article's content.

- `embedding`: An array of numerical values representing the vector embedding for the article, generated with the OpenAI `text-embedding-3-small` model.
|
|
|
|
|
## Data Ingestion (Partitioned)
|
[Create a free MongoDB Atlas Account](https://www.mongodb.com/cloud/atlas/register?utm_campaign=devrel&utm_source=community&utm_medium=organic_social&utm_content=Hugging%20Face%20Dataset&utm_term=richmond.alake) |
|
|
|
```python
import os
import requests
import pandas as pd
from io import BytesIO
from pymongo import MongoClient

# MongoDB Atlas URI and client setup
uri = os.environ.get('MONGODB_ATLAS_URI')
client = MongoClient(uri)

# Change to the appropriate database and collection names for the tech news embeddings
db_name = 'your_database_name'  # Change this to your actual database name
collection_name = 'tech_news_embeddings'  # Change this to your actual collection name
tech_news_embeddings_collection = client[db_name][collection_name]

hf_token = os.environ.get('HF_TOKEN')
headers = {
    "Authorization": f"Bearer {hf_token}"
}

# Downloads 228,012 data points across six Parquet partitions
parquet_files = [
    "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0000.parquet",
    "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0001.parquet",
    "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0002.parquet",
    "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0003.parquet",
    "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0004.parquet",
    "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0005.parquet",
]

# Download each partition and read it into a DataFrame
all_dataframes = []
for parquet_file_url in parquet_files:
    response = requests.get(parquet_file_url, headers=headers)
    if response.status_code == 200:
        parquet_bytes = BytesIO(response.content)
        df = pd.read_parquet(parquet_bytes)
        all_dataframes.append(df)
    else:
        print(f"Failed to download Parquet file from {parquet_file_url}: {response.status_code}")

# Combine the partitions and ingest them into the database
if all_dataframes:
    combined_df = pd.concat(all_dataframes, ignore_index=True)
    dataset_records = combined_df.to_dict('records')
    tech_news_embeddings_collection.insert_many(dataset_records)
else:
    print("No dataframes to concatenate.")
```
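
To run vector queries against the ingested collection, an Atlas Vector Search index is needed on the `embedding` field. Below is a minimal sketch, assuming pymongo 4.6+ (for `SearchIndexModel` with the `vectorSearch` type) and a hypothetical index name `vector_index`; the same index can also be created through the Atlas UI:

```python
import os
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

client = MongoClient(os.environ.get('MONGODB_ATLAS_URI'))
collection = client['your_database_name']['tech_news_embeddings']

# Define a vector index on the embedding field;
# numDimensions must match the dataset's 256-dimensional embeddings
index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",
                "numDimensions": 256,
                "similarity": "cosine",  # assumption; dotProduct or euclidean also work
            }
        ]
    },
    name="vector_index",  # hypothetical name, referenced later by $vectorSearch queries
    type="vectorSearch",
)

collection.create_search_index(model=index_model)
```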
|
|
|
|
|
## Data Ingestion (All Records) |
|
[Create a free MongoDB Atlas Account](https://www.mongodb.com/cloud/atlas/register?utm_campaign=devrel&utm_source=community&utm_medium=organic_social&utm_content=Hugging%20Face%20Dataset&utm_term=richmond.alake) |
|
|
|
```python
import os
from pymongo import MongoClient
from datasets import load_dataset
from bson import json_util

# MongoDB Atlas URI and client setup
uri = os.environ.get('MONGODB_ATLAS_URI')
client = MongoClient(uri)

# Change to the appropriate database and collection names for the tech news embeddings
db_name = 'your_database_name'  # Change this to your actual database name
collection_name = 'tech_news_embeddings'  # Change this to your actual collection name
tech_news_embeddings_collection = client[db_name][collection_name]

# Load the "tech-news-embeddings" dataset from Hugging Face
dataset = load_dataset("AIatMongoDB/tech-news-embeddings")

insert_data = []

# Iterate through the dataset and prepare the documents for insertion,
# ingesting 1000 records into the database at a time
for item in dataset['train']:
    # Convert the dataset item to MongoDB document format
    doc_item = json_util.loads(json_util.dumps(item))
    insert_data.append(doc_item)

    # Insert in batches of 1000 documents
    if len(insert_data) == 1000:
        tech_news_embeddings_collection.insert_many(insert_data)
        print("1000 records ingested")
        insert_data = []

# Insert any remaining documents
if len(insert_data) > 0:
    tech_news_embeddings_collection.insert_many(insert_data)
    print("Data Ingested")
```
|
|
|
## Usage |
|
The dataset is suited for a range of applications, including: |
|
|
|
- Tracking and analyzing trends in the tech industry. |
|
- Enhancing search and recommendation systems for tech news content using the vector embeddings (see the query sketch after this list).
|
- Conducting sentiment analysis and other natural language processing tasks to gauge public perception and impact of news on specific tech companies. |
|
- Educational purposes in data science, journalism, and technology studies courses. |
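
As an illustration of the search use case above, a `$vectorSearch` aggregation stage retrieves the articles closest to a query embedding. Below is a minimal sketch, assuming the hypothetical `vector_index` from the ingestion section and a 256-dimensional `query_vector` produced as in the Overview:

```python
import os
from pymongo import MongoClient

client = MongoClient(os.environ.get('MONGODB_ATLAS_URI'))
collection = client['your_database_name']['tech_news_embeddings']

# query_vector: a 256-dimensional embedding of the search text,
# generated as in the Overview sketch
pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",  # assumption: the index name chosen at creation
            "path": "embedding",
            "queryVector": query_vector,
            "numCandidates": 100,  # candidates considered before final ranking
            "limit": 5,            # number of results returned
        }
    },
    {
        "$project": {
            "title": 1,
            "description": 1,
            "score": {"$meta": "vectorSearchScore"},
        }
    },
]

for doc in collection.aggregate(pipeline):
    print(doc["title"], doc["score"])
```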
|
|
|
## Notes |
|
|
|
|
|
### Sample Document |
|
```json
{
  "_id": {
    "$oid": "65c63ea1f187c085a866f680"
  },
  "companyName": "01Synergy",
  "companyUrl": "https://hackernoon.com/company/01synergy",
  "published_at": "2023-05-16 02:09:00",
  "url": "https://www.businesswire.com/news/home/20230515005855/en/onsemi-and-Sineng-Electric-Spearhead-the-Development-of-Sustainable-Energy-Applications/",
  "title": "onsemi and Sineng Electric Spearhead the Development of Sustainable Energy Applications",
  "main_image": "https://firebasestorage.googleapis.com/v0/b/hackernoon-app.appspot.com/o/images%2Fimageedit_25_7084755369.gif?alt=media&token=ca7527b0-a214-46d4-af72-1062b3df1458",
  "description": "(Nasdaq: ON) a leader in intelligent power and sensing technologies today announced that Sineng Electric will integrate onsemi EliteSiC silic",
  "embedding": [
    { "$numberDouble": "0.05243798345327377" },
    { "$numberDouble": "-0.10347484797239304" },
    { "$numberDouble": "-0.018149614334106445" }
  ]
}
```