{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 20NG (Twenty Newsgroups). Preprocessing\n", "\n", "Here goes an example of data preprocessing and converting it to TopicNet's Dataset format.\n", "\n", "* Example of a toy dataset: [test_dataset.csv](https://github.com/machine-intelligence-laboratory/TopicNet/blob/master/topicnet/tests/test_data/test_dataset.csv)\n", "* Dataset source file (with some explanations in docstring): [dataset.py](https://github.com/machine-intelligence-laboratory/TopicNet/blob/master/topicnet/cooking_machine/dataset.py)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Contents\n", "\n", "* [Loading data](#data-loading)\n", "* [Preparing data](#data-preparation)" ] }, { "cell_type": "code", "execution_count": 45, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import re\n", "import shutil\n", "import string\n", "\n", "from collections import Counter\n", "from glob import glob\n", "\n", "from sklearn import datasets\n", "from sklearn.datasets import fetch_20newsgroups" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import nltk\n", "\n", "from nltk.collocations import (\n", " BigramAssocMeasures,\n", " BigramCollocationFinder,\n", ")\n", "from nltk.corpus import (\n", " stopwords,\n", " wordnet,\n", ")\n", "from nltk.stem import WordNetLemmatizer" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "\n", "from matplotlib import cm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Loading data\n", "\n", "
<div id=\"data-loading\"></div>\n", "\n", "<a href=\"#Contents\">Back to Contents</a>
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's download the dataset:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Downloading 20news dataset. This may take a few minutes.\n", "Downloading dataset from https://ndownloader.figshare.com/files/5975967 (14 MB)\n" ] } ], "source": [ "train_20 = fetch_20newsgroups(\n", " subset='train',\n", " remove=('headers', 'footers', 'quotes'),\n", ")\n", "test_20 = fetch_20newsgroups(\n", " subset='test',\n", " remove=('headers', 'footers', 'quotes'),\n", ")" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "11314 data\n", "11314 filenames\n", "11314 target\n" ] } ], "source": [ "train_20.pop('DESCR')\n", "labels = train_20.pop('target_names')\n", "\n", "for k in train_20.keys():\n", " print(len(train_20[k]), k)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "7532 data\n", "7532 filenames\n", "7532 target\n" ] } ], "source": [ "test_20.pop('DESCR')\n", "labels_test = test_20.pop('target_names')\n", "\n", "for k in test_20.keys():\n", " print(len(test_20[k]), k)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preparing data (lemmatization, Vowpal Wabbit & TopicNet's format)\n", "\n", "
<div id=\"data-preparation\"></div>\n", "\n", "<a href=\"#Contents\">Back to Contents</a>
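" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The preprocessing below uses NLTK's stop word list, WordNet lemmatizer and part-of-speech tagger. These rely on NLTK data packages which, if not installed yet, can be fetched roughly as follows (a minimal sketch; the exact package names may vary between NLTK versions):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Fetch the NLTK resources used below (each call is a no-op if the data is already installed)\n", "nltk.download('stopwords')  # stop word lists for nltk.corpus.stopwords\n", "nltk.download('wordnet')  # WordNet data for the WordNetLemmatizer\n", "nltk.download('averaged_perceptron_tagger')  # tagger model behind nltk.pos_tag" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each document will be turned into a Vowpal Wabbit-like line of the form `doc_id |@lemmatized word:count ... |@bigram word_pair:count ...`; this is what goes into the `vw_text` column required by TopicNet's Dataset.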
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Wrapping all in .csv files:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "train_pd = pd.DataFrame(train_20).rename(columns = {'data':'raw_text'},)\n", "# train_pd['raw_text'] = train_pd['raw_text'].apply(lambda x: x.decode('windows-1252'))\n", "train_pd['id'] = train_pd.filenames.apply( lambda x: '.'.join(x.split('/')[-2:]).replace('.','_'))\n", "\n", "test_pd = pd.DataFrame(test_20).rename(columns = {'data':'raw_text'})\n", "# test_pd['raw_text'] = test_pd['raw_text'].apply(lambda x: x.decode('windows-1252'))\n", "test_pd['id'] = test_pd.filenames.apply( lambda x: '.'.join(x.split('/')[-2:]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Better to exclude these documents (one may look here [20-newsgroups-secrets](https://github.com/Alvant/20-newsgroups-secrets) for more details)." ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "bad_names = [9976, 9977, 9978, 9979, 9980, 9981, 9982, 9983, 9984, 9985, 9986, 9987, 9988, 9990]\n", "bad_names = [f\"comp_os_ms-windows_misc_{i}\" for i in bad_names]\n", "\n", "bad_indices = train_pd.query(\"id in @bad_names\").index" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below we define some functions for text preprocessing." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "def nltk2wn_tag(nltk_tag):\n", " if nltk_tag.startswith('J'):\n", " return wordnet.ADJ\n", " elif nltk_tag.startswith('V'):\n", " return wordnet.VERB\n", " elif nltk_tag.startswith('N'):\n", " return wordnet.NOUN\n", " elif nltk_tag.startswith('R'):\n", " return wordnet.ADV\n", " else: \n", " return ''" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pattern = re.compile('\\S*@\\S*\\s?')" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "def vowpalize_sequence(sequence):\n", " word_2_frequency = Counter(sequence)\n", " \n", " del word_2_frequency['']\n", " \n", " vw_string = ''\n", " \n", " for word in word_2_frequency:\n", " vw_string += word + \":\" + str(word_2_frequency[word]) + ' '\n", " \n", " return vw_string\n", "\n", "def do_vw_for_me_please(dataframe):\n", " bad_entries = []\n", " tokenized_text = []\n", " \n", " for indx, text in enumerate(dataframe['raw_text'].values):\n", " try:\n", " text = str(pattern.sub('', text))\n", " except TypeError:\n", " text=''\n", " \n", " tokens = [tok for tok in nltk.wordpunct_tokenize(text.lower()) if len(tok) > 1]\n", " tokenized_text.append(nltk.pos_tag(tokens))\n", " \n", " dataframe['tokenized'] = tokenized_text\n", "\n", " stop = set(stopwords.words('english'))\n", "\n", " lemmatized_text = []\n", " wnl = WordNetLemmatizer()\n", " \n", " for text in dataframe['tokenized'].values:\n", " lemmatized = [wnl.lemmatize(word, nltk2wn_tag(pos))\n", " if nltk2wn_tag(pos) != ''\n", " else wnl.lemmatize(word)\n", " for word, pos in text ]\n", " lemmatized = [word for word in lemmatized \n", " if word not in stop and word.isalpha()]\n", " lemmatized_text.append(lemmatized)\n", " \n", " dataframe['lemmatized'] = lemmatized_text\n", "\n", " bigram_measures = BigramAssocMeasures()\n", " finder = BigramCollocationFinder.from_documents(dataframe['lemmatized'])\n", " finder.apply_freq_filter(5)\n", " set_dict = set(finder.nbest(bigram_measures.pmi,32100)[100:])\n", " documents = dataframe['lemmatized']\n", " bigrams = []\n", 
"\n", " for doc in documents:\n", " entry = ['_'.join([word_first, word_second])\n", " for word_first, word_second in zip(doc[:-1],doc[1:])\n", " if (word_first, word_second) in set_dict]\n", " bigrams.append(entry)\n", "\n", " dataframe['bigram'] = bigrams\n", " \n", " vw_text = []\n", "\n", " for index, data in dataframe.iterrows():\n", " vw_string = '' \n", " doc_id = data.id\n", " lemmatized = '@lemmatized ' + vowpalize_sequence(data.lemmatized)\n", " bigram = '@bigram ' + vowpalize_sequence(data.bigram)\n", " vw_string = ' |'.join([doc_id, lemmatized, bigram])\n", " vw_text.append(vw_string)\n", "\n", " dataframe['vw_text'] = vw_text\n", "\n", " print('num bad entries ', len(bad_entries))\n", " print(bad_entries)\n", "\n", " return dataframe" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here are the final datasets!\n", "Each row represents a document.\n", "Columns `id`, `raw_text` and `vw_text` are required (look at this [toy dataset](https://github.com/machine-intelligence-laboratory/TopicNet/blob/master/topicnet/tests/test_data/test_dataset.csv), for example)." ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "num bad entries 0\n", "[]\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
raw_textfilenamestargetidtokenizedlemmatizedbigramvw_text
0I was wondering if anyone out there could enli.../home/bulatov/scikit_learn_data/20news_home/20...7rec_autos_102994[(was, VBD), (wondering, VBG), (if, IN), (anyo...[wonder, anyone, could, enlighten, car, saw, d...[wonder_anyone, anyone_could, sport_car, car_l...rec_autos_102994 |@lemmatized wonder:1 anyone:...
1A fair number of brave souls who upgraded thei.../home/bulatov/scikit_learn_data/20news_home/20...4comp_sys_mac_hardware_51861[(fair, JJ), (number, NN), (of, IN), (brave, J...[fair, number, brave, soul, upgrade, si, clock...[clock_oscillator, please_send, top_speed, hea...comp_sys_mac_hardware_51861 |@lemmatized fair:...
2well folks, my mac plus finally gave up the gh.../home/bulatov/scikit_learn_data/20news_home/20...4comp_sys_mac_hardware_51879[(well, RB), (folks, NNS), (my, PRP$), (mac, J...[well, folk, mac, plus, finally, give, ghost, ...[mac_plus, life_way, way_back, market_new, new...comp_sys_mac_hardware_51879 |@lemmatized well:...
3\\nDo you have Weitek's address/phone number? .../home/bulatov/scikit_learn_data/20news_home/20...1comp_graphics_38242[(do, VBP), (you, PRP), (have, VB), (weitek, V...[weitek, address, phone, number, like, get, in...[address_phone, phone_number, number_like, lik...comp_graphics_38242 |@lemmatized weitek:1 addr...
4From article <C5owCB.n3p@world.std.com>, by to.../home/bulatov/scikit_learn_data/20news_home/20...14sci_space_60880[(from, IN), (article, NN), (by, IN), (tom, NN...[article, tom, baker, understanding, expected,...[system_software, thing_check, introduce_new, ...sci_space_60880 |@lemmatized article:1 tom:1 b...
\n", "
" ], "text/plain": [ " raw_text \\\n", "0 I was wondering if anyone out there could enli... \n", "1 A fair number of brave souls who upgraded thei... \n", "2 well folks, my mac plus finally gave up the gh... \n", "3 \\nDo you have Weitek's address/phone number? ... \n", "4 From article , by to... \n", "\n", " filenames target \\\n", "0 /home/bulatov/scikit_learn_data/20news_home/20... 7 \n", "1 /home/bulatov/scikit_learn_data/20news_home/20... 4 \n", "2 /home/bulatov/scikit_learn_data/20news_home/20... 4 \n", "3 /home/bulatov/scikit_learn_data/20news_home/20... 1 \n", "4 /home/bulatov/scikit_learn_data/20news_home/20... 14 \n", "\n", " id \\\n", "0 rec_autos_102994 \n", "1 comp_sys_mac_hardware_51861 \n", "2 comp_sys_mac_hardware_51879 \n", "3 comp_graphics_38242 \n", "4 sci_space_60880 \n", "\n", " tokenized \\\n", "0 [(was, VBD), (wondering, VBG), (if, IN), (anyo... \n", "1 [(fair, JJ), (number, NN), (of, IN), (brave, J... \n", "2 [(well, RB), (folks, NNS), (my, PRP$), (mac, J... \n", "3 [(do, VBP), (you, PRP), (have, VB), (weitek, V... \n", "4 [(from, IN), (article, NN), (by, IN), (tom, NN... \n", "\n", " lemmatized \\\n", "0 [wonder, anyone, could, enlighten, car, saw, d... \n", "1 [fair, number, brave, soul, upgrade, si, clock... \n", "2 [well, folk, mac, plus, finally, give, ghost, ... \n", "3 [weitek, address, phone, number, like, get, in... \n", "4 [article, tom, baker, understanding, expected,... \n", "\n", " bigram \\\n", "0 [wonder_anyone, anyone_could, sport_car, car_l... \n", "1 [clock_oscillator, please_send, top_speed, hea... \n", "2 [mac_plus, life_way, way_back, market_new, new... \n", "3 [address_phone, phone_number, number_like, lik... \n", "4 [system_software, thing_check, introduce_new, ... \n", "\n", " vw_text \n", "0 rec_autos_102994 |@lemmatized wonder:1 anyone:... \n", "1 comp_sys_mac_hardware_51861 |@lemmatized fair:... \n", "2 comp_sys_mac_hardware_51879 |@lemmatized well:... \n", "3 comp_graphics_38242 |@lemmatized weitek:1 addr... \n", "4 sci_space_60880 |@lemmatized article:1 tom:1 b... " ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "num bad entries 0\n", "[]\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
raw_textfilenamestargetidtokenizedlemmatizedbigramvw_text
0I am a little confused on all of the models of.../home/bulatov/scikit_learn_data/20news_home/20...7rec.autos.103343[(am, VBP), (little, JJ), (confused, VBN), (on...[little, confuse, model, bonnevilles, hear, le...[could_someone, someone_tell, tell_difference,...rec.autos.103343 |@lemmatized little:1 confuse...
1I'm not familiar at all with the format of the.../home/bulatov/scikit_learn_data/20news_home/20...5comp.windows.x.67445[(not, RB), (familiar, JJ), (at, IN), (all, DT...[familiar, format, face, thingies, see, folk, ...[get_see, make_one, one_get, seem_find, could_...comp.windows.x.67445 |@lemmatized familiar:1 f...
2\\nIn a word, yes.\\n/home/bulatov/scikit_learn_data/20news_home/20...0alt.atheism.53603[(in, IN), (word, NN), (yes, NN)][word, yes][]alt.atheism.53603 |@lemmatized word:1 yes:1 |...
3\\nThey were attacking the Iraqis to drive them.../home/bulatov/scikit_learn_data/20news_home/20...17talk.politics.mideast.77355[(they, PRP), (were, VBD), (attacking, VBG), (...[attack, iraqi, drive, kuwait, country, whose,...[think_u, saudi_arabia, much_anything, saudi_a...talk.politics.mideast.77355 |@lemmatized attac...
4\\nI've just spent two solid months arguing tha.../home/bulatov/scikit_learn_data/20news_home/20...19talk.religion.misc.84194[(ve, NN), (just, RB), (spent, VBN), (two, CD)...[spend, two, solid, month, argue, thing, objec...[moral_system]talk.religion.misc.84194 |@lemmatized spend:1 ...
\n", "
" ], "text/plain": [ " raw_text \\\n", "0 I am a little confused on all of the models of... \n", "1 I'm not familiar at all with the format of the... \n", "2 \\nIn a word, yes.\\n \n", "3 \\nThey were attacking the Iraqis to drive them... \n", "4 \\nI've just spent two solid months arguing tha... \n", "\n", " filenames target \\\n", "0 /home/bulatov/scikit_learn_data/20news_home/20... 7 \n", "1 /home/bulatov/scikit_learn_data/20news_home/20... 5 \n", "2 /home/bulatov/scikit_learn_data/20news_home/20... 0 \n", "3 /home/bulatov/scikit_learn_data/20news_home/20... 17 \n", "4 /home/bulatov/scikit_learn_data/20news_home/20... 19 \n", "\n", " id \\\n", "0 rec.autos.103343 \n", "1 comp.windows.x.67445 \n", "2 alt.atheism.53603 \n", "3 talk.politics.mideast.77355 \n", "4 talk.religion.misc.84194 \n", "\n", " tokenized \\\n", "0 [(am, VBP), (little, JJ), (confused, VBN), (on... \n", "1 [(not, RB), (familiar, JJ), (at, IN), (all, DT... \n", "2 [(in, IN), (word, NN), (yes, NN)] \n", "3 [(they, PRP), (were, VBD), (attacking, VBG), (... \n", "4 [(ve, NN), (just, RB), (spent, VBN), (two, CD)... \n", "\n", " lemmatized \\\n", "0 [little, confuse, model, bonnevilles, hear, le... \n", "1 [familiar, format, face, thingies, see, folk, ... \n", "2 [word, yes] \n", "3 [attack, iraqi, drive, kuwait, country, whose,... \n", "4 [spend, two, solid, month, argue, thing, objec... \n", "\n", " bigram \\\n", "0 [could_someone, someone_tell, tell_difference,... \n", "1 [get_see, make_one, one_get, seem_find, could_... \n", "2 [] \n", "3 [think_u, saudi_arabia, much_anything, saudi_a... \n", "4 [moral_system] \n", "\n", " vw_text \n", "0 rec.autos.103343 |@lemmatized little:1 confuse... \n", "1 comp.windows.x.67445 |@lemmatized familiar:1 f... \n", "2 alt.atheism.53603 |@lemmatized word:1 yes:1 |... \n", "3 talk.politics.mideast.77355 |@lemmatized attac... \n", "4 talk.religion.misc.84194 |@lemmatized spend:1 ... " ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "train_pd = do_vw_for_me_please(train_pd)\n", "display(train_pd.head())\n", "\n", "test_pd = do_vw_for_me_please(test_pd)\n", "display(test_pd.head())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Saving to disk (TopicNet's [Dataset](https://github.com/machine-intelligence-laboratory/TopicNet/blob/master/topicnet/cooking_machine/dataset.py) can be constructed using saved .csv file with text data)." ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [], "source": [ "! mkdir 20_News_dataset\n", "\n", "train_pd.drop(bad_indices).to_csv('/data/datasets/20_News_dataset/train_preprocessed.csv')\n", "test_pd.to_csv('/data/datasets/20_News_dataset/test_preprocessed.csv')" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.9" } }, "nbformat": 4, "nbformat_minor": 2 }