500 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Widgets Demo
IPython widgets allow you to quickly and easily create interactive APIs with Python.
To run this notebook, you'll first have to install ipywidgets using, e.g.
$ conda install ipywidgets
You can find a full set of documentation notebooks here.
Troubleshooting
If the widgets below do not show up in the notebook, try closing the notebook and running the following command in your shell
Step1: Specifying Ranges
Using a tuple for the input, we can specify a range for our data
Step2: Interact with Plotting
This can become very powerful when interacting with a plotting command | Python Code:
from ipywidgets import interact
def times_ten(x):
return 10 * x
interact(times_ten, x=10);
interact(times_ten, x='(^_^)')
interact(times_ten, x=True)
Explanation: Widgets Demo
IPython widgets allow you to quickly and easily create interactive APIs with Python.
To run this notebook, you'll first have to install ipywidgets using, e.g.
$ conda install ipywidgets
You can find a full set of documentation notebooks here.
Troubleshooting
If the widgets below do not show up in the notebook, try closing the notebook and running the following command in your shell:
$ jupyter nbextension enable --py widgetsnbextension
Then open the notebook again. This enables the widgets notebook extension in the case that it's disabled.
interact: simple interactive widgets
The main idea of ipywidgets is to allow you to transform simple Python functions into interactive widgets.
For example
End of explanation
interact(times_ten, x=(100, 200))
Explanation: Specifying Ranges
Using a tuple for the input, we can specify a range for our data:
End of explanation
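A small aside (my addition, not part of the original demo): ipywidgets also accepts a three-element tuple, interpreted as (min, max, step), and float values produce a float slider instead of an integer one.
interact(times_ten, x=(0.0, 10.0, 0.5));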
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 10, 1000)
def plot_sine(amplitude, frequency, phase):
y = amplitude * np.sin(frequency * x - phase)
plt.plot(x, y)
plt.ylim(-6, 6)
interact(plot_sine,
amplitude=(0.0, 5.0),
frequency=(0.1, 10),
phase=(-5.0, 5.0));
Explanation: Interact with Plotting
This can become very powerful when interacting with a plotting command:
End of explanation |
501 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 4</font>
Download
Step1: ** ATTENTION **
If you have problems with accents in the files
Step2: Using the with statement
The close() method is executed automatically
Step3: Handling CSV Files (comma-separated values)
Step4: Handling JSON Files (JavaScript Object Notation)
JSON (JavaScript Object Notation) is a way to store information in an organized and easily accessible form. In short, it gives us a readable collection of data that can be accessed in a very logical way. It can be a source of Big Data. | Python Code:
# Python language version
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 4</font>
Download: http://github.com/dsacademybr
End of explanation
texto = "Cientista de Dados é a profissão que mais tem crescido em todo mundo.\n"
texto = texto + "Esses profissionais precisam se especializar em Programação, Estatística e Machine Learning.\n"
texto += "E claro, em Big Data."
print(texto)
# Importing the os module
import os
# Creating a file
arquivo = open(os.path.join('arquivos/cientista.txt'),'w')
# Writing the data to the file
for palavra in texto.split():
arquivo.write(palavra+' ')
# Closing the file
arquivo.close()
# Reading the file
arquivo = open('arquivos/cientista.txt','r')
conteudo = arquivo.read()
arquivo.close()
print(conteudo)
Explanation: ** ATTENTION **
If you have problems with accents in the files:
First, we recommend reading the material on the Unicode format at the end of chapter 4.
One way to solve this problem is to open the file in a text editor such as Sublime Text, click File - Save with Encoding, and then save it with UTF-8 encoding.
Another option is to include the parameter encoding='utf8' when opening the file for reading or writing.
File Handling
TXT Files
CSV Files
JSON Files
Handling TXT Files
End of explanation
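As a minimal illustration of the note above (my addition, not part of the original course material), the same file can be opened with an explicit encoding to avoid accent problems:
# Reading the file with an explicit UTF-8 encoding
arquivo = open('arquivos/cientista.txt', 'r', encoding='utf8')
print(arquivo.read())
arquivo.close()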
with open('arquivos/cientista.txt','r') as arquivo:
conteudo = arquivo.read()
print(len(conteudo))
print(conteudo)
with open('arquivos/cientista.txt','w') as arquivo:
arquivo.write(texto[:21])
arquivo.write('\n')
arquivo.write(texto[:33])
# Reading the file
arquivo = open('arquivos/cientista.txt','r')
conteudo = arquivo.read()
arquivo.close()
print (conteudo)
Explanation: Using the with statement
The close() method is executed automatically
End of explanation
# Importing the csv module
import csv
with open('arquivos/numeros.csv','w') as arquivo:
writer = csv.writer(arquivo)
writer.writerow(('primeira','segunda','terceira'))
writer.writerow((55,93,76))
writer.writerow((62,14,86))
# Reading csv files
with open('arquivos/numeros.csv','r') as arquivo:
leitor = csv.reader(arquivo)
for x in leitor:
print ('Número de colunas:', len(x))
print(x)
# Alternative code for possible problems with blank lines in the file
with open('arquivos/numeros.csv','r', encoding='utf8', newline = '\r\n') as arquivo:
leitor = csv.reader(arquivo)
for x in leitor:
print ('Número de colunas:', len(x))
print(x)
# Building a list with the data from the csv file
with open('arquivos/numeros.csv','r') as arquivo:
leitor = csv.reader(arquivo)
dados = list(leitor)
print (dados)
# Printing from the second line on
for linha in dados[1:]:
print (linha)
Explanation: Handling CSV Files (comma-separated values)
End of explanation
# Creating a dictionary
dict = {'nome': 'Guido van Rossum',
'linguagem': 'Python',
'similar': ['c','Modula-3','lisp'],
'users': 1000000}
for k,v in dict.items():
print (k,v)
# Importing the json module
import json
# Converting the dictionary to a json object
json.dumps(dict)
# Creating a json file
with open('arquivos/dados.json','w') as arquivo:
arquivo.write(json.dumps(dict))
# Reading json files
with open('arquivos/dados.json','r') as arquivo:
texto = arquivo.read()
data = json.loads(texto)
print (data)
print (data['nome'])
# Printing a json file fetched from the internet
from urllib.request import urlopen
response = urlopen("http://vimeo.com/api/v2/video/57733101.json").read().decode('utf8')
data = json.loads(response)[0]
print ('Título: ', data['title'])
print ('URL: ', data['url'])
print ('Duração: ', data['duration'])
print ('Número de Visualizações: ', data['stats_number_of_plays'])
# Copying the contents of one file to another
import os
arquivo_fonte = 'arquivos/dados.json'
arquivo_destino = 'arquivos/json_data.txt'
# Method 1
with open(arquivo_fonte,'r') as infile:
text = infile.read()
with open(arquivo_destino,'w') as outfile:
outfile.write(text)
# Method 2
open(arquivo_destino,'w').write(open(arquivo_fonte,'r').read())
# Reading json files
with open('arquivos/json_data.txt','r') as arquivo:
texto = arquivo.read()
data = json.loads(texto)
print(data)
Explanation: Handling JSON Files (JavaScript Object Notation)
JSON (JavaScript Object Notation) is a way to store information in an organized and easily accessible form. In short, it gives us a readable collection of data that can be accessed in a very logical way. It can be a source of Big Data.
End of explanation |
502 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
general
Step1: fw_ids for user-submitted workflows
Step2: pause controller, defuse/fizzle workflows with >20 nodes
Step3: prioritized user-submitted "Add to SNL" tasks to get duplicate checking done
Step4: percentage of workflows in each state
Step5: list of first fizzled fw_id in each workflow
Step6: list of incomplete fireworks in RUNNING workflows for fworker query
Step7: list of first fireworks for fizzled workflows
Step8: list of uniform tasks/fw_ids for projections in BoltzTraP builder (VASP DB insertion reruns)
Step9: launch directories
for XSEDE
Step10: analyze log output of fizzled workflows
scan for error messages
Step11: categorize errors
Step12: debugging
Step13: Kitchaev submissions
Step14: fw_ids for list of mp-ids to fix DOS offset
Step15: Projections for BoltzTraP builder
Step16: SNL and Task Collections for atomate transition | Python Code:
user_remarks = [
"new ICSD batch", "Pauling file", "Heusler ABC2 phases",
"proton conducting materials for fuel cells", "solid solution metal", "solid solution oxide", "intermetallic",
"CNGMD Nitrides", "MAGICS calculation of band structures of 2D TMDC stacked heterostructures"
]
Explanation: general: fizzled workflows and the corresponding list of fireworks
End of explanation
#user_query = {"spec.task_type": "Add to SNL database", "spec.snl.about.remarks": "MP user submission"}
user_query = {"spec.task_type": "Add to SNL database", "spec.snl.about.remarks": user_remarks[-3]}
fw_ids_user = lpad.fireworks.find(user_query, {'fw_id': 1, '_id': 0}).distinct('fw_id')
print(len(fw_ids_user), 'user-submitted workflows')
Explanation: fw_ids for user-submitted workflows
End of explanation
counter = Counter()
for root_fw_id in fw_ids_user:
# print(root_fw_id)
wflows = list(lpad.workflows.find({'nodes': root_fw_id}, ['nodes', 'state', 'links']))
if len(wflows) > 1:
print('\tmultiple workflows for', root_fw_id)
continue
wf = wflows[0]
fws = {}
for fw in lpad.fireworks.find(
{'fw_id': {'$in': wf['nodes']}}, {'fw_id': 1, 'spec.task_type': 1, 'state': 1, '_id': 0}
):
fw_id = fw.pop('fw_id')
fws[fw_id] = fw
for fw_id, fw in fws.items():
# pause controller tasks (problems with band structure calcs)
# if fw['spec']['task_type'] == 'Controller: add Electronic Structure v2' and \
# fw['state'] in ['WAITING', 'READY']:
# lpad.pause_fw(fw_id)
# fws[fw_id]['state'] = 'PAUSED'
# print('\tpaused', fw_id)
# defuse workflows with more than 20 tasks (endless SO?)
if wf['state'] != 'COMPLETED' and len(wf['nodes']) > 20 and \
fw['state'] not in ['COMPLETED', 'DEFUSED', 'PAUSED']:
try:
lpad.defuse_fw(fw_id)
fws[fw_id]['state'] = 'DEFUSED'
print('\tdefused', fw_id)
except Exception as ex:
print('\tdefusing', fw_id, 'failed:', str(ex))
lpad.fireworks.update_one({'fw_id': fw_id}, {"$set":{"state":"FIZZLED"}})
print('\t', fw_id, 'set to FIZZLED')
if fws[root_fw_id]['state'] == 'COMPLETED':
current_fw_id = root_fw_id
while 1:
daughters = wf['links'][str(current_fw_id)]
if not daughters:
raise ValueError('why did I get here?')
if len(daughters) == 1:
#print('daughter:', current_fw_id, daughters[0], fws[daughters[0]]['spec']['task_type'])
if fws[daughters[0]]['spec']['task_type'] == 'Controller: add Electronic Structure v2':
counter[fws[current_fw_id]['state']] += 1
break
else:
current_fw_id = daughters[0]
else:
so_task_found = False
for daughter in daughters:
if fws[daughter]['spec']['task_type'] == 'GGA optimize structure (2x)':
current_fw_id = daughter
so_task_found = True
break
if not so_task_found:
raise ValueError('SO task not found!')
else:
counter[fws[root_fw_id]['state']] += 1
print(counter)
print('total =', sum(counter.values()))
vw_fws = {}
for state in states:
vw_fws[state] = list(lpad.fireworks.find({
"state": state, "$or": [
{"spec.snl.about.remarks": "solid solution metal"},
{"spec.mpsnl.about.remarks": "solid solution metal"}
]
}, ['spec.task_type', 'fw_id']))
if vw_fws[state]:
print(state, len(vw_fws[state]))
if state in ['RUNNING', 'READY', 'RESERVED']:
print(Counter(fw['spec']['task_type'] for fw in vw_fws[state]))
Explanation: pause controller, defuse/fizzle workflows with >20 nodes
End of explanation
priority_user_query = {
"$and": [
{"$or": [
{"spec.snl.about.remarks": {"$in": ["MP user submission"], "$nin": user_remarks}},
{"spec.mpsnl.about.remarks": {"$in": ["MP user submission"], "$nin": user_remarks}},
]}, {"$or": [
{"spec.prev_vasp_dir": {"$exists": 0}},
{"spec.prev_vasp_dir": {"$regex": "/oasis/"}},
]}
]
}
priority_user_fws = {}
for state in states:
if state == 'READY':
state_query = {'state': state}
state_query.update(priority_user_query)
priority_user_fws[state] = list(lpad.fireworks.find(state_query, {
"fw_id": 1, "spec.task_type": 1, "spec.prev_vasp_dir": 1, "_id": 0}))
nr_fws = len(priority_user_fws[state])
if nr_fws > 0:
add_to_snl = []
for d in priority_user_fws[state]:
if d['spec']['task_type'] == 'Add to SNL database':
add_to_snl.append(d['fw_id'])
print(' '.join(map(str, add_to_snl)))
print('{} {} user-submitted XSEDE tasks'.format(nr_fws, state))
print('DONE')
Explanation: prioritized user-submitted "Add to SNL" tasks to get duplicate checking done
End of explanation
# 118151 = {Ti,Zr,Hf}-Zn-N piezoelectricity study -> ALL COMPLETED 2017-01-24
# 114781 = Kitchaev Workflows
# 115780 = Heusler ABC2 phases
# 89070 = Silvana Botti Perovskite Structures
submission_group_id = 89070
query = {'nodes': {'$in': fw_ids_user}}
# if user_query["spec.snl.about.remarks"] == "MP user submission":
# print('FYI: only looking at workflows with submission_group_id', submission_group_id)
# query.update({'metadata.submission_group_id': submission_group_id})
wflows = {}
total_wflows = float(lpad.workflows.find(query).count())
wflows_projection = {'fw_states': 1, 'parent_links': 1, 'links': 1, 'nodes': 1, '_id': 0, 'state': 1}
for state in states:
state_query = {'state': state}
state_query.update(query)
wflows[state] = list(lpad.workflows.find(state_query, wflows_projection))
nr_wflows = len(wflows[state])
if nr_wflows > 0:
if state == 'FIZZLED':
print([wf['nodes'][0] for wf in wflows[state]])
wflows_fraction = nr_wflows / total_wflows
print('{} {} workflows ({:.1f}%)'.format(nr_wflows, state, wflows_fraction*100.))
print(int(total_wflows), 'workflows in total')
Explanation: percentage of workflows in each state
End of explanation
def find_root_node(wflow):
# wflow['nodes'][0] is not necessarily the root node!
parent_links_keys = wflow['parent_links'].keys()
for node in wflow['nodes']:
if str(node) in parent_links_keys:
continue
return node
state = 'FIZZLED' # workflow state
rerun_fws = []
fw_ids_state = {}
for wflow in wflows[state]:
root_fw_id = find_root_node(wflow)
# decend links until fizzled firework found
fw_id = root_fw_id
check_states = [state] if state != 'RUNNING' else ['READY', 'RESERVED']
while 1:
current_state = wflow['fw_states'][str(fw_id)]
if current_state == 'RUNNING':
print(fw_id, 'is RUNNING -> probably need to do `lpad rerun_fws -i {}`'.format(fw_id))
break
if current_state in check_states:
task_type = lpad.fireworks.find_one({'fw_id': fw_id}, {'spec.task_type': 1})['spec']['task_type']
if task_type not in fw_ids_state:
fw_ids_state[task_type] = [int(fw_id)]
else:
fw_ids_state[task_type].append(int(fw_id))
alt_state = lpad.fireworks.find_one({'fw_id': fw_id}, {'state': 1, '_id': 0})['state']
if alt_state == 'RESERVED':
rerun_fws.append(str(fw_id))
break
# if multiple children use non-waiting fw
children = wflow['links'][str(fw_id)]
for child in children:
if wflow['fw_states'][str(child)] != 'WAITING':
fw_id = child
if rerun_fws:
print('lpad rerun_fws -i', ' '.join(rerun_fws))
for k,v in fw_ids_state.items():
#if 'GGA' not in k: continue
print(k, v)
# for fw_id in v:
# launches = lpad.launches.find({'fw_id': fw_id}, {'launch_dir': 1})
# for launch in launches:
# if not 'oasis' in launch['launch_dir']:
# print ('\t', fw_id, launch['launch_dir'])
Explanation: list of first fizzled fw_id in each workflow
End of explanation
fw_ids_incomplete = {}
for wflow in wflows['RUNNING']:
for fw_id, fw_state in wflow['fw_states'].items():
if fw_state != 'COMPLETED':
if fw_state not in fw_ids_incomplete:
fw_ids_incomplete[fw_state] = [int(fw_id)]
else:
fw_ids_incomplete[fw_state].append(int(fw_id))
print(fw_ids_incomplete)
nodes = []
for d in lpad.workflows.find({'nodes': {'$in':[1370872,1566138,1566120,1566104,1566099,1567504,1567491,1563287,1652717]}}, {'_id': 0, 'nodes': 1}):
nodes += d['nodes']
print(nodes)
Explanation: list of incomplete fireworks in RUNNING workflows for fworker query
End of explanation
query = {'fw_id': {'$in': [fw_id for fw_ids in fw_ids_state.values() for fw_id in fw_ids]}} # flatten the per-task-type lists of fw_ids
projection = {'fw_id': 1, 'launches': 1, '_id': 0}
fws = list(lpad.fireworks.find(query, projection))
assert(len(fws) == len(wflows[state]))
Explanation: list of first fireworks for fizzled workflows
End of explanation
with open('task_fw_ids_wBS.json', 'r') as f:
task_fw_ids_wBS = json.loads(f.read())
print(len(task_fw_ids_wBS), 'tasks already checked for projections')
vasp_fw_ids = []
for fw_id in task_fw_ids_wBS.itervalues():
wf = lpad.workflows.find_one({'nodes': fw_id}, {'_id': 0, 'links': 1})
for daughter in wf['links'][str(fw_id)]:
fw = lpad.fireworks.find_one(
{'fw_id': daughter, 'spec.task_type': 'VASP db insertion'}, {'fw_id': 1, '_id': 0}
)
if fw:
vasp_fw_ids.append(fw['fw_id'])
break
len(vasp_fw_ids)
lpad.fireworks.update_many(
{'fw_id': {'$in': vasp_fw_ids}},
{'$unset' : {'spec._tasks.0.update_duplicates' : 1}}
).raw_result
print(
lpad.fireworks.find({'state': 'READY', 'spec.task_type': 'VASP db insertion'}).count(),
'VASP db insertion tasks ready to run'
)
with open('task_fw_ids_woBS.json', 'r') as f:
task_fw_ids_woBS = json.loads(f.read())
print(len(task_fw_ids_woBS), 'tasks without BS')
fws = lpad.fireworks.find(
{'fw_id': {'$in': task_fw_ids_woBS.values()}, 'state': 'COMPLETED'},
{'launches': 1, 'fw_id': 1, '_id': 0}
)
print('{}/{} fireworks found'.format(fws.count(), len(task_fw_ids_woBS)))
Explanation: list of uniform tasks/fw_ids for projections in BoltzTraP builder (VASP DB insertion reruns)
End of explanation
fws_info = {}
no_launches_found = []
for fw in fws:
if not fw['launches']:
no_launches_found.append(fw['fw_id'])
continue
launch_id = fw['launches'][-1]
launch = lpad.launches.find_one({'launch_id': launch_id}, {'launch_dir': 1, '_id': 0})
launch_dir = launch['launch_dir']
launch_dir_exists = False
for fw_id, fw_info in fws_info.items():
if launch_dir == fw_info['launch_dir']:
launch_dir_exists = True
break
if launch_dir_exists:
if 'duplicates' in fws_info[fw_id]:
fws_info[fw_id]['duplicates'].append(fw['fw_id'])
else:
fws_info[fw_id]['duplicates'] = [fw['fw_id']]
continue
fws_info[fw['fw_id']] = {'launch_dir': launch_dir.strip()}
if len(no_launches_found) > 0:
print('launches not found for', len(no_launches_found), 'fireworks')
nr_duplicates = 0
for fw_id, fw_info in fws_info.iteritems():
if 'duplicates' in fw_info:
nr_duplicates += len(fw_info['duplicates'])
print(nr_duplicates, '/', len(fws), 'workflows have duplicate launch_dirs =>',
len(fws)-nr_duplicates, 'unique launch_dirs')
def get_dest_blocks(s):
a = s.strip().split('/block_')
if len(a) == 2:
return [a[0], 'block_'+a[1]]
a = s.strip().split('/launcher_')
return [a[0], 'launcher_'+a[1]]
def parse_launchdirs():
for fw_id, fw_info in fws_info.iteritems():
launch_dir = fw_info['launch_dir']
if not os.path.exists(launch_dir):
dest, block = get_dest_blocks(launch_dir)
launch_dir = os.path.join(GARDEN, block)
fw_info['launch_dir'] = launch_dir if os.path.exists(launch_dir) else None
# 'compgen -G "$i/*.out" >> ~/launchdirs_exist_outfiles.txt; '
# 'compgen -G "$i/*.error" >> ~/launchdirs_exist_outfiles.txt; '
print('found {}/{} launch directories'.format(
sum([bool(fw_info['launch_dir']) for fw_info in fws_info.itervalues()]), len(fws_info)
))
parse_launchdirs()
Explanation: launch directories
for XSEDE: rsync to Mendel from
/oasis/projects/nsf/csd436/phuck/garden
/oasis/scratch/comet/phuck/temp_project
rsync -avz block_* mendel:/global/projecta/projectdirs/matgen/garden/
End of explanation
def get_file_path(extension, dirlist):
for fstr in dirlist:
fn, ext = os.path.splitext(os.path.basename(fstr))
if fn+ext == 'vasp.out':
continue
if ext == extension:
return fstr
return None
def scan_errors_warnings(f):
for line in f.readlines():
line_lower = line.strip().lower()
if 'error:' in line_lower or 'warning:' in line_lower:
return line.strip()
for fw_id, fw_info in tqdm(fws_info.items()):
fw_info['errors'] = []
if 'remote_dir' not in fw_info:
fw_info['errors'].append('remote_dir not found')
continue
local_dir = fw_info['local_dir']
if not os.path.exists(local_dir):
fw_info['errors'].append('local_dir not found')
continue
ls = glob.glob(os.path.join(local_dir, '*'))
if not ls:
fw_info['errors'].append('no files found in local_dir')
continue
error_file = get_file_path('.error', ls)
if error_file is not None:
# look for a traceback in *.error
with open(error_file, 'r') as f:
fcontent = f.read()
match = re.search('Traceback((.+\n)+)Traceback', fcontent)
if not match:
match = re.search('Traceback((.+\n)+)INFO', fcontent)
if not match:
match = re.search('Traceback((.+\n)+)$', fcontent)
if match:
fw_info['errors'].append('Traceback'+match.group(1))
else:
scan = scan_errors_warnings(f)
if scan:
fw_info['errors'].append(scan)
# look into .out file
out_file = get_file_path('.out', ls)
with open(out_file, 'r') as f:
scan = scan_errors_warnings(f)
if scan:
fw_info['errors'].append(scan)
# look into vasp.out
vasp_out = os.path.join(local_dir, 'vasp.out')
if os.path.exists(vasp_out):
with open(vasp_out, 'r') as f:
vasp_out_tail = f.readlines()[-1].strip()
fw_info['errors'].append(' -- '.join(['vasp.out', vasp_out_tail]))
# FIXME .out and .error for non-reservation mode one directory up
Explanation: analyze log output of fizzled workflows
scan for error messages
End of explanation
def add_fw_to_category(fw_id, key, cats):
if key in cats:
cats[key].append(fw_id)
else:
cats[key] = [fw_id]
categories = {}
for fw_id, fw_info in fws_info.iteritems():
if not fw_info['errors']:
add_fw_to_category(fw_id, 'no errors parsed', categories)
continue
for error in fw_info['errors']:
if 'remote_dir' in error or 'local_dir' in error:
add_fw_to_category(fw_id, error, categories)
elif error.startswith('Traceback'):
exc = ParsedException.from_string(error)
msg = exc.exc_msg[:50]
match = re.search('errors reached: (.*)', msg)
if match:
msg = match.group(1)
key = ' -- '.join([exc.exc_type, msg])
lineno = exc.frames[-1]['lineno']
key = ' -- '.join([key, os.path.basename(exc.source_file) + '#' + lineno])
add_fw_to_category(fw_id, key, categories)
else:
match = re.search('{(.*)}', error) # matches dictionary
if match:
dstr = '{' + match.group(1) + '}'
dstr = dstr.replace("u'", '"').replace("'", '"')
dstr = re.sub('{"handler": (.*), "errors"', '{"handler": "\g<1>", "errors"', dstr)
try:
d = json.loads(dstr)
except:
add_fw_to_category(fw_id, 'looks like dict but could not decode', categories)
else:
if 'handler' in d and 'errors' in d:
if '<' in d['handler']:
match = re.search('custodian\.vasp\.handlers\.(.*) object', d['handler'])
if match:
d['handler'] = match.group(1)
else:
raise ValueError('custodian.vasp.handlers not matched')
add_fw_to_category(fw_id, d['handler'], categories)
elif 'action' in d:
add_fw_to_category(fw_id, 'action', categories)
else:
add_fw_to_category(fw_id, 'found dict but not handler or action error', categories)
else:
add_fw_to_category(fw_id, error, categories)
break # only look at first error
print_categories(categories)
Explanation: categorize errors
End of explanation
fws_info[1564191]['remote_dir']
lpad.fireworks.find_one({'fw_id': 1564191}, {'spec._priority': 1, 'state': 1})
lpad.fireworks.find_one({'fw_id': 1285769}, {'spec._priority': 1, 'state': 1})
lpad.fireworks.find_one({'fw_id': 1399045}, {'spec._priority': 1, 'state': 1})
Explanation: debugging
End of explanation
f = open('mpcomplete_kitchaev.json', 'r')
import json
d = json.load(f)
def find_last_node(wflow):
for node in wflow['links'].keys():
if not wflow['links'][node]:
return node
raise ValueError('last node not found!')
for cif, info in d.items():
submission_id = info['submission_id']
wflow = lpad.workflows.find_one({'metadata.submission_id': submission_id}, wflows_projection)
if wflow['state'] != 'COMPLETED':
continue
fw_id = find_root_node(wflow)
task_ids = [None]
while 1:
launch_id = lpad.fireworks.find_one({'fw_id': fw_id}, {'launches': 1, '_id': 0})['launches'][-1]
launch = lpad.launches.find_one(
{'launch_id': launch_id, 'action.stored_data.task_id': {'$exists': 1}},
{'action.stored_data.task_id': 1, '_id': 0}
)
if launch:
task_ids.append(launch['action']['stored_data']['task_id'])
children = wflow['links'][str(fw_id)]
if not children:
break
fw_id = children[-1]
mat = db_jp.materials.find_one({'task_ids': {'$in': task_ids}}, {'task_id': 1, 'task_ids': 1, '_id': 0})
info['fw_id'] = fw_id
info['mp_id'] = mat['task_id']
print(d[cif])
#break
print('DONE')
fout = open('mpcomplete_kitchaev_mpids.json', 'w')
json.dump(d, fout)
Explanation: Kitchaev submissions: mp-ids
End of explanation
# mp_ids = ['mp-27187','mp-695866','mp-25732','mp-770957','mp-770953','mp-685168','mp-672214','mp-561549','mp-679630',
# 'mp-7323','mp-772665','mp-17895','mp-770566','mp-25772','mp-3009','mp-625837','mp-12797','mp-28588',
# 'mp-770887','mp-776836','mp-5185','mp-24570','mp-723049','mp-657176','mp-25766','mp-19548','mp-625823',
# 'mp-684950','mp-557613','mp-704536','mp-722237','mp-676950']
mp_ids = ['mp-5229']
snlgroup_ids = db_jp.materials.find({'task_ids': {'$in': mp_ids}}).distinct('snlgroup_id')
fw_ids_dosfix = lpad.fireworks.find({"spec.snlgroup_id": {'$in': snlgroup_ids}}).distinct('fw_id')
wflows_dosfix = list(lpad.workflows.find({'nodes': {'$in': fw_ids_dosfix}}))
fw_ids_rerun = []
fw_ids_defuse = []
task_ids = set()
for wflow in wflows_dosfix:
print('wf:', wflow['nodes'][0])
fw_ids_uniform = []
for fw in list(lpad.fireworks.find({'fw_id': {'$in': wflow['nodes']}})):
if 'Uniform' in fw['spec']['task_type']:
fw_ids_uniform.append(fw['fw_id'])
elif 'Boltztrap' in fw['spec']['task_type']:
fw_ids_defuse.append(fw['fw_id'])
elif 'VASP db' in fw['spec']['task_type']:
print(fw['fw_id'], fw['launches'][-1])
launch = lpad.launches.find_one({'launch_id': fw['launches'][-1]}, {'_id': 0, 'action.stored_data': 1})
task_ids.add(launch['action']['stored_data'].get('task_id'))
if not fw_ids_uniform:
continue
fw_ids_rerun.append(max(fw_ids_uniform))
len(fw_ids_rerun)
fw_ids_rerun
task_ids
' '.join(map(str, fw_ids_rerun))
fw_ids_defuse
fw_ids_run = []
for wflow in lpad.workflows.find({'nodes': {'$in': fw_ids_rerun}}):
for fw_id, fw_state in wflow['fw_states'].items():
if fw_state != 'COMPLETED' and fw_state != 'DEFUSED':
fw_ids_run.append(fw_id)
','.join(map(str, fw_ids_run))
' '.join(map(str, fw_ids_defuse))
fw_ids_dos_offset = []
for doc in list(lpad.workflows.find({'nodes': {'$in': fw_ids_gga}}, {'fw_states': 1, '_id': 0})):
for fw_id, fw_state in doc['fw_states'].items():
if fw_state == 'READY' or fw_state == 'WAITING':
fw_ids_dos_offset.append(fw_id)
len(fw_ids_dos_offset)
map(int, fw_ids_dos_offset)
Explanation: fw_ids for list of mp-ids to fix DOS offset
End of explanation
fw_ids_vasp_db_rerun = []
for fw_id, fw_info in fws_info.items():
if fw_info['launch_dir']: # GGA Uniform launch_dir exists
wf = lpad.workflows.find_one({'nodes': fw_id}, {'_id': 0, 'links': 1})
for daughter in wf['links'][str(fw_id)]:
fw = lpad.fireworks.find_one(
{'fw_id': daughter, 'spec.task_type': 'VASP db insertion'}, {'fw_id': 1, '_id': 0}
)
if fw:
fw_ids_vasp_db_rerun.append(fw['fw_id'])
break
len(fw_ids_vasp_db_rerun)
lpad.fireworks.update_many(
{'fw_id': {'$in': fw_ids_vasp_db_rerun}},
{"$set":{"state":"READY", "spec._tasks.0.update_duplicates": True}}
).raw_result
Explanation: Projections for BoltzTraP builder: set to READY and update_duplicates
End of explanation
with open('snl_tasks_atomate.json', 'r') as f:
data = json.load(f)
query = {} if not data else {'task_id': {'$nin': data.keys()}}
has_bs_piezo_dos = {'has_bandstructure': True, 'piezo': {'$exists': 1}, 'dos': {'$exists': 1}}
#query.update(has_bs_piezo_dos)
has_bs_dos = {'has_bandstructure': True, 'dos': {'$exists': 1}}
query.update(has_bs_dos)
docs = db_jp.materials.find(query, {'task_ids': 1, '_id': 0, 'task_id': 1, 'snl.snl_id': 1})
for idx,doc in tqdm(enumerate(docs), total=docs.count()):
mpid = doc['task_id']
data[mpid] = {'tasks': {}}
if set(has_bs_piezo_dos.keys()).issubset(query.keys()):
data[mpid]['tags'] = ['has_bs_piezo_dos']
if set(has_bs_dos.keys()).issubset(query.keys()):
data[mpid]['tags'] = ['has_bs_dos']
for task_id in doc['task_ids']:
tasks = list(db_vasp.tasks.find({'task_id': task_id}, {'dir_name': 1, '_id': 0}))
if len(tasks) > 1:
data[mpid]['error'] = 'found {} tasks'.format(len(tasks))
continue
elif not tasks:
data[mpid]['error'] = 'no task found'
continue
dir_name = tasks[0]['dir_name']
launch_dir = os.path.join(GARDEN, dir_name)
if not os.path.exists(launch_dir):
data[mpid]['error'] = '{} not found'.format(dir_name)
break
data[mpid]['tasks'][task_id] = launch_dir
data[mpid]['snl_id'] = doc['snl']['snl_id']
if not idx%2000:
with open('snl_tasks_atomate.json', 'w') as f:
json.dump(data, f)
#break
with open('snl_tasks_atomate.json', 'w') as f:
json.dump(data, f)
Explanation: SNL and Task Collections for atomate transition
End of explanation |
503 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content-Based Filtering Using Neural Networks
This notebook relies on files created in the content_based_preproc.ipynb notebook. Be sure to run the code in there before completing this notebook.
Also, you'll be using the python3 kernel from here on out so don't forget to change the kernel if it's still Python2.
Learning objectives
This notebook illustrates
Step1: Let's make sure you install the necessary version of tensorflow-hub. After doing the pip install below, click "Restart the kernel" on the notebook so that the Python environment picks up the new packages.
Step2: Note
Step3: Build the feature columns for the model
To start, you'll load the list of categories, authors and article ids you created in the previous Create Datasets notebook.
Step4: In the cell below you'll define the feature columns to use in your model. If necessary, remind yourself the various feature columns to use.
For the embedded_title_column feature column, use a Tensorflow Hub Module to create an embedding of the article title. Since the articles and titles are in German, you'll want to use a German language embedding module.
Explore the text embedding Tensorflow Hub modules available here. Filter by setting the language to 'German'. The 50 dimensional embedding should be sufficient for your purposes.
Step5: Create the input function
Next you'll create the input function for your model. This input function reads the data from the csv files you created in the previous notebook.
Step6: Create the model and train/evaluate
Next, you'll build your model which recommends an article for a visitor to the Kurier.at website. Look through the code below. You use the input_layer feature column to create the dense input layer to your network. This is just a single layer network where you can adjust the number of hidden units as a parameter.
Currently, you compute the accuracy between your predicted 'next article' and the actual 'next article' read next by the visitor. You'll also add an additional performance metric of top 10 accuracy to assess your model. To accomplish this, you compute the top 10 accuracy metric, add it to the metrics dictionary below and add it to the tf.summary so that this value is reported to Tensorboard as well.
Step7: Train and Evaluate
Step8: This takes a while to complete but in the end, you will get about 30% top 10 accuracies.
Make predictions with the trained model
With the model now trained, you can make predictions by calling the predict method on the estimator. Let's look at how your model predicts on the first five examples of the training set.
To start, you'll create a new file 'first_5.csv' which contains the first five elements of your training set. You'll also save the target values to a file 'first_5_content_ids' so you can compare your results.
Step9: Recall, to make predictions on the trained model you pass a list of examples through the input function. Complete the code below to make predictions on the examples contained in the "first_5.csv" file you created above.
Step12: Finally, you map the content id back to the article title. Let's compare your model's recommendation for the first example. This can be done in BigQuery. Look through the query below and make sure it is clear what is being returned. | Python Code:
%%bash
pip freeze | grep tensor
Explanation: Content-Based Filtering Using Neural Networks
This notebook relies on files created in the content_based_preproc.ipynb notebook. Be sure to run the code in there before completing this notebook.
Also, you'll be using the python3 kernel from here on out so don't forget to change the kernel if it's still Python2.
Learning objectives
This notebook illustrates:
1. How to build feature columns for a model using tf.feature_column.
2. How to create custom evaluation metrics and add them to Tensorboard.
3. How to train a model and make predictions with the saved model.
Each learning objective will correspond to a #TODO in the notebook, where you will complete the notebook cell's code before running the cell. Refer to the solution notebook for reference.
Tensorflow Hub should already be installed. You can check that it is by using "pip freeze".
End of explanation
!pip3 install tensorflow-hub==0.7.0
!pip3 install --upgrade tensorflow==1.15.3
!pip3 install google-cloud-bigquery==1.10
Explanation: Let's make sure you install the necessary version of tensorflow-hub. After doing the pip install below, click "Restart the kernel" on the notebook so that the Python environment picks up the new packages.
End of explanation
import os
import tensorflow as tf
import numpy as np
import tensorflow_hub as hub
import shutil
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.15.3'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
Explanation: Note: Please ignore any incompatibility warnings and errors and re-run the cell to view the installed tensorflow version.
End of explanation
categories_list = open("categories.txt").read().splitlines()
authors_list = open("authors.txt").read().splitlines()
content_ids_list = open("content_ids.txt").read().splitlines()
mean_months_since_epoch = 523
Explanation: Build the feature columns for the model
To start, you'll load the list of categories, authors and article ids you created in the previous Create Datasets notebook.
End of explanation
embedded_title_column = hub.text_embedding_column(
key="title",
module_spec="https://tfhub.dev/google/nnlm-de-dim50/1",
trainable=False)
content_id_column = tf.feature_column.categorical_column_with_hash_bucket(
key="content_id",
hash_bucket_size= len(content_ids_list) + 1)
embedded_content_column = tf.feature_column.embedding_column(
categorical_column=content_id_column,
dimension=10)
author_column = tf.feature_column.categorical_column_with_hash_bucket(key="author",
hash_bucket_size=len(authors_list) + 1)
embedded_author_column = tf.feature_column.embedding_column(
categorical_column=author_column,
dimension=3)
category_column_categorical = tf.feature_column.categorical_column_with_vocabulary_list(
key="category",
vocabulary_list=categories_list,
num_oov_buckets=1)
category_column = tf.feature_column.indicator_column(category_column_categorical)
months_since_epoch_boundaries = list(range(400,700,20))
months_since_epoch_column = tf.feature_column.numeric_column(
key="months_since_epoch")
months_since_epoch_bucketized = tf.feature_column.bucketized_column(
source_column = months_since_epoch_column,
boundaries = months_since_epoch_boundaries)
crossed_months_since_category_column = tf.feature_column.indicator_column(tf.feature_column.crossed_column(
keys = [category_column_categorical, months_since_epoch_bucketized],
hash_bucket_size = len(months_since_epoch_boundaries) * (len(categories_list) + 1)))
feature_columns = [embedded_content_column,
embedded_author_column,
category_column,
embedded_title_column,
crossed_months_since_category_column]
Explanation: In the cell below you'll define the feature columns to use in your model. If necessary, remind yourself of the various feature columns to use.
For the embedded_title_column feature column, use a Tensorflow Hub Module to create an embedding of the article title. Since the articles and titles are in German, you'll want to use a German language embedding module.
Explore the text embedding Tensorflow Hub modules available here. Filter by setting the language to 'German'. The 50 dimensional embedding should be sufficient for your purposes.
End of explanation
record_defaults = [["Unknown"], ["Unknown"],["Unknown"],["Unknown"],["Unknown"],[mean_months_since_epoch],["Unknown"]]
column_keys = ["visitor_id", "content_id", "category", "title", "author", "months_since_epoch", "next_content_id"]
label_key = "next_content_id"
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column,record_defaults=record_defaults)
features = dict(zip(column_keys, columns))
label = features.pop(label_key)
return features, label
# Create list of files that match pattern
file_list = tf.io.gfile.glob(filename)
# Create dataset from file list
dataset = # TODO 1: Your code here
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
Explanation: Create the input function
Next you'll create the input function for your model. This input function reads the data from the csv files you created in the previous notebook.
End of explanation
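One possible way to fill in TODO 1 inside read_dataset above is sketched here; it is an illustration rather than the course's reference solution, and it reuses the file_list and decode_csv names already defined in that function:
dataset = tf.data.TextLineDataset(file_list).map(decode_csv)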
def model_fn(features, labels, mode, params):
net = tf.feature_column.input_layer(features, params['feature_columns'])
for units in params['hidden_units']:
net = tf.layers.dense(net, units=units, activation=tf.nn.relu)
# Compute logits (1 per class).
logits = tf.layers.dense(net, params['n_classes'], activation=None)
predicted_classes = tf.argmax(logits, 1)
from tensorflow.python.lib.io import file_io
with file_io.FileIO('content_ids.txt', mode='r') as ifp:
content = tf.constant([x.rstrip() for x in ifp])
predicted_class_names = tf.gather(content, predicted_classes)
if mode == tf.estimator.ModeKeys.PREDICT:
predictions = {
'class_ids': predicted_classes[:, tf.newaxis],
'class_names' : predicted_class_names[:, tf.newaxis],
'probabilities': tf.nn.softmax(logits),
'logits': logits,
}
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
table = tf.contrib.lookup.index_table_from_file(vocabulary_file="content_ids.txt")
labels = table.lookup(labels)
# Compute loss.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
# Compute evaluation metrics.
accuracy = # TODO 2: Your code here
top_10_accuracy = tf.metrics.mean(tf.nn.in_top_k(predictions=logits,
targets=labels,
k=10))
metrics = {
'accuracy': accuracy,
'top_10_accuracy' : top_10_accuracy}
tf.summary.scalar('accuracy', accuracy[1])
tf.summary.scalar('top_10_accuracy', top_10_accuracy[1])
if mode == tf.estimator.ModeKeys.EVAL:
return tf.estimator.EstimatorSpec(
mode, loss=loss, eval_metric_ops=metrics)
# Create training op.
assert mode == tf.estimator.ModeKeys.TRAIN
optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
Explanation: Create the model and train/evaluate
Next, you'll build your model which recommends an article for a visitor to the Kurier.at website. Look through the code below. You use the input_layer feature column to create the dense input layer to your network. This is just a single layer network where you can adjust the number of hidden units as a parameter.
Currently, you compute the accuracy between your predicted 'next article' and the actual 'next article' read next by the visitor. You'll also add an additional performance metric of top 10 accuracy to assess your model. To accomplish this, you compute the top 10 accuracy metric, add it to the metrics dictionary below and add it to the tf.summary so that this value is reported to Tensorboard as well.
End of explanation
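A sketch of one way to complete TODO 2 in model_fn above (an illustration, not necessarily the official solution): tf.metrics.accuracy returns a (value, update_op) pair, which is what the metrics dictionary and the tf.summary.scalar call expect.
accuracy = tf.metrics.accuracy(labels=labels, predictions=predicted_classes)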
outdir = 'content_based_model_trained'
shutil.rmtree(outdir, ignore_errors = True) # start fresh each time
#tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
estimator = tf.estimator.Estimator(
model_fn=model_fn,
model_dir = outdir,
params={
'feature_columns': feature_columns,
'hidden_units': [200, 100, 50],
'n_classes': len(content_ids_list)
})
# Provide input data for training
train_spec = tf.estimator.TrainSpec(
input_fn = # TODO 3: Your code here
max_steps = 2000)
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset("test_set.csv", tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 30,
throttle_secs = 60)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
Explanation: Train and Evaluate
End of explanation
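For TODO 3 above, one possible completion (a sketch) reuses the read_dataset input function defined earlier; 'training_set.csv' is the training file referenced by the head command further below.
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset("training_set.csv", tf.estimator.ModeKeys.TRAIN),
max_steps = 2000)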
%%bash
head -5 training_set.csv > first_5.csv
head first_5.csv
awk -F "\"*,\"*" '{print $2}' first_5.csv > first_5_content_ids
Explanation: This takes a while to complete but in the end, you will get about 30% top 10 accuracies.
Make predictions with the trained model
With the model now trained, you can make predictions by calling the predict method on the estimator. Let's look at how your model predicts on the first five examples of the training set.
To start, you'll create a new file 'first_5.csv' which contains the first five elements of your training set. You'll also save the target values to a file 'first_5_content_ids' so you can compare your results.
End of explanation
output = list(estimator.predict(input_fn=read_dataset("first_5.csv", tf.estimator.ModeKeys.PREDICT)))
import numpy as np
recommended_content_ids = [np.asscalar(d["class_names"]).decode('UTF-8') for d in output]
content_ids = open("first_5_content_ids").read().splitlines()
Explanation: Recall, to make predictions on the trained model you pass a list of examples through the input function. Complete the code below to make predictions on the examples contained in the "first_5.csv" file you created above.
End of explanation
from google.cloud import bigquery
recommended_title_sql = """
#standardSQL
SELECT
(SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) = \"{}\"
LIMIT 1""".format(recommended_content_ids[0])
current_title_sql = """
#standardSQL
SELECT
(SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) = \"{}\"
LIMIT 1""".format(content_ids[0])
recommended_title = bigquery.Client().query(recommended_title_sql).to_dataframe()['title'].tolist()[0].encode('utf-8').strip()
current_title = bigquery.Client().query(current_title_sql).to_dataframe()['title'].tolist()[0].encode('utf-8').strip()
print("Current title: {} ".format(current_title))
print("Recommended title: {}".format(recommended_title))
Explanation: Finally, you map the content id back to the article title. Let's compare your model's recommendation for the first example. This can be done in BigQuery. Look through the query below and make sure it is clear what is being returned.
End of explanation |
504 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step5: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
# Size of the encoding layer (the hidden layer)
encoding_dim = 32
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from the logits
decoded = tf.nn.sigmoid(logits, name='output')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
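The test loss mentioned above is not computed in the cells shown here; a minimal sketch (my addition, to be run after training while the session is still open) would be:
test_cost = sess.run(cost, feed_dict={inputs_: mnist.test.images, targets_: mnist.test.images})
print("Test loss: {:.4f}".format(test_cost))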
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs, targets_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation |
505 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This will be a short post to introduce a notebook extension that I am using in an internal course I am teaching at my company.
If it is also useful to anyone else for showing Python basics (2 and 3, plus Java and Javascript) to absolute beginners, so much the better.
Code name
Step1: Once this is done, the cell magic should be available for use
Step2: Now an example with javascript | Python Code:
%load_ext jupytor
Explanation: This will be a short post to introduce a notebook extension that I am using in an internal course I am teaching at my company.
If it is also useful to anyone else for showing Python basics (2 and 3, plus Java and Javascript) to absolute beginners, so much the better.
Code name: Jupytor
All this extension does is embed the pythontutor page inside an IFrame, using the code we have defined in a code cell preceded by the %%jupytor cell magic.
As I mentioned before, you can write Python2, Python3, Java and Javascript code, which are the languages supported by pythontutor.
Example
First we need to install the extension. It is available on PyPI, so you can install it with pip install jupytor. Once installed, inside an IPython notebook you should load it using:
End of explanation
%%jupytor --lang python3
a = 1
b = 2
def add(x, y):
return x + y
c = add(a, b)
Explanation: Once this is done, the cell magic should be available for use:
End of explanation
%%jupytor --lang javascript
var a = 1;
var b = 1;
console.log(a + b);
Explanation: Now an example with javascript:
End of explanation |
506 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nonlinear Dimensionality Reduction
G. Richards (2016), based on materials from Ivezic, Connolly, Miller, Leighly, and VanderPlas.
Today we will talk about the concepts of
* manifold learning
* nonlinear dimensionality reduction
Specifically using the following algorithms
* local linear embedding (LLE)
* isometric mapping (IsoMap)
* t-distributed Stochastic Neighbor Embedding (t-SNE)
Let's start by my echoing the brief note of caution given in Adam Miller's notebook
Step1: See what LLE does for the digits data, using the 7 nearest neighbors and 2 components.
Step2: Isometric Mapping
is based on the multi-dimensional scaling (MDS) framework. It was introduced in the same volume of Science as the article above. See Tenenbaum, de Silva, & Langford (2000).
Geodesic curves are used to recover non-linear structure.
In Scikit-Learn IsoMap is implemented as follows
Step3: Try 7 neighbors and 2 dimensions on the digits data.
Step4: t-SNE
t-distributed Stochastic Neighbor Embedding (t-SNE) is not discussed in the book, but Scikit-Learn does have a t-SNE implementation and it is well worth mentioning this manifold learning algorithm too. SNE itself was developed by Hinton & Roweis with the "$t$" part being added by van der Maaten & Hinton. It works like the other manifold learning algorithms. Try it on the digits data.
Step5: You'll know if you have done it right if you understand Adam Miller's comment "Holy freakin' smokes. That is magic. (It's possible we just solved science)."
Personally, I think that some exclamation points may be needed in there!
What's even more illuminating is to make the plot using the actual digits to plot the points. Then you can see why certain digits are alike or split into multiple regions. Can you explain the patterns you see here? | Python Code:
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
X = np.random.normal(size=(1000,2)) # 1000 points in 2D
R = np.random.random((2,10)) # projection matrix
X = np.dot(X,R) # now a 2D linear manifold in 10D space
k = 5 # Number of neighbors to use in fit
n = 2 # Number of dimensions to fit
lle = LocallyLinearEmbedding(k,n)
lle.fit(X)
proj = lle.transform(X) # 1000x2 projection of the data
Explanation: Nonlinear Dimensionality Reduction
G. Richards (2016), based on materials from Ivezic, Connolly, Miller, Leighly, and VanderPlas.
Today we will talk about the concepts of
* manifold learning
* nonlinear dimensionality reduction
Specifically using the following algorithms
* local linear embedding (LLE)
* isometric mapping (IsoMap)
* t-distributed Stochastic Neighbor Embedding (t-SNE)
Let's start by my echoing the brief note of caution given in Adam Miller's notebook: "astronomers will often try to derive physical insight from PCA eigenspectra or eigentimeseries, but this is not advisable as there is no physical reason for the data to be linearly and orthogonally separable". Moreover, physical components are (generally) positive definite. So, PCA is great for dimensional reduction, but for doing physics there are generally better choices.
While NMF "solves" the issue of negative components, it is still a linear process. For data with non-linear correlations, an entire field, known as Manifold Learning and nonlinear dimensionality reduction, has been developed, with several algorithms available via the sklearn.manifold module.
For example, if your data set looks like this:
Then PCA is going to give you something like this.
Clearly not very helpful!
What you really want is something more like the results below. For more examples see
Vanderplas & Connolly 2009
Local Linear Embedding
Local Linear Embedding attempts to embed high-$D$ data in a lower-$D$ space. Crucially it also seeks to preserve the geometry of the local "neighborhoods" around each point. In the case of the "S" curve, it seeks to unroll the data. The steps are
Step 1: define local geometry
- local neighborhoods determined from $k$ nearest neighbors.
- for each point calculate weights that reconstruct a point from its $k$ nearest
neighbors via
$$
\begin{equation}
\mathcal{E}_1(W) = \left|X - WX\right|^2,
\end{equation}
$$
where $X$ is an $N\times K$ matrix and $W$ is an $N\times N$ matrix that minimizes the reconstruction error.
Essentially this is finding the hyperplane that describes the local surface at each point within the data set. So, imagine that you have a bunch of square tiles and you are trying to tile the surface with them.
Step 2: embed within a lower dimensional space
- set all $W_{ij}=0$ except when point $j$ is one of the $k$ nearest neighbors of point $i$.
- $W$ becomes very sparse for $k \ll N$ (only $Nk$ entries in $W$ are non-zero).
- minimize
$$
\begin{equation}
\mathcal{E}_2(Y) = \left|Y - W Y\right|^2,
\end{equation}
$$
with $W$ fixed to find an $N$ by $d$ matrix ($d$ is the new dimensionality).
Step 1 requires a nearest-neighbor search.
Step 2 requires an
eigenvalue decomposition of the matrix $C_W \equiv (I-W)^T(I-W)$.
LLE has been applied to data as diverse as galaxy spectra, stellar spectra, and photometric light curves. It was introduced by Roweis & Saul (2000).
Scikit-Learn's call to LLE is as follows, with a more detailed example already being given above.
End of explanation
# Execute this cell to load the digits sample
%matplotlib inline
import numpy as np
from sklearn.datasets import load_digits
from matplotlib import pyplot as plt
digits = load_digits()
grid_data = np.reshape(digits.data[0], (8,8)) #reshape to 8x8
plt.imshow(grid_data, interpolation = "nearest", cmap = "bone_r")
print(grid_data)
X = digits.data
y = digits.target
#LLE
from sklearn.manifold import LocallyLinearEmbedding
# Complete
Explanation: See what LLE does for the digits data, using the 7 nearest neighbors and 2 components.
End of explanation
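One possible completion of the # Complete cell above (a sketch, not the official solution) is shown below; it assumes the X and y arrays defined in that cell and uses 7 neighbors and 2 components as requested.
lle_digits = LocallyLinearEmbedding(n_neighbors=7, n_components=2)
X_lle = lle_digits.fit_transform(X)  # digits projected down to 2D
plt.scatter(X_lle[:, 0], X_lle[:, 1], c=y, cmap="nipy_spectral", s=8)
plt.colorbar(label="digit")
plt.title("LLE (7 neighbors) projection of the digits data")
plt.show()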
# Execute this cell
import numpy as np
from sklearn.manifold import Isomap
XX = np.random.normal(size=(1000,2)) # 1000 points in 2D
R = np.random.random((2,10)) # projection matrix
XX = np.dot(XX,R) # X is a 2D manifold in 10D space
k = 5 # number of neighbors
n = 2 # number of dimensions
iso = Isomap(k,n)
iso.fit(XX)
proj = iso.transform(XX) # 1000x2 projection of the data
Explanation: Isometric Mapping
is based on multi-dimensional scaling (MDS) framework. It was introduced in the same volume of science as the article above. See Tenenbaum, de Silva, & Langford (2000).
Geodesic curves are used to recover non-linear structure.
In Scikit-Learn IsoMap is implemented as follows:
End of explanation
# IsoMap
from sklearn.manifold import Isomap
# Complete
Explanation: Try 7 neighbors and 2 dimensions on the digits data.
End of explanation
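A minimal sketch of how the IsoMap exercise could be completed, assuming the same X and y arrays from the digits cell above (this is one reasonable choice, not the graded answer):
iso_digits = Isomap(n_neighbors=7, n_components=2)
X_iso = iso_digits.fit_transform(X)  # digits projected down to 2D with geodesic distances
plt.scatter(X_iso[:, 0], X_iso[:, 1], c=y, cmap="nipy_spectral", s=8)
plt.colorbar(label="digit")
plt.title("IsoMap (7 neighbors) projection of the digits data")
plt.show()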
# t-SNE
from sklearn.manifold import TSNE
# Complete
Explanation: t-SNE
t-distributed Stochastic Neighbor Embedding (t-SNE) is not discussed in the book, but Scikit-Learn does have a t-SNE implementation, and it is well worth mentioning this manifold learning algorithm too. SNE itself was developed by Hinton & Roweis with the "$t$" part being added by van der Maaten & Hinton. It works like the other manifold learning algorithms. Try it on the digits data.
End of explanation
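A hedged sketch of the t-SNE completion: it defines X_reduced, which is the name the plotting cell below passes to plot_embedding (the random_state value is an arbitrary choice for reproducibility).
tsne = TSNE(n_components=2, random_state=0)
X_reduced = tsne.fit_transform(X)  # 2D embedding used by plot_embedding below
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y, cmap="nipy_spectral", s=8)
plt.colorbar(label="digit")
plt.title("t-SNE projection of the digits data")
plt.show()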
# Execute this cell
from matplotlib import offsetbox
#----------------------------------------------------------------------
# Scale and visualize the embedding vectors
def plot_embedding(X):
x_min, x_max = np.min(X, 0), np.max(X, 0)
X = (X - x_min) / (x_max - x_min)
plt.figure()
ax = plt.subplot(111)
for i in range(X.shape[0]):
#plt.text(X[i, 0], X[i, 1], str(digits.target[i]), color=plt.cm.Set1(y[i] / 10.), fontdict={'weight': 'bold', 'size': 9})
plt.text(X[i, 0], X[i, 1], str(digits.target[i]), color=plt.cm.nipy_spectral(y[i]/9.))
shown_images = np.array([[1., 1.]]) # just something big
for i in range(digits.data.shape[0]):
dist = np.sum((X[i] - shown_images) ** 2, 1)
if np.min(dist) < 4e-3:
# don't show points that are too close
continue
shown_images = np.r_[shown_images, [X[i]]]
imagebox = offsetbox.AnnotationBbox(offsetbox.OffsetImage(digits.images[i], cmap=plt.cm.gray_r), X[i])
ax.add_artist(imagebox)
plt.xticks([]), plt.yticks([])
plot_embedding(X_reduced)
plt.show()
Explanation: You'll know if you have done it right if you understand Adam Miller's comment "Holy freakin' smokes. That is magic. (It's possible we just solved science)."
Personally, I think that some exclamation points may be needed in there!
What's even more illuminating is to make the plot using the actual digits to plot the points. Then you can see why certain digits are alike or split into multiple regions. Can you explain the patterns you see here?
End of explanation |
507 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: 你好,量子世界
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 安装 TensorFlow Quantum:
Step3: 现在,导入 TensorFlow 和模块依赖项:
Step4: 1. 基础知识
1.1 Cirq 和参数化量子电路
在研究 TensorFlow Quantum (TFQ) 之前,我们先来了解 <a target="_blank" href="https
Step5: 下面的代码可以使用您的参数创建一个双量子位电路:
Step6: 要评估电路,您可以使用 cirq.Simulator 接口。通过传入 cirq.ParamResolver 对象,您可以将电路中的自由参数替换为具体的数字。下面的代码可以计算参数化电路的原始状态向量输出:
Step7: 如果脱离模拟环境,您将无法直接访问状态向量(请注意上面的输出中的复杂数字)。为了达到物理真实感,您必须指定一个测量值,将状态向量转换为经典计算机可以理解的实数。Cirq 使用 <a target="_blank" href="https
Step8: 1.2 作为向量的量子电路
TensorFlow Quantum (TFQ) 提供了 tfq.convert_to_tensor,后者是一个可以将 Cirq 对象转换为张量的函数。这样,您可以将 Cirq 对象发送到我们的<a target="_blank" href="https
Step9: 这可以将 Cirq 对象编码为 tf.string 张量,tfq 运算在需要时可以对这些张量进行解码。
Step10: 1.3 电路模拟批处理
TFQ 提供了计算期望值、样本和状态向量的方法。目前,我们先重点介绍期望值。
计算期望值的最高级接口是 tfq.layers.Expectation 层,它是一个 tf.keras.Layer。简而言之,该层等效于通过许多 cirq.ParamResolvers 模拟参数化电路;不过,TFQ 允许对随后的 TensorFlow 语义进行批处理,而电路则使用高效的 C++ 代码进行模拟。
创建一组替代 a 和 b 参数的值:
Step11: 在 Cirq 中通过参数值对电路执行进行批处理需要一个循环:
Step12: TFQ 中简化了同一运算:
Step13: 2. 混合量子-经典优化
现在,您已经了解基础知识。接下来,我们使用 TensorFlow Quantum 构造一个混合量子-经典神经网络。您将训练一个经典神经网络来控制单个量子位。我们将优化控制,以便将量子位正确准备为 0 或 1 状态,克服模拟系统校准误差。下面是架构图:
<img src="./images/nn_control1.png" width="1000">
即使没有神经网络,这也是一个很容易解决的问题,但主题与您使用 TFQ 可能解决的实际量子控制问题类似。它使用 tf.keras.Model 中的 tfq.layers.ControlledPQC(参数化量子电路)层演示了一个端到端量子-经典计算示例。
对于本教程中的实现,此架构分为以下三部分:
输入电路或数据点电路:前三个 $R$ 门。
受控电路:另外三个 $R$ 门。
控制器:设置受控电路参数的经典神经网络。
2.1 受控电路定义
如上图所示,定义可学习的单个位旋转。这与我们的受控电路对应。
Step14: 2.2 控制器
现在,定义控制器网络:
Step15: 给定一批命令,控制器就会输出受控电路的一批控制信号。
控制器是随机初始化的,所以这些输出目前没有用处。
Step16: 2.3 将控制器连接到电路
使用 tfq 将控制器作为单个 keras.Model 连接到受控电路。
请参阅 Keras 函数式 API 指南详细了解这种样式的模型定义。
首先,定义模型的输入:
Step17: 接着,将运算应用到这些输入来定义计算。
Step18: 现在,将此计算打包成 tf.keras.Model:
Step19: 该网络架构如下面的模型图所示。将模型图与架构图进行对比可验证其正确性。
注:可能需要安装了 graphviz 软件包的系统。
Step20: 此模型需要两个输入:控制器的命令,以及控制器尝试纠正其输出的输入电路。
2.4 数据集
该模型会尝试为每个命令的 $\hat{Z}$ 输出正确的测量值。命令和正确值的定义如下。
Step21: 这并不是此任务的完整训练数据集。数据集中的每个数据点还需要一个输入电路。
2.4 输入电路定义
下面的输入电路定义该模型将学习纠正的随机校准误差。
Step22: 电路有两个副本,每个数据点一个。
Step23: 2.5 训练
利用定义的输入,您可以试运行 tfq 模型。
Step24: 现在,请运行标准训练流程,针对 expected_outputs 调整这些值。
Step26: 从此图中您可以看到,神经网络已经学会解决系统校准错误。
2.6 验证输出
现在,使用训练的模型来纠正量子位校准误差。对于 Cirq:
Step27: 在训练期间,损失函数的值可提供模型学习效果的大致情况。损失值越小,以上代码单元中的期望值就越接近 desired_values。如果您不太关心参数值,则随时可以使用 tfq 检查上面的输出:
Step28: 3 学习准备不同算子的本征态
您可以随意将 $\pm \hat{Z}$ 本征态与 1 和 0 对应。但为了简便起见,您可以让 1 与 $+ \hat{Z}$ 本征态对应,而让 0 与 $-\hat{X}$ 本征态对应。一种实现方式是为每个命令指定一个不同的测量算子,如下图所示:
<img src="./images/nn_control2.png" width="1000">
这要求使用 <code>tfq.layers.Expectation</code>。现在,您的输入已经包括三个对象:电路、命令和算子。输出仍为期望值。
3.1 新模型定义
我们来看看完成此任务的模型:
Step29: 下面是控制器网络:
Step30: 使用 tfq 将电路与控制器合并到单个 keras.Model 中:
Step31: 3.2 数据集
现在,对于为 model_circuit 提供的每个数据点,还要包括要测量的算子:
Step32: 3.3 训练
现在,您已经有了新的输入和输出,可以使用 Keras 重新进行训练。
Step33: 损失函数的值已降为零。
controller 可作为独立模型提供。您可以调用控制器,并检查其对每个命令信号的响应。要正确对比这些输出与 random_rotations 的内容,您可能需要花些功夫。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install tensorflow==2.4.1
Explanation: Hello, many worlds
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/quantum/tutorials/hello_many_worlds"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/quantum/tutorials/hello_many_worlds.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/quantum/tutorials/hello_many_worlds.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/quantum/tutorials/hello_many_worlds.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a></td>
</table>
This tutorial shows how a classical neural network can learn to correct qubit calibration errors. It introduces <a target="_blank" href="https://github.com/quantumlib/Cirq" class="external">Cirq</a>, a Python framework to create, edit, and invoke Noisy Intermediate Scale Quantum (NISQ) circuits, and demonstrates how Cirq interfaces with TensorFlow Quantum.
Setup
End of explanation
!pip install tensorflow-quantum
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
Explanation: Install TensorFlow Quantum:
End of explanation
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
Explanation: Now import TensorFlow and the module dependencies:
End of explanation
a, b = sympy.symbols('a b')
Explanation: 1. The basics
1.1 Cirq and parameterized quantum circuits
Before exploring TensorFlow Quantum (TFQ), let's look at some basics of <a target="_blank" href="https://github.com/quantumlib/Cirq" class="external">Cirq</a>, a Python library for quantum computing developed by Google. You use it to define circuits, including static and parameterized gates.
Cirq uses <a target="_blank" href="https://www.sympy.org" class="external">SymPy</a> symbols to represent free parameters.
End of explanation
# Create two qubits
q0, q1 = cirq.GridQubit.rect(1, 2)
# Create a circuit on these qubits using the parameters you created above.
circuit = cirq.Circuit(
cirq.rx(a).on(q0),
cirq.ry(b).on(q1), cirq.CNOT(control=q0, target=q1))
SVGCircuit(circuit)
Explanation: The following code creates a two-qubit circuit using your parameters:
End of explanation
# Calculate a state vector with a=0.5 and b=-0.5.
resolver = cirq.ParamResolver({a: 0.5, b: -0.5})
output_state_vector = cirq.Simulator().simulate(circuit, resolver).final_state_vector
output_state_vector
Explanation: To evaluate a circuit, you can use the cirq.Simulator interface. By passing in a cirq.ParamResolver object, you replace the free parameters in the circuit with concrete numbers. The code below computes the raw state-vector output of your parameterized circuit:
End of explanation
z0 = cirq.Z(q0)
qubit_map={q0: 0, q1: 1}
z0.expectation_from_state_vector(output_state_vector, qubit_map).real
z0x1 = 0.5 * z0 + cirq.X(q1)
z0x1.expectation_from_state_vector(output_state_vector, qubit_map).real
Explanation: If you step outside of simulation you cannot access the state vector directly (notice the complex numbers in the output above). To be physically realistic, you must specify a measurement, which converts a state vector into a real number that classical computers can understand. Cirq specifies measurements using combinations of the <a target="_blank" href="https://en.wikipedia.org/wiki/Pauli_matrices" class="external">Pauli operators</a> $\hat{X}$, $\hat{Y}$, and $\hat{Z}$. For example, the following code measures $\hat{Z}_0$ and $\frac{1}{2}\hat{Z}_0 + \hat{X}_1$ on the state vector you just simulated.
End of explanation
# Rank 1 tensor containing 1 circuit.
circuit_tensor = tfq.convert_to_tensor([circuit])
print(circuit_tensor.shape)
print(circuit_tensor.dtype)
Explanation: 1.2 Quantum circuits as tensors
TensorFlow Quantum (TFQ) provides tfq.convert_to_tensor, a function that converts Cirq objects into tensors. This lets you send Cirq objects to our <a target="_blank" href="https://tensorflow.google.cn/quantum/api_docs/python/tfq/layers">quantum layers</a> and <a target="_blank" href="https://tensorflow.google.cn/quantum/api_docs/python/tfq/get_expectation_op">quantum ops</a>. The function can be called on lists or arrays of Cirq Circuits and Cirq Paulis:
End of explanation
# Rank 1 tensor containing 2 Pauli operators.
pauli_tensor = tfq.convert_to_tensor([z0, z0x1])
pauli_tensor.shape
Explanation: This encodes the Cirq objects as tf.string tensors that tfq operations decode as needed.
End of explanation
batch_vals = np.array(np.random.uniform(0, 2 * np.pi, (5, 2)), dtype=np.float32)
Explanation: 1.3 Batching circuit simulation
TFQ provides methods for computing expectation values, samples, and state vectors. For now, let's focus on expectation values.
The highest-level interface for calculating expectation values is the tfq.layers.Expectation layer, which is a tf.keras.Layer. In its simplest form, this layer is equivalent to simulating a parameterized circuit over many cirq.ParamResolvers; however, TFQ allows batching following TensorFlow semantics, and circuits are simulated using efficient C++ code.
Create a batch of values to substitute for our a and b parameters:
End of explanation
cirq_results = []
cirq_simulator = cirq.Simulator()
for vals in batch_vals:
resolver = cirq.ParamResolver({a: vals[0], b: vals[1]})
final_state_vector = cirq_simulator.simulate(circuit, resolver).final_state_vector
cirq_results.append(
[z0.expectation_from_state_vector(final_state_vector, {
q0: 0,
q1: 1
}).real])
print('cirq batch results: \n {}'.format(np.array(cirq_results)))
Explanation: Batching circuit execution over parameter values in Cirq requires a loop:
End of explanation
tfq.layers.Expectation()(circuit,
symbol_names=[a, b],
symbol_values=batch_vals,
operators=z0)
Explanation: The same operation is simplified in TFQ:
End of explanation
# Parameters that the classical NN will feed values into.
control_params = sympy.symbols('theta_1 theta_2 theta_3')
# Create the parameterized circuit.
qubit = cirq.GridQubit(0, 0)
model_circuit = cirq.Circuit(
cirq.rz(control_params[0])(qubit),
cirq.ry(control_params[1])(qubit),
cirq.rx(control_params[2])(qubit))
SVGCircuit(model_circuit)
Explanation: 2. Hybrid quantum-classical optimization
Now that you've seen the basics, let's use TensorFlow Quantum to construct a hybrid quantum-classical neural net. You will train a classical neural net to control a single qubit. The control will be optimized to correctly prepare the qubit in the 0 or 1 state, overcoming a simulated systematic calibration error. This figure shows the architecture:
<img src="./images/nn_control1.png" width="1000">
Even without a neural network this is a straightforward problem to solve, but the theme is similar to the real quantum control problems you might solve using TFQ. It demonstrates an end-to-end quantum-classical computation example using the tfq.layers.ControlledPQC (Parametrized Quantum Circuit) layer inside a tf.keras.Model.
For the implementation of this tutorial, this architecture is split into three parts:
The input circuit or datapoint circuit: the first three $R$ gates.
The controlled circuit: the other three $R$ gates.
The controller: the classical neural network that sets the parameters of the controlled circuit.
2.1 The controlled circuit definition
Define a learnable single-bit rotation, as indicated in the figure above. This corresponds to our controlled circuit.
End of explanation
# The classical neural network layers.
controller = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='elu'),
tf.keras.layers.Dense(3)
])
Explanation: 2.2 The controller
Now define the controller network:
End of explanation
controller(tf.constant([[0.0],[1.0]])).numpy()
Explanation: Given a batch of commands, the controller outputs a batch of control signals for the controlled circuit.
The controller is randomly initialized, so these outputs are not useful yet.
End of explanation
# This input is the simulated miscalibration that the model will learn to correct.
circuits_input = tf.keras.Input(shape=(),
# The circuit-tensor has dtype `tf.string`
dtype=tf.string,
name='circuits_input')
# Commands will be either `0` or `1`, specifying the state to set the qubit to.
commands_input = tf.keras.Input(shape=(1,),
dtype=tf.dtypes.float32,
name='commands_input')
Explanation: 2.3 Connect the controller to the circuit
Use tfq to connect the controller to the controlled circuit as a single keras.Model.
See the Keras Functional API guide for more about this style of model definition.
First define the inputs to the model:
End of explanation
dense_2 = controller(commands_input)
# TFQ layer for classically controlled circuits.
expectation_layer = tfq.layers.ControlledPQC(model_circuit,
# Observe Z
operators = cirq.Z(qubit))
expectation = expectation_layer([circuits_input, dense_2])
Explanation: Next apply operations to those inputs, to define the computation.
End of explanation
# The full Keras model is built from our layers.
model = tf.keras.Model(inputs=[circuits_input, commands_input],
outputs=expectation)
Explanation: Now package this computation as a tf.keras.Model:
End of explanation
tf.keras.utils.plot_model(model, show_shapes=True, dpi=70)
Explanation: The network architecture is indicated by the plot of the model below. Compare this model plot to the architecture diagram to verify its correctness.
Note: this may require a system with the graphviz package installed.
End of explanation
# The command input values to the classical NN.
commands = np.array([[0], [1]], dtype=np.float32)
# The desired Z expectation value at output of quantum circuit.
expected_outputs = np.array([[1], [-1]], dtype=np.float32)
Explanation: This model takes two inputs: the commands for the controller, and the input circuit whose output the controller is trying to correct.
2.4 The dataset
The model attempts to output the correct measurement value of $\hat{Z}$ for each command. The commands and correct values are defined below.
End of explanation
random_rotations = np.random.uniform(0, 2 * np.pi, 3)
noisy_preparation = cirq.Circuit(
cirq.rx(random_rotations[0])(qubit),
cirq.ry(random_rotations[1])(qubit),
cirq.rz(random_rotations[2])(qubit)
)
datapoint_circuits = tfq.convert_to_tensor([
noisy_preparation
] * 2) # Make two copies of this circuit
Explanation: This is not the entire training dataset for this task. Each datapoint in the dataset also needs an input circuit.
2.4 Input circuit definition
The input circuit below defines the random miscalibration the model will learn to correct.
End of explanation
datapoint_circuits.shape
Explanation: There are two copies of the circuit, one for each datapoint.
End of explanation
model([datapoint_circuits, commands]).numpy()
Explanation: 2.5 Training
With the inputs defined you can test-run the tfq model.
End of explanation
optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)
loss = tf.keras.losses.MeanSquaredError()
model.compile(optimizer=optimizer, loss=loss)
history = model.fit(x=[datapoint_circuits, commands],
y=expected_outputs,
epochs=30,
verbose=0)
plt.plot(history.history['loss'])
plt.title("Learning to Control a Qubit")
plt.xlabel("Iterations")
plt.ylabel("Error in Control")
plt.show()
Explanation: Now run a standard training procedure, adjusting these values toward the expected_outputs.
End of explanation
def check_error(command_values, desired_values):
Based on the value in `command_value` see how well you could prepare
the full circuit to have `desired_value` when taking expectation w.r.t. Z.
params_to_prepare_output = controller(command_values).numpy()
full_circuit = noisy_preparation + model_circuit
# Test how well you can prepare a state to get the expectation
# value in `desired_values`
for index in [0, 1]:
state = cirq_simulator.simulate(
full_circuit,
{s:v for (s,v) in zip(control_params, params_to_prepare_output[index])}
).final_state_vector
expt = cirq.Z(qubit).expectation_from_state_vector(state, {qubit: 0}).real
print(f'For a desired output (expectation) of {desired_values[index]} with'
f' noisy preparation, the controller\nnetwork found the following '
f'values for theta: {params_to_prepare_output[index]}\nWhich gives an'
f' actual expectation of: {expt}\n')
check_error(commands, expected_outputs)
Explanation: From this plot you can see that the neural network has learned to overcome the systematic miscalibration.
2.6 Verify outputs
Now use the trained model to correct the qubit calibration errors. With Cirq:
End of explanation
model([datapoint_circuits, commands])
Explanation: During training, the value of the loss function gives a rough idea of how well the model is learning. The lower the loss, the closer the expectation values in the cell above are to desired_values. If you aren't as concerned with the parameter values, you can always check the outputs from above using tfq:
End of explanation
# Define inputs.
commands_input = tf.keras.layers.Input(shape=(1),
dtype=tf.dtypes.float32,
name='commands_input')
circuits_input = tf.keras.Input(shape=(),
# The circuit-tensor has dtype `tf.string`
dtype=tf.dtypes.string,
name='circuits_input')
operators_input = tf.keras.Input(shape=(1,),
dtype=tf.dtypes.string,
name='operators_input')
Explanation: 3 Learning to prepare eigenstates of different operators
The choice of the $\pm \hat{Z}$ eigenstates corresponding to 1 and 0 was arbitrary. For simplicity you could just as easily have wanted 1 to correspond to the $+ \hat{Z}$ eigenstate and 0 to correspond to the $-\hat{X}$ eigenstate. One way to accomplish this is by specifying a different measurement operator for each command, as indicated in the figure below:
<img src="./images/nn_control2.png" width="1000">
This requires use of <code>tfq.layers.Expectation</code>. Now your input has grown to include three objects: circuit, command, and operator. The output is still the expectation value.
3.1 New model definition
Let's take a look at the model to accomplish this task:
End of explanation
# Define classical NN.
controller = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='elu'),
tf.keras.layers.Dense(3)
])
Explanation: Here is the controller network:
End of explanation
dense_2 = controller(commands_input)
# Since you aren't using a PQC or ControlledPQC you must append
# your model circuit onto the datapoint circuit tensor manually.
full_circuit = tfq.layers.AddCircuit()(circuits_input, append=model_circuit)
expectation_output = tfq.layers.Expectation()(full_circuit,
symbol_names=control_params,
symbol_values=dense_2,
operators=operators_input)
# Construct your Keras model.
two_axis_control_model = tf.keras.Model(
inputs=[circuits_input, commands_input, operators_input],
outputs=[expectation_output])
Explanation: Combine the circuit and the controller into a single keras.Model using tfq:
End of explanation
# The operators to measure, for each command.
operator_data = tfq.convert_to_tensor([[cirq.X(qubit)], [cirq.Z(qubit)]])
# The command input values to the classical NN.
commands = np.array([[0], [1]], dtype=np.float32)
# The desired expectation value at output of quantum circuit.
expected_outputs = np.array([[1], [-1]], dtype=np.float32)
Explanation: 3.2 The dataset
Now you will also include the operators you wish to measure for each datapoint supplied for model_circuit:
End of explanation
optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)
loss = tf.keras.losses.MeanSquaredError()
two_axis_control_model.compile(optimizer=optimizer, loss=loss)
history = two_axis_control_model.fit(
x=[datapoint_circuits, commands, operator_data],
y=expected_outputs,
epochs=30,
verbose=1)
plt.plot(history.history['loss'])
plt.title("Learning to Control a Qubit")
plt.xlabel("Iterations")
plt.ylabel("Error in Control")
plt.show()
Explanation: 3.3 Training
Now that you have your new inputs and outputs you can train once again using keras.
End of explanation
controller.predict(np.array([0,1]))
Explanation: The loss has gone to zero.
The controller is available as a stand-alone model. Call the controller, and check its response to each command signal. It would take some work to correctly compare these outputs to the contents of random_rotations.
End of explanation |
508 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 2
Imports
Step2: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read
Step4: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
Step6: Write a function plot_lorentz that
Step7: Use interact to explore your plot_lorenz function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from ipywidgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 2
Imports
End of explanation
def lorentz_derivs(yvec, t, sigma, rho, beta):
Compute the derivatives for the Lorentz system at yvec(t).
x=yvec[0]
y=yvec[1]
z=yvec[2]
dx=sigma*(y-x)
dy=x*(rho-z)-y
dz=x*y-beta*z
return np.array([dx,dy,dz])
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
Explanation: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:
$$ \frac{dx}{dt} = \sigma(y-x) $$
$$ \frac{dy}{dt} = x(\rho-z) - y $$
$$ \frac{dz}{dt} = xy - \beta z $$
The solution vector is $[x(t),y(t),z(t)]$ and $\sigma$, $\rho$, and $\beta$ are parameters that govern the behavior of the solutions.
Write a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.
End of explanation
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
t=np.linspace(0,max_time,int(250.0*max_time))
soln=odeint(lorentz_derivs,ic,t,args=(sigma,rho,beta))
return soln,t
assert True # leave this to grade solve_lorenz
Explanation: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
End of explanation
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
np.random.seed(1)
ic=np.random.rand(N,3)*30-15
plt.figure(figsize=(9,6))
# This takes the solutions of solve_lorentz in the x and z position of the
# array and uses the initial conditions of their respective positions.
for i in ic:
plt.plot(solve_lorentz(i,max_time,sigma,rho,beta)[0][:,0],solve_lorentz(i,max_time,sigma,rho,beta)[0][:,2]);
# I could not find a way to make the color mapping work
plt.xlabel('x(t)'),plt.ylabel('z(t)');
plt.title('Lorentz Parametric System')
plot_lorentz();
assert True # leave this to grade the plot_lorenz function
Explanation: Write a function plot_lorentz that:
Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.
Plot $[x(t),z(t)]$ using a line to show each trajectory.
Color each line using the hot colormap from Matplotlib.
Label your plot and choose an appropriate x and y limit.
The following cell shows how to generate colors that can be used for the lines:
End of explanation
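For reference, here is a hedged sketch of how the hot colors described above could be wired into the plotting loop (the function name plot_lorentz_colored is chosen for illustration and is not part of the graded exercise); it also calls solve_lorentz only once per initial condition.
def plot_lorentz_colored(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # Same random initial conditions as plot_lorentz above
    np.random.seed(1)
    ic = np.random.rand(N, 3)*30 - 15
    colors = plt.cm.hot(np.linspace(0, 1, N))
    plt.figure(figsize=(9, 6))
    for n, point in enumerate(ic):
        soln, t = solve_lorentz(point, max_time, sigma, rho, beta)
        plt.plot(soln[:, 0], soln[:, 2], color=colors[n])
    plt.xlabel('x(t)'), plt.ylabel('z(t)')
    plt.title('Lorentz Parametric System')

plot_lorentz_colored()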
interact(plot_lorentz, max_time=[1,10], N=[1,50], sigma=[0.0,50.0], rho=[0.0,50.0], beta=fixed(8/3));
Explanation: Use interact to explore your plot_lorenz function with:
max_time an integer slider over the interval $[1,10]$.
N an integer slider over the interval $[1,50]$.
sigma a float slider over the interval $[0.0,50.0]$.
rho a float slider over the interval $[0.0,50.0]$.
beta fixed at a value of $8/3$.
End of explanation |
509 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cervix EDA
In this competition we have a multi-class classification problem with three classes. We are asked, given an image, to identify the cervix type.
From the data description
Step1: We are given training images for each of cervix types. Lets first count them for each class.
Step2: Image types
Now that we have the data in a handy dataframe we can do a few aggregations on the data. Let us first see how many images there are for each cervix type and which file types they have.
All files are in JPG format and Type 2 is the most common one with a little bit more than 50% in the training data in total, Type 1 on the other hand has a little bit less than 20% in the training data.
Step3: Now, let's read the files for each type to get an idea of what the images look like.
The images seem to vary a lot in their formats; the first two samples have only a circular area with the actual image, while the last sample has the image in a rectangle.
Step4: Additional images
Step5: All images | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from skimage.io import imread, imshow
import cv2
%matplotlib inline
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.tools as tls
from subprocess import check_output
print(check_output(["ls", "../input/train"]).decode("utf8"))
Explanation: Cervix EDA
In this competition we have a multi-class classification problem with three classes. We are asked, given an image, to identify the cervix type.
From the data description:
In this competition, you will develop algorithms to correctly classify cervix types based on cervical images. These different types of cervix in our data set are all considered normal (not cancerous), but since the transformation zones aren't always visible, some of the patients require further testing while some don't. This decision is very important for the healthcare provider and critical for the patient. Identifying the transformation zones is not an easy task for the healthcare providers, therefore, an algorithm-aided decision will significantly improve the quality and efficiency of cervical cancer screening for these patients.
The submission format is asking for a probability for each of the three different cervix types.
In this notebook we will be looking at:
basic dataset stats like number of samples per class, image sizes
different embeddings of RGB image space
pairwise distances and a clustermap of images in RGB space
(linear) model selection with basic multi class evaluation metrics.
If you like this kernel, please give an upvote, thanks! :)
End of explanation
from glob import glob
basepath = '../input/train/'
all_cervix_images = []
for path in sorted(glob(basepath + "*")):
cervix_type = path.split("/")[-1]
cervix_images = sorted(glob(basepath + cervix_type + "/*"))
all_cervix_images = all_cervix_images + cervix_images
all_cervix_images = pd.DataFrame({'imagepath': all_cervix_images})
all_cervix_images['filetype'] = all_cervix_images.apply(lambda row: row.imagepath.split(".")[-1], axis=1)
all_cervix_images['type'] = all_cervix_images.apply(lambda row: row.imagepath.split("/")[-2], axis=1)
all_cervix_images.head()
Explanation: We are given training images for each of cervix types. Lets first count them for each class.
End of explanation
print('We have a total of {} images in the whole dataset'.format(all_cervix_images.shape[0]))
type_aggregation = all_cervix_images.groupby(['type', 'filetype']).agg('count')
type_aggregation_p = type_aggregation.apply(lambda row: 1.0*row['imagepath']/all_cervix_images.shape[0], axis=1)
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(10, 8))
type_aggregation.plot.barh(ax=axes[0])
axes[0].set_xlabel("image count")
type_aggregation_p.plot.barh(ax=axes[1])
axes[1].set_xlabel("training size fraction")
Explanation: Image types
Now that we have the data in a handy dataframe we can do a few aggregations on the data. Let us first see how many images there are for each cervix type and which file types they have.
All files are in JPG format and Type 2 is the most common one with a little bit more than 50% in the training data in total, Type 1 on the other hand has a little bit less than 20% in the training data.
End of explanation
fig = plt.figure(figsize=(12,8))
i = 1
for t in all_cervix_images['type'].unique():
ax = fig.add_subplot(1,3,i)
i+=1
f = all_cervix_images[all_cervix_images['type'] == t]['imagepath'].values[0]
plt.imshow(plt.imread(f))
plt.title('sample for cervix {}'.format(t))
Explanation: Now, let's read the files for each type to get an idea of what the images look like.
The images seem to vary a lot in their formats; the first two samples have only a circular area with the actual image, while the last sample has the image in a rectangle.
End of explanation
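The intro above also promises basic stats such as image sizes; a small sketch along those lines is shown below (the sample of 5 files per type is an arbitrary choice to keep it fast, not something from the original kernel).
sizes = []
for t in all_cervix_images['type'].unique():
    for f in all_cervix_images[all_cervix_images['type'] == t]['imagepath'].values[:5]:
        h, w = plt.imread(f).shape[:2]
        sizes.append({'type': t, 'height': h, 'width': w})
# Min/max dimensions per cervix type for the sampled files
pd.DataFrame(sizes).groupby('type').agg(['min', 'max'])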
print(check_output(["ls", "../input/additional"]).decode("utf8"))
basepath = '../input/additional/'
all_cervix_images_a = []
for path in sorted(glob(basepath + "*")):
cervix_type = path.split("/")[-1]
cervix_images = sorted(glob(basepath + cervix_type + "/*"))
all_cervix_images_a = all_cervix_images_a + cervix_images
all_cervix_images_a = pd.DataFrame({'imagepath': all_cervix_images_a})
all_cervix_images_a['filetype'] = all_cervix_images_a.apply(lambda row: row.imagepath.split(".")[-1], axis=1)
all_cervix_images_a['type'] = all_cervix_images_a.apply(lambda row: row.imagepath.split("/")[-2], axis=1)
all_cervix_images_a.head()
print('We have a total of {} images in the whole dataset'.format(all_cervix_images_a.shape[0]))
type_aggregation = all_cervix_images_a.groupby(['type', 'filetype']).agg('count')
type_aggregation_p = type_aggregation.apply(lambda row: 1.0*row['imagepath']/all_cervix_images_a.shape[0], axis=1)
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(10, 8))
type_aggregation.plot.barh(ax=axes[0])
axes[0].set_xlabel("image count")
type_aggregation_p.plot.barh(ax=axes[1])
axes[1].set_xlabel("training size fraction")
fig = plt.figure(figsize=(12,8))
i = 1
for t in all_cervix_images_a['type'].unique():
ax = fig.add_subplot(1,3,i)
i+=1
f = all_cervix_images_a[all_cervix_images_a['type'] == t]['imagepath'].values[0]
plt.imshow(plt.imread(f))
plt.title('sample for cervix {}'.format(t))
Explanation: Additional images
End of explanation
all_cervix_images_ = pd.concat( [all_cervix_images, all_cervix_images_a], join='outer' )
#all_cervix_images_ = all_cervix_images.append(all_cervix_images_a)
#all_cervix_images_a.merge(all_cervix_images,how='left')
#all_cervix_images_ = pd.DataFrame({'imagepath': all_cervix_images_})
#all_cervix_images_['filetype'] = all_cervix_images_.apply(lambda row: row.imagepath.split(".")[-1], axis=1)
#all_cervix_images_['type'] = all_cervix_images_.apply(lambda row: row.imagepath.split("/")[-2], axis=1)
#all_cervix_images_.head()
print(all_cervix_images_)
print('We have a total of {} images in the whole dataset'.format(all_cervix_images_.shape[0]))
type_aggregation = all_cervix_images_.groupby(['type', 'filetype']).agg('count')
type_aggregation_p = type_aggregation.apply(lambda row: 1.0*row['imagepath']/all_cervix_images_.shape[0], axis=1)
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(10, 8))
type_aggregation.plot.barh(ax=axes[0])
axes[0].set_xlabel("image count")
type_aggregation_p.plot.barh(ax=axes[1])
axes[1].set_xlabel("training size fraction")
fig = plt.figure(figsize=(12,8))
i = 1
for t in all_cervix_images_['type'].unique():
ax = fig.add_subplot(1,3,i)
i+=1
f = all_cervix_images_[all_cervix_images_['type'] == t]['imagepath'].values[0]
plt.imshow(plt.imread(f))
plt.title('sample for cervix {}'.format(t))
Explanation: All images
End of explanation |
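As a rough step toward the pairwise distances in RGB space promised in the introduction, here is a hedged sketch: it resizes a small per-type sample to a fixed 32x32 grid and compares the flattened vectors (the sample size and grid size are assumptions made to keep it cheap).
from scipy.spatial.distance import pdist, squareform
sample = all_cervix_images_.groupby('type').head(10)
vectors = np.array([cv2.resize(plt.imread(f), (32, 32)).flatten() for f in sample['imagepath']])
dist = squareform(pdist(vectors.astype(np.float64)))
plt.imshow(dist, cmap='viridis')
plt.colorbar(label='pairwise distance in RGB space')
plt.title('Pairwise distances between sampled images')
plt.show()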
510 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Building the LSTM model for Language Modeling
Now that we know exactly what we are doing, we can start building our model using TensorFlow. The very first thing we need to do is download and extract the simple-examples dataset, which can be done by executing the code cell below.
Step2: Additionally, for the sake of making it easy to play around with the model's hyperparameters, we can declare them beforehand. Feel free to change these -- you will see a difference in performance each time you change those!
Step3: Some clarifications for LSTM architecture based on the argumants
Step4: Lets just read one mini-batch now and feed our network
Step5: Lets look at 3 sentences of our input x
Step6: we define 2 place holders to feed them with mini-batchs, that is x and y
Step7: lets defin a dictionary, and use it later to feed the placeholders with our first mini-batch
Step8: For example, we can use it to feed _input_data
Step9: In this step, we create the stacked LSTM, which is a 2 layer LSTM network
Step10: Also, we initialize the states of the nework
Step11: lets look at the states, though they are all zero for now
Step12: Embeddings
We create the embeddings for our input data. embedding is dictionary of [10000x200] for all 10000 unique words.
Step13: embedding_lookup goes to each row of input_data, and for each word in the row/sentence, finds the correspond vector in embedding.
It creates a [3020200] matrix, so, the first elemnt of inputs (the first sentence), is a matrix of 20x200, which each row of it is vector representing a word in the sentence.
Step14: Constructing Recurrent Neural Networks
tf.nn.dynamicrnn() creates a recurrent neural network using stacked_lstm which is an instance of RNNCell.
The input should be a Tensor of shape
Step15: so, lets look at the outputs. The output of the stackedLSTM comes from 200 hidden_layer, and in each time step(=20), one of them get activated. we use the linear activation to map the 200 hidden layer to a [?x10 matrix]
Step16: Lets reshape the output tensor from [30 x 20 x 200] to [600 x 200]
Step17: logistic unit
Now, we create a logistic unit to return the probability of the output word. That is, mapping the 600
Softmax = [600 x 200]* [200 x 1000]+ [1 x 1000] -> [600 x 1000]
Step18: Prediction
The maximum probablity
Step19: So, what is the ground truth for the first word of first sentence?
Step20: Also, you can get it from target tensor, if you want to find the embedding vector
Step21: It is time to compare logit with target
Step22: Objective function
Now we want to define our objective function. Our objective is to minimize loss function, that is, to minimize the average negative log probability of the target words
Step23: loss is a 1D batch-sized float Tensor [600x1]
Step24: Now, lets store the new state as final state
Step25: Training
To do gradient clipping in TensorFlow we have to take the following steps
Step26: 2. Trainable Variables
Definining a variable, if you passed trainable=True, the Variable() constructor automatically adds new variables to the graph collection GraphKeys.TRAINABLE_VARIABLES. Now, using tf.trainable_variables() you can get all variables created with trainable=True.
Step27: we can find the name and scope of all variables
Step28: 3. Calculate the gradients based on the loss function
Step29: Gradient
Step30: The tf.gradients() function allows you to compute the symbolic gradient of one tensor with respect to one or more other tensors—including variables. tf.gradients(func,xs) constructs symbolic partial derivatives of sum of func w.r.t. x in xs.
Now, lets look at the derivitive w.r.t. var_x
Step31: the derivitive w.r.t. var_y
Step32: Now, we can look at gradients w.r.t all variables
Step33: now, we have a list of tensors, t-list. We can use it to find clipped tensors. clip_by_global_norm clips values of multiple tensors by the ratio of the sum of their norms.
clip_by_global_norm get t-list as input and returns 2 things
Step34: 4. Apply the optimizer to the variables / gradients tuple.
Step35: We learned how the model is build step by step. Noe, let's then create a Class that represents our model. This class needs a few things
Step36: With that, the actual structure of our Recurrent Neural Network with Long Short-Term Memory is finished. What remains for us to do is to actually create the methods to run through time -- that is, the run_epoch method to be run at each epoch and a main script which ties all of this together.
What our run_epoch method should do is take our input data and feed it to the relevant operations. This will return at the very least the current result for the cost function.
Step37: Now, we create the main method to tie everything together. The code here reads the data from the directory, using the reader helper module, and then trains and evaluates the model on both a testing and a validating subset of data. | Python Code:
import time
import numpy as np
import tensorflow as tf
import os
print('TensorFlow version: ', tf.__version__)
tf.reset_default_graph()
if not os.path.isfile('./penn_treebank_reader.py'):
print('Downloading penn_treebank_reader.py...')
!wget -q -O ../../data/Penn_Treebank/ptb.zip https://ibm.box.com/shared/static/z2yvmhbskc45xd2a9a4kkn6hg4g4kj5r.zip
    !unzip -o ../../data/Penn_Treebank/ptb.zip -d ../../data/Penn_Treebank
!cp ../../data/Penn_Treebank/ptb/reader.py ./penn_treebank_reader.py
else:
print('Using local penn_treebank_reader.py...')
import penn_treebank_reader as reader
Explanation: <a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/jvcqp2iy2jlx2b32rmzdt0tx8lvxgzkp.png" width = 300, align = "center"></a>
<h1 align=center><font size = 5>RECURRENT NETWORKS and LSTM IN DEEP LEARNING</font></h1>
Applying Recurrent Neural Networks/LSTM for Language Modelling
Hello and welcome to this part. In this notebook, we will go over the topic of what Language Modelling is and create a Recurrent Neural Network model based on the Long Short-Term Memory unit to train and be benchmarked by the Penn Treebank. By the end of this notebook, you should be able to understand how TensorFlow builds and executes a RNN model for Language Modelling.
The Objective
By now, you should have an understanding of how Recurrent Networks work -- a specialized model to process sequential data by keeping track of the "state" or context. In this notebook, we go over a TensorFlow code snippet for creating a model focused on Language Modelling -- a very relevant task that is the cornerstone of many different linguistic problems such as Speech Recognition, Machine Translation and Image Captioning. For this, we will be using the Penn Treebank, which is an often-used dataset for benchmarking Language Modelling models.
What exactly is Language Modelling?
Language Modelling, to put it simply, is the task of assigning probabilities to sequences of words. This means that, given a context of one or a few words in the language the model was trained on, the model should have a knowledge of what are the most probable words or sequence of words for the sentence. Language Modelling is one of the tasks under Natural Language Processing, and one of the most important.
<img src=https://ibm.box.com/shared/static/1d1i5gub6wljby2vani2vzxp0xsph702.png width="768"/>
<center>Example of a sentence being predicted</center>
In this example, one can see the predictions for the next word of a sentence, given the context "This is an". As you can see, this boils down to a sequential data analysis task -- you are given a word or a sequence of words (the input data), and, given the context (the state), you need to find out what is the next word (the prediction). This kind of analysis is very important for language-related tasks such as Speech Recognition, Machine Translation, Image Captioning, Text Correction and many other very relevant problems.
<img src=https://ibm.box.com/shared/static/az39idf9ipfdpc5ugifpgxnydelhyf3i.png width="1080"/>
<center>The above example schematized as an RNN in execution</center>
As the above image shows, Recurrent Network models fit this problem like a glove. Alongside LSTM and its capacity to maintain the model's state for over one thousand time steps, we have all the tools we need to undertake this problem. The goal for this notebook is to create a model that can reach low levels of perplexity on our desired dataset.
For Language Modelling problems, perplexity is the way to gauge efficiency. Perplexity is simply a measure of how well a probabilistic model is able to predict its sample. A higher-level way to explain this would be saying that low perplexity means a higher degree of trust in the predictions the model makes. Therefore, the lower perplexity is, the better.
The Penn Treebank dataset
Historically, datasets big enough for Natural Language Processing are hard to come by. This is in part due to the necessity of the sentences to be broken down and tagged with a certain degree of correctness -- or else the models trained on it won't be able to be correct at all. This means that we need a large amount of data, annotated by or at least corrected by humans. This is, of course, not an easy task at all.
The Penn Treebank, or PTB for short, is a dataset maintained by the University of Pennsylvania. It is huge -- there are over four million and eight hundred thousand annotated words in it, all corrected by humans. It is composed of many different sources, from abstracts of Department of Energy papers to texts from the Library of America. Since it is verifiably correct and of such a huge size, the Penn Treebank has been used time and time again as a benchmark dataset for Language Modelling.
The dataset is divided in different kinds of annotations, such as Piece-of-Speech, Syntactic and Semantic skeletons. For this example, we will simply use a sample of clean, non-annotated words (with the exception of one tag -- <unk>, which is used for rare words such as uncommon proper nouns) for our model. This means that we just want to predict what the next words would be, not what they mean in context or their classes on a given sentence.
<br/>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<center>the percentage of lung cancer deaths among the workers at the west `<unk>` mass. paper factory appears to be the highest for any asbestos workers studied in western industrialized countries he said
the plant which is owned by `<unk>` & `<unk>` co. was under contract with `<unk>` to make the cigarette filters
the finding probably will support those who argue that the u.s. should regulate the class of asbestos including `<unk>` more `<unk>` than the common kind of asbestos `<unk>` found in most schools and other buildings dr. `<unk>` said</center>
</div>
<center>Example of text from the dataset we are going to use, ptb.train</center>
<br/>
<h2>Word Embeddings</h2>
For better processing, in this example, we will make use of word embeddings, which are a way of representing sentence structures or words as n-dimensional vectors (where n is a reasonably high number, such as 200 or 500) of real numbers. Basically, we will assign each word a randomly-initialized vector, and input those into the network to be processed. After a number of iterations, these vectors are expected to assume values that help the network to correctly predict what it needs to -- in our case, the probable next word in the sentence. This is shown to be very effective in Natural Language Processing tasks, and is a commonplace practice.
<br/><br/>
<font size = 4><strong>
$$Vec("Example") = [0.02, 0.00, 0.00, 0.92, 0.30,...]$$
</font></strong>
<br/>
Word Embedding tends to group up similarly used words reasonably together in the vectorial space. For example, if we use T-SNE (a dimensional reduction visualization algorithm) to flatten the dimensions of our vectors into a 2-dimensional space and use the words these vectors represent as their labels, we might see something like this:
<img src=https://ibm.box.com/shared/static/bqhc5dg879gcoabzhxra1w8rkg3od1cu.png width="800"/>
<center>T-SNE Mockup with clusters marked for easier visualization</center>
As you can see, words that are frequently used together, in place of each other, or in the same places as them tend to be grouped together -- being closer together the higher these correlations are. For example, "None" is pretty semantically close to "Zero", while a phrase that uses "Italy" can probably also fit "Germany" in it, with little damage to the sentence structure. A vectorial "closeness" for similar words like this is a great indicator of a well-built model.
We need to import the necessary modules for our code. We need numpy and tensorflow, obviously. Additionally, we can import directly the tensorflow.models.rnn.rnn model, which includes the function for building RNNs, and tensorflow.models.rnn.ptb.reader which is the helper module for getting the input data from the dataset we just downloaded.
If you want to learn more take a look at https://www.tensorflow.org/versions/r0.11/api_docs/python/rnn_cell/
<br/>
End of explanation
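To make the perplexity metric above concrete, it is simply the exponential of the average negative log-probability the model assigns to the true next words,
$$ \text{perplexity} = e^{\text{loss}} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}\right), $$
so a model that gave every true word probability 0.01 would score a perplexity of 100, while perfect predictions give a perplexity of 1 -- which is why lower is better.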
if not os.path.isfile('../../data/Penn_Treebank/simple_examples.tgz'):
    !wget -O ../../data/Penn_Treebank/simple_examples.tgz http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz
    !tar xzf ../../data/Penn_Treebank/simple_examples.tgz -C ../../data/Penn_Treebank/
Explanation: Building the LSTM model for Language Modeling
Now that we know exactly what we are doing, we can start building our model using TensorFlow. The very first thing we need to do is download and extract the simple-examples dataset, which can be done by executing the code cell below.
End of explanation
#Initial weight scale
init_scale = 0.1
#Initial learning rate
learning_rate = 1.0
#Maximum permissible norm for the gradient (For gradient clipping -- another measure against Exploding Gradients)
max_grad_norm = 5
#The number of layers in our model
num_layers = 2
#The total number of recurrence steps, also known as the number of layers when our RNN is "unfolded"
num_steps = 20
#The number of processing units (neurons) in the hidden layers
hidden_size = 200
#The maximum number of epochs trained with the initial learning rate
max_epoch = 4
#The total number of epochs in training
max_max_epoch = 13
#The probability for keeping data in the Dropout Layer (This is an optimization, but is outside our scope for this notebook!)
#At 1, we ignore the Dropout Layer wrapping.
keep_prob = 1
#The decay for the learning rate
decay = 0.5
#The size for each batch of data
batch_size = 30
#The size of our vocabulary
vocab_size = 10000
#Training flag to separate training from testing
is_training = 1
#Data directory for our dataset
data_dir = "../../data/Penn_Treebank/simple-examples/data/"
Explanation: Additionally, for the sake of making it easy to play around with the model's hyperparameters, we can declare them beforehand. Feel free to change these -- you will see a difference in performance each time you change those!
End of explanation
session=tf.InteractiveSession()
# Reads the data and separates it into training data, validation data and testing data
raw_data = reader.ptb_raw_data(data_dir)
train_data, valid_data, test_data, _ = raw_data
Explanation: Some clarifications for LSTM architecture based on the arguments:
Network structure:
- In this network, the number of LSTM cells are 2. To give the model more expressive power, we can add multiple layers of LSTMs to process the data. The output of the first layer will become the input of the second and so on.
- The recurrence steps is 20, that is, when our RNN is "Unfolded", the recurrence step is 20.
- the structure is like:
- 200 input units -> [200x200] Weight -> 200 Hidden units (first layer) -> [200x200] Weight matrix -> 200 Hidden units (second layer) -> [200] weight Matrix -> 200 unit output
Hidden layer:
- Each LSTM has 200 hidden units, which is equivalent to the dimensionality of the embedded words and the output.
Input layer:
- The network has 200 input units.
- Suppose each word is represented by an embedding vector of dimensionality e=200. The input layer of each cell will have 200 linear units. These e=200 linear units are connected to each of the h=200 LSTM units in the hidden layer (assuming there is only one hidden layer, though our case has 2 layers).
- The input shape is [batch_size, num_steps], that is [30x20]. It will turn into [30x20x200] after embedding, and then 20x[30x200]
There is a lot to be done and a ton of information to process at the same time, so go over this code slowly. It may seem complex at first, but if you try to ally what you just learned about language modelling to the code you see, you should be able to understand it.
This code is adapted from the PTBModel example bundled with the TensorFlow source code.
Train data
The story starts from data:
- Train data is a list of words, represented by numbers - N=929589 numbers, e.g. [9971, 9972, 9974, 9975,...]
- We read the data in mini-batches of size b=30. Assuming each sentence is num_steps=20 words long, it takes roughly N/(b*num_steps) ≈ 1549 iterations for the learner to go through all sentences once, so the number of iterations per epoch is about 1549.
- Each batch data is read from train dataset of size 600, and shape of [30x20]
First we start an interactive session:
End of explanation
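As a quick sanity check on the numbers above (N word ids in the training list, and the per-epoch iteration count), here is a small hedged sketch; the iteration formula mirrors how the standard PTB reader's ptb_iterator slices the data, which we assume also holds for the bundled copy.
print('Total words in the training set:', len(train_data))
print('First ten word ids:', train_data[0:10])
print('Approximate mini-batch iterations per epoch:',
      (len(train_data) // batch_size - 1) // num_steps)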
itera = reader.ptb_iterator(train_data, batch_size, num_steps)
first_touple=next(itera)
x=first_touple[0]
y=first_touple[1]
x.shape
Explanation: Let's just read one mini-batch now and feed our network:
End of explanation
x[0:3]
size = hidden_size
Explanation: Lets look at 3 sentences of our input x:
End of explanation
_input_data = tf.placeholder(tf.int32, [batch_size, num_steps]) #[30x20]
_targets = tf.placeholder(tf.int32, [batch_size, num_steps]) #[30x20]
Explanation: we define 2 place holders to feed them with mini-batchs, that is x and y:
End of explanation
feed_dict={_input_data:x, _targets:y}
Explanation: Let's define a dictionary, and use it later to feed the placeholders with our first mini-batch:
End of explanation
session.run(_input_data,feed_dict)
Explanation: For example, we can use it to feed _input_data:
End of explanation
stacked_lstm = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(hidden_size, forget_bias=0.0)
for _ in range(num_layers)]
)
Explanation: In this step, we create the stacked LSTM, which is a 2 layer LSTM network:
End of explanation
_initial_state = stacked_lstm.zero_state(batch_size, tf.float32)
_initial_state
Explanation: Also, we initialize the states of the network:
_initial_state
For each LSTM, there are 2 state matrices, c_state and m_state, which represent the "Memory State" and the "Cell State". Each hidden layer has a vector of size 30 which keeps the states; so, for the 200 hidden units in each LSTM, we have a matrix of size [30x200].
End of explanation
session.run(_initial_state,feed_dict)
Explanation: lets look at the states, though they are all zero for now:
End of explanation
try:
embedding = tf.get_variable("embedding", [vocab_size, hidden_size]) #[10000x200]
except ValueError:
pass
embedding.get_shape().as_list()
session.run(tf.global_variables_initializer())
session.run(embedding, feed_dict)
Explanation: Embeddings
We create the embeddings for our input data. embedding is dictionary of [10000x200] for all 10000 unique words.
End of explanation
# Define where to get the data for our embeddings from
inputs = tf.nn.embedding_lookup(embedding, _input_data) #shape=(30, 20, 200)
inputs
session.run(inputs[0], feed_dict)
Explanation: embedding_lookup goes to each row of input_data, and for each word in the row/sentence, finds the correspond vector in embedding.
It creates a [30x20x200] matrix, so the first element of inputs (the first sentence) is a matrix of 20x200, in which each row is a vector representing a word in the sentence.
End of explanation
outputs, new_state = tf.nn.dynamic_rnn(stacked_lstm, inputs, initial_state=_initial_state)
Explanation: Constructing Recurrent Neural Networks
tf.nn.dynamic_rnn() creates a recurrent neural network using stacked_lstm, which is an instance of RNNCell.
The input should be a Tensor of shape: [batch_size, max_time, ...], in our case it would be (30, 20, 200)
This method, returns a pair (outputs, new_state) where:
- outputs is a length T list of outputs (one for each input), or a nested tuple of such elements.
- new_state is the final state
End of explanation
outputs
session.run(tf.global_variables_initializer())
session.run(outputs[0], feed_dict)
Explanation: So, let's look at the outputs. The output of the stacked LSTM comes from its 200 hidden units at each of the 20 time steps; below we use a linear transformation to map those 200 hidden units onto the 10,000-word vocabulary.
End of explanation
output = tf.reshape(outputs, [-1, size])
output
session.run(output[0], feed_dict)
Explanation: Lets reshape the output tensor from [30 x 20 x 200] to [600 x 200]
End of explanation
softmax_w = tf.get_variable("softmax_w", [size, vocab_size]) #[200x10000]
softmax_b = tf.get_variable("softmax_b", [vocab_size]) #[1x10000]
logits = tf.matmul(output, softmax_w) + softmax_b
session.run(tf.global_variables_initializer())
logi = session.run(logits, feed_dict)
logi.shape
First_word_output_probablity = logi[0]
First_word_output_probablity.shape
Explanation: logistic unit
Now, we create a logistic unit to return the probability of the output word, that is, mapping the [600 x 200] output onto a [600 x 10000] matrix of vocabulary scores:
Softmax = [600 x 200] * [200 x 10000] + [1 x 10000] -> [600 x 10000]
End of explanation
embedding_array= session.run(embedding, feed_dict)
np.argmax(First_word_output_probablity)
Explanation: Prediction
The maximum probability
End of explanation
y[0][0]
Explanation: So, what is the ground truth for the first word of first sentence?
End of explanation
_targets
Explanation: Also, you can get it from target tensor, if you want to find the embedding vector:
End of explanation
targ = session.run(tf.reshape(_targets, [-1]), feed_dict)
first_word_target_code= targ[0]
first_word_target_code
first_word_target_vec = session.run( tf.nn.embedding_lookup(embedding, targ[0]))
first_word_target_vec
Explanation: It is time to compare logit with target
End of explanation
loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example([logits], [tf.reshape(_targets, [-1])],[tf.ones([batch_size * num_steps])])
Explanation: Objective function
Now we want to define our objective function. Our objective is to minimize loss function, that is, to minimize the average negative log probability of the target words:
$$\text{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}$$
This function is already implemented and available in TensorFlow through sequence_loss_by_example, so we can just use it here. sequence_loss_by_example is weighted cross-entropy loss for a sequence of logits (per example).
Its arguments:
logits: List of 2D Tensors of shape [batch_size x num_decoder_symbols].
targets: List of 1D batch-sized int32 Tensors of the same length as logits.
weights: List of 1D batch-sized float-Tensors of the same length as logits.
End of explanation
session.run(loss, feed_dict)
cost = tf.reduce_sum(loss) / batch_size
session.run(tf.global_variables_initializer())
session.run(cost, feed_dict)
Explanation: loss is a 1D float Tensor of length 600: the negative log-probability (log-perplexity) for each target word position.
End of explanation
#
final_state = new_state
Explanation: Now, let's store the new state as the final state
End of explanation
# Create a variable for the learning rate
lr = tf.Variable(0.0, trainable=False)
# Create the gradient descent optimizer with our learning rate
optimizer = tf.train.GradientDescentOptimizer(lr)
Explanation: Training
To do gradient clipping in TensorFlow we have to take the following steps:
Define the optimizer.
Extract variables that are trainable.
Calculate the gradients based on the loss function.
Apply the optimizer to the variables / gradients tuple.
1. Define Optimizer
GradientDescentOptimizer constructs a new gradient descent optimizer. Later, we use the constructed optimizer to compute gradients for the loss and apply those gradients to the variables.
End of explanation
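For orientation, the four steps combine into the short pipeline below. This is a condensed sketch of what the next cells build one piece at a time; the sketch_ names are placeholders added here, and the learning rate is an arbitrary value (the real code assigns it through a variable):
# 1. Define the optimizer.
sketch_optimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0)
# 2. Extract the trainable variables.
sketch_tvars = tf.trainable_variables()
# 3. Compute gradients of the cost w.r.t. those variables and clip them by global norm.
sketch_grads, _ = tf.clip_by_global_norm(tf.gradients(cost, sketch_tvars), max_grad_norm)
# 4. Apply the (gradient, variable) pairs to update the weights.
sketch_train_op = sketch_optimizer.apply_gradients(zip(sketch_grads, sketch_tvars))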
# Get all TensorFlow variables marked as "trainable" (i.e. all of them except _lr, which we just created)
tvars = tf.trainable_variables()
tvars
Explanation: 2. Trainable Variables
Definining a variable, if you passed trainable=True, the Variable() constructor automatically adds new variables to the graph collection GraphKeys.TRAINABLE_VARIABLES. Now, using tf.trainable_variables() you can get all variables created with trainable=True.
End of explanation
tvars=tvars[3:]
[v.name for v in tvars]
Explanation: we can find the name and scope of all variables:
End of explanation
cost
tvars
Explanation: 3. Calculate the gradients based on the loss function
End of explanation
var_x = tf.placeholder(tf.float32)
var_y = tf.placeholder(tf.float32)
func_test = 2.0*var_x*var_x + 3.0*var_x*var_y
session.run(tf.global_variables_initializer())
feed={var_x:1.0,var_y:2.0}
session.run(func_test, feed)
Explanation: Gradient:
The gradient of a function is its rate of change. It's a vector (a direction to move) that points in the direction of greatest increase of the function, and it is calculated by taking derivatives.
First, let's recall how gradients work using a toy example:
$$ z=\left(2x^2+3xy\right)$$
End of explanation
var_grad = tf.gradients(func_test, [var_x])
session.run(var_grad,feed)
Explanation: The tf.gradients() function allows you to compute the symbolic gradient of one tensor with respect to one or more other tensors—including variables. tf.gradients(func, xs) constructs symbolic partial derivatives of the sum of func w.r.t. each x in xs.
Now, let's look at the derivative w.r.t. var_x:
$$ \frac{\partial \:}{\partial \:x}\left(2x^2+3xy\right)=4x+3y $$
End of explanation
var_grad = tf.gradients(func_test, [var_y])
session.run(var_grad,feed)
Explanation: The derivative w.r.t. var_y:
$$ \frac{\partial \:}{\partial \:y}\left(2x^2+3xy\right)=3x $$
End of explanation
tf.gradients(cost, tvars)
grad_t_list = tf.gradients(cost, tvars)
#sess.run(grad_t_list,feed_dict)
Explanation: Now, we can look at gradients w.r.t all variables:
End of explanation
max_grad_norm
# Define the gradient clipping threshold
grads, _ = tf.clip_by_global_norm(grad_t_list, max_grad_norm)
grads
session.run(grads,feed_dict)
Explanation: Now we have a list of tensors, t_list. We can use it to obtain the clipped tensors. clip_by_global_norm clips the values of multiple tensors by the ratio of a threshold to their global norm.
clip_by_global_norm takes t_list as input and returns 2 things:
- a list of clipped tensors, the so-called list_clipped
- the global norm (global_norm) of all tensors in t_list
End of explanation
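As a small numeric illustration of what global-norm clipping does (an added sketch in plain NumPy, independent of the graph above):
import numpy as np
g1, g2 = np.array([3.0, 4.0]), np.array([12.0])   # two toy gradient tensors
clip_norm = 5.0
# Global norm = sqrt of the sum of squared entries across all tensors = sqrt(9 + 16 + 144) = 13.
global_norm = np.sqrt(sum(np.sum(g ** 2) for g in (g1, g2)))
# Each tensor is rescaled by clip_norm / max(global_norm, clip_norm); here 5/13.
scale = clip_norm / max(global_norm, clip_norm)
clipped = [g * scale for g in (g1, g2)]
print(global_norm, clipped)  # 13.0, and the tensors are shrunk so their joint norm is 5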
# Create the training TensorFlow Operation through our optimizer
train_op = optimizer.apply_gradients(zip(grads, tvars))
session.run(tf.global_variables_initializer())
session.run(train_op,feed_dict)
Explanation: 4. Apply the optimizer to the variables / gradients tuple.
End of explanation
class PTBModel(object):
def __init__(self, is_training):
######################################
# Setting parameters for ease of use #
######################################
self.batch_size = batch_size
self.num_steps = num_steps
size = hidden_size
self.vocab_size = vocab_size
###############################################################################
# Creating placeholders for our input data and expected outputs (target data) #
###############################################################################
self._input_data = tf.placeholder(tf.int32, [batch_size, num_steps]) #[30#20]
self._targets = tf.placeholder(tf.int32, [batch_size, num_steps]) #[30#20]
##########################################################################
# Creating the LSTM cell structure and connect it with the RNN structure #
##########################################################################
# Create the LSTM unit.
# This creates only the structure for the LSTM and has to be associated with a RNN unit still.
# The argument n_hidden(size=200) of BasicLSTMCell is size of hidden layer, that is, the number of hidden units of the LSTM (inside A).
# Size is the same as the size of our hidden layer, and no bias is added to the Forget Gate.
# LSTM cell processes one word at a time and computes probabilities of the possible continuations of the sentence.
lstm_cells = []
reuse = tf.get_variable_scope().reuse
for _ in range(num_layers):
cell = tf.contrib.rnn.BasicLSTMCell(size, forget_bias=0.0, reuse=reuse)
if is_training and keep_prob < 1:
# Unless you changed keep_prob, this won't actually execute -- this is a dropout wrapper for our LSTM unit
# This is an optimization of the LSTM output, but is not needed at all
cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
lstm_cells.append(cell)
# By taking in the LSTM cells as parameters, the MultiRNNCell function junctions the LSTM units to the RNN units.
# RNN cell composed sequentially of multiple simple cells.
stacked_lstm = tf.contrib.rnn.MultiRNNCell(lstm_cells)
# Define the initial state, i.e., the model state for the very first data point
# It initialize the state of the LSTM memory. The memory state of the network is initialized with a vector of zeros and gets updated after reading each word.
self._initial_state = stacked_lstm.zero_state(batch_size, tf.float32)
####################################################################
# Creating the word embeddings and pointing them to the input data #
####################################################################
with tf.device("/cpu:0"):
# Create the embeddings for our input data. Size is hidden size.
# Uses default variable initializer
embedding = tf.get_variable("embedding", [vocab_size, size]) #[10000x200]
# Define where to get the data for our embeddings from
inputs = tf.nn.embedding_lookup(embedding, self._input_data)
# Unless you changed keep_prob, this won't actually execute -- this is a dropout addition for our inputs
# This is an optimization of the input processing and is not needed at all
if is_training and keep_prob < 1:
inputs = tf.nn.dropout(inputs, keep_prob)
############################################
# Creating the input structure for our RNN #
############################################
# Input structure is 20x[30x200]
# Considering each word is represended by a 200 dimentional vector, and we have 30 batchs, we create 30 word-vectors of size [30xx2000]
#inputs = [tf.squeeze(input_, [1]) for input_ in tf.split(1, num_steps, inputs)]
# The input structure is fed from the embeddings, which are filled in by the input data
# Feeding a batch of b sentences to a RNN:
# In step 1, first word of each of the b sentences (in a batch) is input in parallel.
# In step 2, second word of each of the b sentences is input in parallel.
# The parallelism is only for efficiency.
# Each sentence in a batch is handled in parallel, but the network sees one word of a sentence at a time and does the computations accordingly.
# All the computations involving the words of all sentences in a batch at a given time step are done in parallel.
####################################################################################################
# Instanciating our RNN model and retrieving the structure for returning the outputs and the state #
####################################################################################################
outputs, state = tf.nn.dynamic_rnn(stacked_lstm, inputs, initial_state=self._initial_state)
#########################################################################
# Creating a logistic unit to return the probability of the output word #
#########################################################################
output = tf.reshape(outputs, [-1, size])
softmax_w = tf.get_variable("softmax_w", [size, vocab_size]) #[200x1000]
softmax_b = tf.get_variable("softmax_b", [vocab_size]) #[1x1000]
logits = tf.matmul(output, softmax_w) + softmax_b
#########################################################################
# Defining the loss and cost functions for the model's learning to work #
#########################################################################
loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example([logits], [tf.reshape(self._targets, [-1])],
[tf.ones([batch_size * num_steps])])
self._cost = cost = tf.reduce_sum(loss) / batch_size
# Store the final state
self._final_state = state
#Everything after this point is relevant only for training
if not is_training:
return
#################################################
# Creating the Training Operation for our Model #
#################################################
# Create a variable for the learning rate
self._lr = tf.Variable(0.0, trainable=False)
# Get all TensorFlow variables marked as "trainable" (i.e. all of them except _lr, which we just created)
tvars = tf.trainable_variables()
# Define the gradient clipping threshold
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), max_grad_norm)
# Create the gradient descent optimizer with our learning rate
optimizer = tf.train.GradientDescentOptimizer(self.lr)
# Create the training TensorFlow Operation through our optimizer
self._train_op = optimizer.apply_gradients(zip(grads, tvars))
# Helper functions for our LSTM RNN class
# Assign the learning rate for this model
def assign_lr(self, session, lr_value):
session.run(tf.assign(self.lr, lr_value))
# Returns the input data for this model at a point in time
@property
def input_data(self):
return self._input_data
# Returns the targets for this model at a point in time
@property
def targets(self):
return self._targets
# Returns the initial state for this model
@property
def initial_state(self):
return self._initial_state
# Returns the defined Cost
@property
def cost(self):
return self._cost
# Returns the final state for this model
@property
def final_state(self):
return self._final_state
# Returns the current learning rate for this model
@property
def lr(self):
return self._lr
# Returns the training operation defined for this model
@property
def train_op(self):
return self._train_op
Explanation: We learned how the model is built step by step. Now, let's create a class that represents our model. This class needs a few things:
- We have to create the model in accordance with our defined hyperparameters
- We have to create the placeholders for our input data and expected outputs (the real data)
- We have to create the LSTM cell structure and connect them with our RNN structure
- We have to create the word embeddings and point them to the input data
- We have to create the input structure for our RNN
- We have to instantiate our RNN model and retrieve the variable in which we should expect our outputs to appear
- We need to create a logistic structure to return the probability of our words
- We need to create the loss and cost functions for our optimizer to work, and then create the optimizer
- And finally, we need to create a training operation that can be run to actually train our model
End of explanation
##########################################################################################################################
# run_epoch takes as parameters the current session, the model instance, the data to be fed, and the operation to be run #
##########################################################################################################################
def run_epoch(session, m, data, eval_op, verbose=False):
#Define the epoch size based on the length of the data, batch size and the number of steps
epoch_size = ((len(data) // m.batch_size) - 1) // m.num_steps
start_time = time.time()
costs = 0.0
iters = 0
#state = m.initial_state.eval()
#m.initial_state = tf.convert_to_tensor(m.initial_state)
#state = m.initial_state.eval()
state = session.run(m.initial_state)
#For each step and data point
for step, (x, y) in enumerate(reader.ptb_iterator(data, m.batch_size, m.num_steps)):
#Evaluate and return cost, state by running cost, final_state and the function passed as parameter
cost, state, _ = session.run([m.cost, m.final_state, eval_op],
{m.input_data: x,
m.targets: y,
m.initial_state: state})
#Add returned cost to costs (which keeps track of the total costs for this epoch)
costs += cost
#Add number of steps to iteration counter
iters += m.num_steps
if verbose and (step % 10) == 0:
print("({:.2%}) Perplexity={:.3f} Speed={:.0f} wps".format(
step * 1.0 / epoch_size,
np.exp(costs / iters),
iters * m.batch_size / (time.time() - start_time))
)
# Returns the Perplexity rating for us to keep track of how the model is evolving
return np.exp(costs / iters)
Explanation: With that, the actual structure of our Recurrent Neural Network with Long Short-Term Memory is finished. What remains for us to do is to actually create the methods to run through time -- that is, the run_epoch method to be run at each epoch and a main script which ties all of this together.
What our run_epoch method should do is take our input data and feed it to the relevant operations. This will return at the very least the current result for the cost function.
End of explanation
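run_epoch reports perplexity, which the code obtains as np.exp(costs / iters). For reference, perplexity is the exponential of the average negative log-probability over the T target words processed so far:
$$\text{perplexity} = \exp\left(\frac{1}{T}\sum_{t=1}^{T} -\ln p_{\text{target}_t}\right)$$
A lower perplexity means the model assigns higher probability to the actual next words.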
# Reads the data and separates it into training data, validation data and testing data
raw_data = reader.ptb_raw_data(data_dir)
train_data, valid_data, test_data, _ = raw_data
#Initializes the Execution Graph and the Session
with tf.Graph().as_default(), tf.Session() as session:
initializer = tf.random_uniform_initializer(-init_scale,init_scale)
# Instantiates the model for training
# tf.variable_scope add a prefix to the variables created with tf.get_variable
with tf.variable_scope("model", reuse=None, initializer=initializer):
m = PTBModel(is_training=True)
# Reuses the trained parameters for the validation and testing models
# They are different instances but use the same variables for weights and biases,
# they just don't change when data is input
with tf.variable_scope("model", reuse=True, initializer=initializer):
mvalid = PTBModel(is_training=False)
mtest = PTBModel(is_training=False)
#Initialize all variables
tf.global_variables_initializer().run()
# Set initial learning rate
m.assign_lr(session=session, lr_value=learning_rate)
for i in range(max_max_epoch):
print("Epoch %d : Learning rate: %.3f" % (i + 1, session.run(m.lr)))
# Run the loop for this epoch in the training model
train_perplexity = run_epoch(session, m, train_data, m.train_op,
verbose=True)
print("Epoch %d : Train Perplexity: %.3f" % (i + 1, train_perplexity))
# Run the loop for this epoch in the validation model
valid_perplexity = run_epoch(session, mvalid, valid_data, tf.no_op())
print("Epoch %d : Valid Perplexity: %.3f" % (i + 1, valid_perplexity))
# Define the decay for the next epoch
lr_decay = decay * ((max_max_epoch - i) / max_max_epoch)
# Set the decayed learning rate as the learning rate for the next epoch
m.assign_lr(session, learning_rate * lr_decay)
# Run the loop in the testing model to see how effective was our training
test_perplexity = run_epoch(session, mtest, test_data, tf.no_op())
print("Test Perplexity: %.3f" % test_perplexity)
Explanation: Now, we create the main method to tie everything together. The code here reads the data from the directory, using the reader helper module, and then trains and evaluates the model on both a testing and a validating subset of data.
End of explanation |
511 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Web-Scraping
Behind this name lies a very useful practice for anyone who wants to work with information that is available online but does not necessarily exist as a ready-made Excel table ...
Web scraping is a technique for extracting the content of websites with a computer program
Step1: A detour through the Web
Step2: A first HTML page
We will start with something easy and take a Wikipedia page, for example the one about the Ligue 1 football championship
Step3: If we print the page object created with BeautifulSoup, we see that it is no longer a character string but a true HTML page with tags. We can now look for elements inside those tags.
For example, if we want to know the title of the page, we use the .find method and ask it for "title"
Step4: The .find method only returns the first occurrence of the element
Step5: To find every occurrence, we use .findAll()
Step6: Guided exercise
Step7: We do not want to keep the first element, which does not correspond to a club but to an image.
Now, this element is the only one that has no title = "".
It is recommended to exclude the elements we are not interested in by stating the attributes a line must have, rather than excluding them based on their position in the list
Step8: Finally, the last step consists in getting the information we want, that is, in our case, the name and URL of the 20 clubs.
To do this, we will use two methods of the item element
Step9: We want to keep all this information in an Excel-style table so we can reuse it whenever we like
Step10: Web scraping exercise with BeautifulSoup
For this exercise, we ask you to get 1) the personal information of the 721 Pokémon listed on the website http
Step11: Getting news items less than one hour old from Google News
Step12: Getting news on a topic between two given dates
In reality, the Google News example could have done without Selenium and been handled directly with BeautifulSoup and the URLs we manage to guess from Google.
Here, we use the Google News URL to build a small function that, for each triple (topic, start of a period, end of a period), returns relevant links from the Google search.
Step13: Using Selenium to play 2048
In this example, we use the module so that Python presses the keyboard keys itself in order to play 2048.
Note | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Web-Scraping
Behind this name lies a very useful practice for anyone who wants to work with information that is available online but does not necessarily exist as a ready-made Excel table ...
Web scraping is a technique for extracting the content of websites with a computer program: today we will show you how to create and run these robots so you can quickly retrieve information that is useful for your current or future projects.
End of explanation
import urllib
import bs4
#help(bs4)
Explanation: A detour through the Web: how does a website work?
Even though today is not a web-development course, you still need a few basics to understand how a website works and how the information on a page is structured.
A website is a set of pages written in HTML, a language that describes both the content and the layout of a web page.
HTML
Tags
On a web page you will always find elements such as < head>, < title>, etc. These are the codes that let you structure the content of an HTML page, and they are called tags.
For example: the tags < p>, < h1>, < h2>, < h3>, < strong> or < em>.
The symbol < > opens a tag: it marks the beginning of a section. The symbol </ > marks the end of that section.
Most tags come in pairs, with an "opening tag" and a "closing tag" (for example < p> and < /p>).
Example: table tags
$$\begin{array}{rr} \hline
\text{Tag} & \text{Description} \\ \hline
< table> & \text{Table} \\
< caption> & \text{Table title} \\
< tr> & \text{Table row} \\
< th> & \text{Header cell} \\
< td> & \text{Cell} \\
< thead> & \text{Table header section} \\
< tbody> & \text{Table body section} \\
< tfoot> & \text{Table footer section} \\
\end{array}$$
Application: a table in HTML
An HTML table is built by nesting < tr> (row) and < td> (cell) tags inside a < table> element; such code renders in the browser as:
$$\begin{array}{rrr}
\text{First name} & Mike & Mister \\
\text{Last name} & Stuntman & Pink \\
\text{Profession} & Stuntman & Gangster \\
\end{array}$$
Parent and child
In HTML, the terms parent and child are used to describe elements that are nested inside one another.
When a < p> element is nested inside a < div> element, for example, we say that the < div> element is the parent of the < p> element, while the < p> element is the child of the < div> element.
But why learn all this just to scrape, you may ask?
To extract information from a website properly, you have to be able to understand its structure, and therefore its HTML code. The Python functions used for scraping are mainly built to let you navigate between tags.
Optional - CSS - the style of the web page
When a piece of HTML code is written on its own, it shows up as black text on a white background. A simple way to make the page nicer is to add some colour.
The style sheet that makes the page look better corresponds to the CSS file(s).
Every HTML page that references this external style sheet inherits all of its definitions.
We will come back to this in more detail in the tutorial on Flask (a Python module for building websites).
Scraping with Python
We will mostly use the BeautifulSoup4 package in this course, but other packages exist (Selenium, Scrapy...).
BeautifulSoup is enough when you work on static HTML pages; as soon as the information you are looking for is generated by JavaScript running in the background, you will need tools such as Selenium.
Likewise, if you do not know the URLs in advance, you will need a framework such as Scrapy, which moves easily from one page to the next ("crawling"). Scrapy is more complex to handle than BeautifulSoup: if you want more details, see the tutorial page https://doc.scrapy.org/en/latest/intro/tutorial.html.
Using BeautifulSoup
The packages used to scrape HTML pages:
- BeautifulSoup (pip install bs4)
- urllib
End of explanation
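To make the tag / parent / child vocabulary concrete, here is a small self-contained sketch (the toy HTML string is made up for the illustration) showing how BeautifulSoup exposes that structure:
import bs4
toy_html = "<html><body><div><p>Hello <strong>world</strong></p></div></body></html>"
toy_page = bs4.BeautifulSoup(toy_html, "lxml")
p_tag = toy_page.find("p")
print(p_tag.parent.name)          # 'div' -> the <div> tag is the parent of <p>
print(p_tag.find("strong").text)  # 'world' -> <strong> is a child of <p>
print(p_tag.getText())            # 'Hello world' -> all the text contained in the tag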
# Step 1: connect to the Wikipedia page and get the source code
url_ligue_1 = "https://fr.wikipedia.org/wiki/Championnat_de_France_de_football_2016-2017"
from urllib import request
request_text = request.urlopen(url_ligue_1).read()
print(request_text[:1000])
# Step 2: use the BeautifulSoup package,
# which "understands" the tags contained in the string returned by the request function
page = bs4.BeautifulSoup(request_text, "lxml")
#print(page)
Explanation: A first HTML page
We will start with something easy: let's take a Wikipedia page, for example the one for the Ligue 1 football championship:
https://fr.wikipedia.org/wiki/Championnat_de_France_de_football_2016-2017
We want to retrieve the list of teams, as well as the URLs of those teams' Wikipedia pages.
End of explanation
print(page.find("title"))
Explanation: If we print the page object created with BeautifulSoup, we see that it is no longer a character string but a true HTML page with tags. We can now look for elements inside those tags.
For example, if we want to know the title of the page, we use the .find method and ask it for "title":
End of explanation
print(page.find("table"))
Explanation: The .find method only returns the first occurrence of the element:
End of explanation
print("Il y a", len(page.findAll("table")), "éléments dans la page qui sont des <table>")
print(" Le 2eme tableau de la page : Hiérarchie \n", page.findAll("table")[1])
print("--------------------------------------------------------")
print("Le 3eme tableau de la page : Palmarès \n",page.findAll("table")[2])
Explanation: To find every occurrence, we use .findAll():
End of explanation
for item in page.find('table', {'class' : 'DebutCarte'}).findAll({'a'})[0:5] :
print(item, "\n-------")
Explanation: Guided exercise: getting the list of Ligue 1 teams
The list of teams is in the "Participants" table: in the page source, we can see that this table is the one with class = "DebutCarte".
We can also see that the tags surrounding the club names and URLs have the following form:
End of explanation
### condition on the position in the list >>>> BAD
for e, item in enumerate(page.find('table', {'class' : 'DebutCarte'}).findAll({'a'})[0:5]) :
if e == 0:
pass
else :
print(item)
#### condition on the attributes the line must have >>>> GOOD
for item in page.find('table', {'class' : 'DebutCarte'}).findAll({'a'})[0:5] :
if item.get("title") :
print(item)
Explanation: We do not want to keep the first element, which does not correspond to a club but to an image.
Now, this element is the only one that has no title = "".
It is recommended to exclude the elements we are not interested in by stating the attributes a line must have, rather than excluding them based on their position in the list.
End of explanation
for item in page.find('table', {'class' : 'DebutCarte'}).findAll({'a'})[0:5] :
if item.get("title") :
print(item.get("href"))
print(item.getText())
# to get the official club name, we would use the title attribute instead
for item in page.find('table', {'class' : 'DebutCarte'}).findAll({'a'})[0:5] :
if item.get("title") :
print(item.get("title"))
Explanation: Finally, the last step consists in getting the information we want, that is, in our case, the name and URL of the 20 clubs.
To do this, we will use two methods of the item element:
- getText(), which returns the text displayed on the web page inside the < a> tag
- get('xxxx'), which returns the value of the attribute named xxxx
In our case, we want the club name as well as the URL: so we will use getText and get("href").
End of explanation
import pandas
liste_noms = []
liste_urls = []
for item in page.find('table', {'class' : 'DebutCarte'}).findAll({'a'}) :
if item.get("title") :
liste_urls.append(item.get("href"))
liste_noms.append(item.getText())
df = pandas.DataFrame.from_dict( {"clubs" : liste_noms, 'url' : liste_urls})
df.head()
Explanation: We want to keep all this information in an Excel-style table so we can reuse it whenever we like: nothing could be simpler, we simply go through pandas, which we already master at this stage of the course.
End of explanation
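The cell above stops at df.head(); to actually keep the table for later reuse, one possibility (an added illustration with a made-up file name) is to write it to disk:
# CSV needs no extra dependency; to_excel additionally requires an engine such as openpyxl.
df.to_csv("clubs_ligue1.csv", index=False)
# df.to_excel("clubs_ligue1.xlsx", index=False)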
import selenium #pip install selenium
# download the Chrome driver from http://chromedriver.storage.googleapis.com/index.html?path=2.24/
path_to_web_driver = "./chromedriver"
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
browser = webdriver.Chrome(path_to_web_driver)
browser.get('https://news.google.com/')
# we look for the place where we can fill in a form, using the browser's tools > inspect the page elements
# we see that the search bar is an element of the code named 'q', as in query
# so we ask Selenium to find that element
search = browser.find_element_by_name('q')
# we send to that element the word we would have typed in the search bar
search.send_keys("alstom")
# we press the "Enter" key (Return in English)
search.send_keys(Keys.RETURN)
links = browser.find_elements_by_xpath("//h3[@class='r _U6c']/a[@href]")
results = []
for link in links:
url = link.get_attribute('href')
results.append(url)
### a 10-second pause so you can watch what is happening on the web page
time.sleep(10)
# we ask the browser to quit once everything is done
browser.quit()
print(results)
Explanation: Web scraping exercise with BeautifulSoup
For this exercise, we ask you to get 1) the personal information of the 721 Pokémon listed on the website http://pokemondb.net/pokedex/national
The information we would like to end up with for each Pokémon is the content of 4 tables:
- Pokédex data
- Training
- Breeding
- Base stats
For example: http://pokemondb.net/pokedex/nincada
2) We would also like you to retrieve the image of each Pokémon and save it in a folder (hint: use the request and shutil modules).
For this second question you will have to look some things up yourself; not everything is covered in this tutorial.
Going online with Selenium
The advantage of the Selenium package is that it can get information that is not in the HTML source but only appears after JavaScript has been executed in the background.
Selenium behaves like a user browsing the web: it clicks on links, fills in forms, and so on.
In this example, we will try to go to the Google News site and type a given topic into the search bar.
End of explanation
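A quick aside on part 2) of the Pokémon exercise above before moving on: the request/shutil hint refers to the classic streaming-download pattern sketched below. This is only an illustration; it uses the third-party requests library as one option, and the URL shown is a made-up placeholder rather than the real image location on the site, which is left for you to find.
import shutil
import requests

def download_image(url, filename):
    # Stream the response and copy the raw bytes straight into a local file.
    response = requests.get(url, stream=True)
    if response.status_code == 200:
        with open(filename, "wb") as out_file:
            shutil.copyfileobj(response.raw, out_file)

# Hypothetical usage; the real image URLs must be scraped from the Pokémon pages:
# download_image("http://example.com/some_pokemon.png", "images/some_pokemon.png")
With that noted, the next cells walk through Selenium, starting with the Google News example.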
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
browser = webdriver.Chrome(path_to_web_driver)
browser.get('https://news.google.com/')
search = browser.find_element_by_name('q')
# we send to that element the word we would have typed in the search bar
search.send_keys("alstom")
# we press the "Search" button
search.send_keys(Keys.RETURN)
# to get the link to the articles published less than one hour ago:
# we use what we found in the page source, namely the URL for articles less than one hour old
link = browser.find_element_by_xpath("//li[@id='qdr_h']/a[@href]").get_attribute('href')
print(link)
browser.get(link)
links = browser.find_elements_by_xpath("//h3[@class='r _U6c']/a[@href]")
results = []
for link in links:
url = link.get_attribute('href')
results.append(url)
#################################"
#print(results)
#time.sleep(5)
browser.quit()
print(results)
Explanation: Getting news items less than one hour old from Google News
End of explanation
import time
from selenium import webdriver
def get_news_specific_dates (beg_date, end_date, subject, hl = "fr", gl = "fr", tbm = "nws", authuser = "0") :
    '''For a given query and a precise time window,
    returns the first 10 press-article results
    published on the topic.'''
get_string = 'https://www.google.com/search?hl={}&gl={}&tbm={}&authuser={}&q={}&tbs=cdr%3A1%2Ccd_min%3A{}%2Ccd_max%3A{}&tbm={}'.format(hl,gl,tbm,authuser,subject,beg_date,end_date,tbm)
browser.get(get_string)
links = browser.find_elements_by_xpath("//h3[@class='r _U6c']/a[@href]")
results = []
for link in links:
url = link.get_attribute('href')
results.append(url)
browser.quit()
return results
### We call the function we just created
browser = webdriver.Chrome(path_to_web_driver)
articles_mai_2015 = get_news_specific_dates("01/05/2015","31/05/2015","société générale jerome kerviel",hl="fr")
print(articles_mai_2015)
Explanation: Getting news on a topic between two given dates
In reality, the Google News example could have done without Selenium and been handled directly with BeautifulSoup and the URLs we manage to guess from Google.
Here, we use the Google News URL to build a small function that, for each triple (topic, start of a period, end of a period), returns relevant links from the Google search.
End of explanation
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
# we open the 2048 game web page
browser = webdriver.Chrome(path_to_web_driver)
browser.get('https://gabrielecirulli.github.io/2048/')
# What we are going to do: a loop that tirelessly repeats the same thing: up / right / down / left
# we start by clicking on the page so that the keystrokes are sent to the game
browser.find_element_by_class_name('grid-container').click()
grid = browser.find_element_by_tag_name('body')
# to know which move to play at each step, we create a dictionary
direction = {0: Keys.UP, 1: Keys.RIGHT, 2: Keys.DOWN, 3: Keys.LEFT}
count = 0
while True:
    try: # we check whether the "Try again" button is there - if it is, the game is over
retryButton = browser.find_element_by_link_text('Try again')
scoreElem = browser.find_element_by_class_name('score-container')
break
except:
#Do nothing. Game is not over yet
pass
    # we keep playing - press the next key for the next move
count += 1
grid.send_keys(direction[count % 4])
time.sleep(0.1)
print('Score final : {} en {} coups'.format(scoreElem.text, count))
browser.quit()
Explanation: Using Selenium to play 2048
In this example, we use the module so that Python presses the keyboard keys itself in order to play 2048.
Note: this piece of code does not actually solve 2048; it just shows what you can do with Selenium.
End of explanation |
512 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter
Step1: Note
Step2: Lesson
Step3: Project 1
Step4: We'll create three Counter objects, one for words from postive reviews, one for words from negative reviews, and one for all the words.
Step5: TODO
Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO
Step8: Examine the ratios you've calculated for a few words
Step9: Looking closely at the values you just calculated, we see the following
Step10: Examine the new ratios you've calculated for the same words from before
Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with postive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
Step12: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
Step13: Project 2
Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
Step16: TODO
Step17: Run the following cell. It should display (1, 74074)
Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
Step20: TODO
Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
Step23: TODO
Step24: Run the following two cells. They should print out'POSITIVE' and 1, respectively.
Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
Step29: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3
Step30: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
Step31: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
Step32: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
Step33: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
Step34: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
Step35: With a learning rate of 0.001, the network should finall have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
The model in Project 3 trained far too slowly. There are many remedies we could try in a situation like this, but the basic first step is always to take another look at the data.
Step36: Looking at the result below, the first element is as large as 18, and our network has the structure shown above, where one large input element dominates all the other input elements and also influences every hidden-layer unit.
Step37: What's more, that 18 in the vector above corresponds to a meaningless token such as ''. The other large counts are probably also meaningless values such as whitespace or function words.
Step38: Dominant한 값들은 대부분 ' ', '', 'the' 같은 단어임. 즉, 단순한 count는 data가 가진 signal을 highlight 해주지 않는다.
이는 단순한 count는 noise를 많이 내포하고 있음을 의미한다.
Step42: Project 4
Step43: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
만일 이것을 위의 Project 3 에서처럼 learning rate 같은 것으로 발전시키려고 했다면 아주 고생하고 별 성과도 없었을 것.
하지만 데이터와 네트워크의 구조를 보면서 접근하면 아주 빠르게 모델을 발전시킬 수 있음.
Step44: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
Step45: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
Step46: Project 4 에서 개선한 네트워크도 사실 학습 속도는 매우 느린 편인데, 그 이유는 위의 그림처럼 대부분의 값이 sparse 하기 때문인 것으로 생각됨.
Step47: 위에서의 과정을 통해 알 수 있는 것은 우리가 이제까지 사용했던 matrix multiplication 과정이 사실상 그냥 일부 index의 값을 더한 것일 뿐이라는 것이다. 즉, sparse 한 네트워크에서 굳이 대부분이 곱하고 더하는 연산을 하는 과정을 단순히 몇 개 숫자의 합을 구하는 연산으로 간추릴 수 있다는 것.
Step51: Project 5
Step52: Run the following cell to recreate the network and train it once again.
Step53: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Step54: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
Step58: 위의 통계들을 보면, 필요없는 중립적인 단어가 매우 많은데, 이런 것들을 잘라냄으로써 더 중요한 자료에 집중하고, 필요없는 연산의 횟수를 줄일 수 있다. 또 많이 사용되지 않는 단어들을 제거함으로써 패턴에 영향을 덜 끼치는 아웃라이어도 줄일 수 있다.
Project 6
Step59: Run the following cell to train your network with a small polarity cutoff.
Step60: And run the following cell to test it's performance. It should be
Step61: Run the following cell to train your network with a much larger polarity cutoff.
이 경우 속도는 7배 정도 빨라지고, 정확도는 3% 정도 떨어졌는데, 실제로 문제를 푸는 경우 나쁘지 않은 trade-off.
실제 문제 중에서 training data가 아주 많은 경우, 속도를 높이는 것이 중요하기 때문.
Step62: And run the following cell to test it's performance.
Step63: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis
Step64: 위의 두 결과를 보면 network가 서로 비슷한 단어들을 잘 detect함. 즉, 제대로 학습되었음을 볼 수 있음. | Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem" (this lesson)
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network (video only - nothing in notebook)
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset<a id='lesson_1'></a>
The cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything.
End of explanation
len(reviews)
reviews[0]
labels[0]
Explanation: Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
Explanation: Lesson: Develop a Predictive Theory<a id='lesson_2'></a>
Let's look through the data and come up with a hypothesis (a predictive theory).
End of explanation
from collections import Counter
import numpy as np
Explanation: Project 1: Quick Theory Validation<a id='project_1'></a>
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the Counter class to be useful in this exercise, as well as the numpy library.
End of explanation
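If you have not used Counter before, here is a tiny standalone illustration (the toy sentence is made up) of the two operations this project relies on: incrementing counts and listing the most common keys.
demo_counts = Counter()
for word in "the movie was the best movie".split(' '):
    demo_counts[word] += 1
print(demo_counts.most_common())  # e.g. [('the', 2), ('movie', 2), ('was', 1), ('best', 1)]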
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
Explanation: We'll create three Counter objects, one for words from postive reviews, one for words from negative reviews, and one for all the words.
End of explanation
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
words = reviews[i].split(' ')
if (labels[i] == 'POSITIVE'):
for word in words:
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in words:
negative_counts[word] += 1
total_counts[word] += 1
Explanation: TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show.
End of explanation
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
Explanation: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
End of explanation
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for word, cnt in total_counts.most_common():
if(cnt >= 100):
pos_neg_ratios[word] = positive_counts[word] / float(negative_counts[word] + 1)
Explanation: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO: Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in pos_neg_ratios.
Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
End of explanation
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
Explanation: Examine the ratios you've calculated for a few words:
End of explanation
# TODO: Convert ratios to logs
for word, ratio in pos_neg_ratios.items():
if(ratio >= 1.):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log(1 / (ratio + 0.01))
Explanation: Looking closely at the values you just calculated, we see the following:
Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward postive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.
Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.
Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:
Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so the absolute distance from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.
When comparing absolute values it's easier to do that around zero than one.
To fix these issues, we'll convert all of our ratios to new values using logarithms.
TODO: Go through all the ratios you calculated and convert them to logarithms. (i.e. use np.log(ratio))
In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
End of explanation
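To see the promised symmetry concretely, here is a small worked illustration with made-up ratios (it mirrors the formula used in the cell above):
print(np.log(4.0))                 # ~ 1.386 for a word used 4x more often in positive reviews
print(-np.log(1 / (0.25 + 0.01)))  # ~ -1.347 for a word used 4x more often in negative reviews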
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
Explanation: Examine the new ratios you've calculated for the same words from before:
End of explanation
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
Explanation: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
End of explanation
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
Explanation: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
End of explanation
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts.keys())
Explanation: Project 2: Creating the Input/Output Data<a id='project_2'></a>
TODO: Create a set named vocab that contains every word in the vocabulary.
End of explanation
vocab_size = len(vocab)
print(vocab_size)
Explanation: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
End of explanation
from IPython.display import Image
Image(filename='sentiment_network_2.png')
Explanation: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
End of explanation
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = np.zeros((1, vocab_size))
Explanation: TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns.
End of explanation
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
Explanation: Run the following cell. It should display (1, 74074)
End of explanation
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
Explanation: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
End of explanation
def update_input_layer(review):
    """
    Modify the global layer_0 to represent the vector form of review.
    The element at a given index of layer_0 should represent
    how many times the given word occurs in the review.
    Args:
        review(string) - the string of the review
    Returns:
        None
    """
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
for word in review.split(' '):
layer_0[0][word2index[word]] += 1
Explanation: TODO: Complete the implementation of update_input_layer. It should count
how many times each word is used in the given review, and then store
those counts at the appropriate indices inside layer_0.
End of explanation
update_input_layer(reviews[0])
layer_0
Explanation: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
End of explanation
def get_target_for_label(label):
    """
    Convert a label to `0` or `1`.
    Args:
        label(string) - Either "POSITIVE" or "NEGATIVE".
    Returns:
        `0` or `1`.
    """
# TODO: Your code here
if(label == 'POSITIVE'):
return 1
else:
return 0
Explanation: TODO: Complete the implementation of get_target_for_labels. It should return 0 or 1,
depending on whether the given label is NEGATIVE or POSITIVE, respectively.
End of explanation
labels[0]
get_target_for_label(labels[0])
Explanation: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
End of explanation
labels[1]
get_target_for_label(labels[1])
Explanation: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
End of explanation
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(' '):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
self.layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
for word in review.split(' '):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
layer_2_error = self.get_target_for_label(label) - layer_2
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_1_2 += self.learning_rate * np.dot(layer_1.T, layer_2_delta)
self.weights_0_1 += self.learning_rate * np.dot(self.layer_0.T, layer_1_delta)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2 >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
Explanation: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3: Building a Neural Network<a id='project_3'></a>
TODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following:
- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs. (A tiny standalone sketch of this forward pass appears right after these instructions.)
- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)
- Implement the pre_process_data function to create the vocabulary for our training data generating functions
- Ensure train trains over the entire corpus
Where to Get Help if You Need it
Re-watch earlier Udacity lectures
Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code)
End of explanation
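As a quick aside (not part of Andrew's notebook), here is a tiny toy sketch of the forward pass these instructions describe: the hidden layer is purely linear and only the output layer applies a sigmoid. The sizes and values below are made up purely for illustration.
# Toy forward pass: linear hidden layer, sigmoid output (illustrative only)
import numpy as np
np.random.seed(1)
toy_input = np.random.rand(1, 6)              # stand-in for layer_0 with 6 "words"
w_0_1 = np.random.rand(6, 3)                  # input -> hidden weights
w_1_2 = np.random.rand(3, 1)                  # hidden -> output weights
hidden = toy_input.dot(w_0_1)                 # note: no activation function here
output = 1 / (1 + np.exp(-hidden.dot(w_1_2))) # sigmoid only on the output layer
print(output)                                 # a single value between 0 and 1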
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
Explanation: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
End of explanation
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
End of explanation
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
Explanation: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the code along with the video without having to type in everything.
The model in Project 3 trained very slowly. There are many possible remedies, but the first step should always be to take another look at the data.
End of explanation
layer_0
Explanation: In the output below, the first element has a count of 18. With the network structured as in the image above, a single large input element like that dominates all of the other inputs and feeds into every hidden-layer unit.
End of explanation
list(vocab)[0]
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
Explanation: Worse, that 18 in the vector above belongs to a meaningless token such as ''. Many of the other frequent tokens are probably just as meaningless, for example whitespace or common filler words.
End of explanation
review_counter.most_common()
Explanation: The dominant values mostly belong to tokens like ' ', '', and 'the'. In other words, raw counts do not highlight the signal that is actually in the data.
This means that simple counts carry a lot of noise. The quick check below makes the same point for the whole corpus.
End of explanation
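To make that point concrete, here is a small optional check that is not part of the original notebook; it assumes the total_counts Counter built earlier in this notebook and simply reports how much of the corpus the ten most frequent tokens account for.
# Optional check: share of the whole corpus taken by the ten most common tokens.
# If filler tokens such as '' and 'the' dominate, raw counts mostly encode noise.
total_words = float(sum(total_counts.values()))
for token, cnt in total_counts.most_common(10):
    print("{!r:>12}  count={:>9}  share={:.2%}".format(token, cnt, cnt / total_words))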
# TODO: -Copy the SentimentNetwork class from Project 3 lesson
# -Modify it to reduce noise, like in the video
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(' '):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
self.layer_0 *= 0
# JUST SET CORRESPONDENT ELEMENT
for word in review.split(' '):
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
self.update_input_layer(review)
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
layer_2_error = self.get_target_for_label(label) - layer_2
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_1_2 += self.learning_rate * np.dot(layer_1.T, layer_2_delta)
self.weights_0_1 += self.learning_rate * np.dot(self.layer_0.T, layer_1_delta)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
layer_1 = np.dot(self.layer_0, self.weights_0_1)
layer_2 = self.sigmoid(np.dot(layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2 >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
Explanation: Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>
TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:
* Copy the SentimentNetwork class you created earlier into the following cell.
* Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used (a tiny illustration of the difference appears right after these instructions).
End of explanation
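The small snippet below is not from the notebook; it is just a minimal illustration of the change being asked for, counting word occurrences versus merely recording that a word appeared.
# Illustration only: word counts (old input) vs. word presence (new input)
from collections import Counter
example = "the movie was the best the".split()
counts = Counter(example)                 # e.g. {'the': 3, 'movie': 1, ...}
presence = {word: 1 for word in counts}   # every used word simply gets a 1
print(counts)
print(presence)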
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
만일 이것을 위의 Project 3 에서처럼 learning rate 같은 것으로 발전시키려고 했다면 아주 고생하고 별 성과도 없었을 것.
하지만 데이터와 네트워크의 구조를 보면서 접근하면 아주 빠르게 모델을 발전시킬 수 있음.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
End of explanation
Image(filename='sentiment_network_sparse.png')
Explanation: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Explanation: Even the improved network from Project 4 is still fairly slow to train. The likely reason, as the figure above suggests, is that the input is extremely sparse: almost all of its values are zero.
End of explanation
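A quick optional check (not in the original notebook) backs this up; it reuses the global update_input_layer and layer_0 defined above to measure how many entries of the input vector are actually non-zero for the first review.
# Optional check: how sparse is layer_0 for a single review?
update_input_layer(reviews[0])
nonzero = np.count_nonzero(layer_0)
print("non-zero entries: {} out of {} ({:.3%})".format(
    nonzero, layer_0.size, nonzero / float(layer_0.size)))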
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
Explanation: What the steps above show is that the matrix multiplication we have been using is really just a sum of the weight rows for a handful of active indices. In a sparse network we can therefore replace most of the multiply-and-add work with a simple sum over a few rows.
End of explanation
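Here is an optional sanity check that is not part of the original notebook; on a larger, mostly zero input it shows that summing only the rows for the active indices gives the same answer as the full dot product while touching far fewer numbers.
# Optional check: full dot product vs. summing only the active weight rows
big_layer_0 = np.zeros(10000)
active_indices = [4, 9, 42, 1337]
big_layer_0[active_indices] = 1
big_weights = np.random.randn(10000, 5)
full_result = big_layer_0.dot(big_weights)               # multiplies every row, mostly by zero
sparse_result = big_weights[active_indices].sum(axis=0)  # touches only four rows
print(np.allclose(full_result, sparse_result))           # True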
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
for review in reviews:
for word in review.split(' '):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.layer_1 = np.zeros((1, self.hidden_nodes))
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(' '):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
layer_2_error = self.get_target_for_label(label) - layer_2
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_1_2 += self.learning_rate * np.dot(self.layer_1.T, layer_2_delta)
for index in review:
self.weights_0_1[index] += self.learning_rate * layer_1_delta[0]
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function.
layer_0 = set()
for word in review.lower().split(' '):
if(word in self.word2index.keys()):
layer_0.add(self.word2index[word])
self.layer_1 *= 0
for index in layer_0:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2 >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
Explanation: Project 5: Making our Network More Efficient<a id='project_5'></a>
TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Remove the update_input_layer function - you will not need it in this version.
* Modify init_network:
You no longer need a separate input layer, so remove any mention of self.layer_0
You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero
Modify train:
Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step.
At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review.
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review.
When updating weights_0_1, only update the individual weights that were used in the forward pass.
Modify run:
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to recreate the network and train it once again.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
End of explanation
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
Explanation: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
End of explanation
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, min_count = 10, polarity_cutoff = 0.1, hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
min_count -
polarity_cutoff -
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
review_vocab = set()
for review in reviews:
for word in review.split(' '):
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if(pos_neg_ratios[word] >= polarity_cutoff or pos_neg_ratios[word] <= -polarity_cutoff):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes, self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.layer_1 = np.zeros((1, self.hidden_nodes))
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(' '):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
# TODO: Implement the forward pass through the network.
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: Implement the back propagation pass here.
layer_2_error = self.get_target_for_label(label) - layer_2
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
layer_1_error = np.dot(layer_2_delta, self.weights_1_2.T)
layer_1_delta = layer_1_error
self.weights_1_2 += self.learning_rate * np.dot(self.layer_1.T, layer_2_delta)
for index in review:
self.weights_0_1[index] += self.learning_rate * layer_1_delta[0]
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function.
layer_0 = set()
for word in review.lower().split(' '):
if(word in self.word2index.keys()):
layer_0.add(self.word2index[word])
self.layer_1 *= 0
for index in layer_0:
self.layer_1 += self.weights_0_1[index]
layer_2 = self.sigmoid(np.dot(self.layer_1, self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(layer_2 >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
Explanation: The statistics above show a large number of neutral words that carry little useful information. Cutting them out lets the network focus on the more informative words and reduces unnecessary computation. Removing rarely used words also trims outliers that contribute little to the patterns we care about.
Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>
TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Modify pre_process_data:
Add two additional parameters: min_count and polarity_cutoff
Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)
Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like.
Change so words are only added to the vocabulary if they occur in the vocabulary more than min_count times.
Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff (a small optional check of how these cutoffs shrink the vocabulary appears right after these instructions)
Modify __init__:
Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data
End of explanation
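Before committing to particular values, you might find the following optional sketch useful. It is not part of the original notebook and only approximates the filtering the class will do, but it reuses the total_counts and pos_neg_ratios objects computed earlier to show how different settings shrink the vocabulary (words missing from pos_neg_ratios count as ratio 0, so any positive cutoff removes them).
# Optional sketch: rough vocabulary size for a few min_count / polarity_cutoff settings
def surviving_vocab_size(min_count, polarity_cutoff):
    survivors = 0
    for word, cnt in total_counts.items():
        if cnt > min_count and abs(pos_neg_ratios[word]) >= polarity_cutoff:
            survivors += 1
    return survivors

for mc, pc in [(10, 0.05), (20, 0.05), (20, 0.8)]:
    print("min_count={:>3}  polarity_cutoff={:<4}  surviving words: {}".format(
        mc, pc, surviving_vocab_size(mc, pc)))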
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to train your network with a small polarity cutoff.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: And run the following cell to test its performance. It should be
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to train your network with a much larger polarity cutoff.
In this case training runs roughly seven times faster at the cost of about 3% accuracy, which is often a reasonable trade-off in practice.
When the training set is very large, as it usually is in real problems, that kind of speedup matters a great deal.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: And run the following cell to test its performance.
End of explanation
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
Explanation: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis: What's Going on in the Weights?<a id='lesson_7'></a>
End of explanation
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
Explanation: The two results above show that the network groups similar words together nicely, which tells us it has been trained properly.
End of explanation |
513 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: Note
Step2: Definition of the layers
So let us define the layers for the convolutional net. In general, layers are assembled in a list. Each element of the list is a tuple -- first a Lasagne layer, next a dictionary containing the arguments of the layer. We will explain the layer definitions in a moment, but in general, you should look them up in the Lasagne documentation.
Nolearn allows you to skip Lasagne's incoming keyword, which specifies how the layers are connected. Instead, nolearn will automatically assume that layers are connected in the order they appear in the list.
Note
Step3: Definition of the neural network
We now define the neural network itself. But before we do this, we want to add L2 regularization to the net (see here for more). This is achieved in the little helper function below. If you don't understand exactly what this is about, just ignore this.
Step4: Now we initialize nolearn's neural net itself. We will explain each argument shortly
Step5: Training the neural network
To train the net, we call its fit method with our X and y data, as we would with any scikit learn classifier.
Step6: As we set the verbosity to 1, nolearn will print some useful information for us
Step7: Train and validation loss progress
With nolearn's visualization tools, it is possible to get some further insights into the working of the CNN. First of all, we will simply plot the log loss of the training and validation data over each epoch, as shown below
Step8: This kind of visualization can be helpful in determining whether we want to continue training or not. For instance, here we see that both loss functions are still decreasing and that more training will pay off. This graph can also help determine if we are overfitting
Step9: As can be seen above, in our case, the results are not too interesting. If the weights just look like noise, we might have to do something (e.g. use more filters so that each can specialize better).
Visualizing the layers' activities
To see through the "eyes" of the net, we can plot the activities produced by different layers. The plot_conv_activity function is made for that. The first argument, again, is a layer, the second argument an image in the bc01 format (which is why we use X[0
Step10: Here we can see that depending on the learned filters, the neural net represents the image in different ways, which is what we should expect. If, e.g., some images were completely black, that could indicate that the corresponding filters have not learned anything useful. When you find yourself in such a situation, training longer or initializing the weights differently might do the trick.
Plot occlusion images
A possibility to check if the net, for instance, overfits or learns important features is to occlude part of the image. Then we can check whether the net still makes correct predictions. The idea behind that is the following
Step11: Here we see which parts of the number are most important for correct classification. We see that the critical parts are all directly above the numbers, so this seems to work out. For more complex images with different objects in the scene, this function should be more useful, though.
Finding a good architecture
This section tries to help you go deep with your convolutional neural net. To do so, one cannot simply increase the number of convolutional layers at will. It is important that the layers have a sufficiently high learning capacity while they should cover approximately 100% of the incoming image (Xudong Cao, 2015).
The usual approach is to try to go deep with convolutional layers. If you chain too many convolutional layers, though, the learning capacity of the layers falls too low. At this point, you have to add a max pooling layer. Use too many max pooling layers, and your image coverage grows larger than the image, which is clearly pointless. Striking the right balance while maximizing the depth of your layer is the final goal.
It is generally a good idea to use small filter sizes for your convolutional layers, generally <b>3x3</b>. The reason for this is that it lets us cover the same receptive field of the image while using fewer parameters than a larger filter size would require. Moreover, deeper stacks of convolutional layers are more expressive (see here for more).
Step12: A shallow net
Let us try out a simple architecture and see how we fare.
Step13: To see information about the capacity and coverage of each layer, we need to set the verbosity of the net to a value of 2 and then initialize the net. We next pass the initialized net to PrintLayerInfo to see some useful information. By the way, we could also just call the fit method of the net to get the same outcome, but since we don't want to fit just now, we proceed as shown below.
Step14: This net is fine. The capacity never falls below 1/6, which would be 16.7%, and the coverage of the image never exceeds 100%. However, with only 4 convolutional layers, this net is not very deep and will probably not achieve the best possible results.
What we also see is the role of max pooling. If we look at 'maxpool2d1', after this layer, the capacity of the net is increased. Max pooling thus helps to increase capacity should it dip too low. However, max pooling also significantly increases the coverage of the image. So if we use max pooling too often, the coverage will quickly exceed 100% and we cannot go sufficiently deep.
Too little maxpooling
Now let us try an architecture that uses a lot of convolutional layers but only one maxpooling layer.
Step15: Here we have a very deep net but we have a problem
Step16: This net uses too much maxpooling for too small an image. The later layers, colored in cyan, would cover more than 100% of the image. So this network is clearly also suboptimal.
A good compromise
Now let us have a look at a reasonably deep architecture that satisfies the criteria we set out to meet
Step17: With 10 convolutional layers, this network is rather deep, given the small image size. Yet the learning capacity is always sufficiently large and never is more than 100% of the image covered. This could just be a good solution. Maybe you would like to give this architecture a spin?
Note 1 | Python Code:
import os
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from lasagne.layers import DenseLayer
from lasagne.layers import InputLayer
from lasagne.layers import DropoutLayer
from lasagne.layers import Conv2DLayer
from lasagne.layers import MaxPool2DLayer
from lasagne.nonlinearities import softmax
from lasagne.updates import adam
from lasagne.layers import get_all_params
from nolearn.lasagne import NeuralNet
from nolearn.lasagne import TrainSplit
from nolearn.lasagne import objective
Explanation: Tutorial: Training convolutional neural networks with nolearn
Author: Benjamin Bossan
This tutorial's goal is to teach you how to use nolearn to train convolutional neural networks (CNNs). The nolearn documentation can be found here. We assume that you have some general knowledge about machine learning in general or neural nets specifically, but want to learn more about convolutional neural networks and nolearn.
We will cover several points in this notebook.
How to load image data such that we can use it for our purpose. For this tutorial, we will use the MNIST data set, which consists of images of the numbers from 0 to 9.
How to properly define layers of the net. A good choice of layers, i.e. a good network architecture, is most important to get nice results out of a neural net.
The definition of the neural network itself. Here we define important hyper-parameters.
Next we will see how visualizations may help us to further refine the network.
Finally, we will show you how nolearn can help us find better architectures for our neural network.
Imports
End of explanation
def load_mnist(path):
X = []
y = []
with open(path, 'rb') as f:
next(f) # skip header
for line in f:
yi, xi = line.split(',', 1)
y.append(yi)
X.append(xi.split(','))
# Theano works with fp32 precision
X = np.array(X).astype(np.float32)
y = np.array(y).astype(np.int32)
# apply some very simple normalization to the data
X -= X.mean()
X /= X.std()
# For convolutional layers, the default shape of data is bc01,
# i.e. batch size x color channels x image dimension 1 x image dimension 2.
# Therefore, we reshape the X data to -1, 1, 28, 28.
X = X.reshape(
-1, # number of samples, -1 makes it so that this number is determined automatically
1, # 1 color channel, since images are only black and white
28, # first image dimension (vertical)
28, # second image dimension (horizontal)
)
return X, y
# here you should enter the path to your MNIST data
#path = os.path.join(os.path.expanduser('~'), 'data/mnist/train.csv')
#X, y = load_mnist(path)
# fallback: use scikit-learn's small 8x8 digits instead of the 28x28 MNIST CSV
from sklearn.datasets import load_digits
d = load_digits()
# reshape to bc01 (samples x 1 channel x 8 x 8) and cast for Theano
X = d.images.reshape(-1, 1, 8, 8).astype(np.float32)
X = (X - X.mean()) / X.std()
y = d.target.astype(np.int32)
figs, axes = plt.subplots(4, 4, figsize=(6, 6))
for i in range(4):
for j in range(4):
axes[i, j].imshow(-X[i + 4 * j].squeeze(), cmap='gray', interpolation='none')
axes[i, j].set_xticks([])
axes[i, j].set_yticks([])
axes[i, j].set_title("Label: {}".format(y[i + 4 * j]))
axes[i, j].axis('off')
Explanation: Note: If your GPU supports it, you should try using lasagne.cuda_convnet.Conv2DCCLayer and lasagne.cuda_convnet.MaxPool2DCCLayer, which could give you a nice speed up.
Loading MNIST data
This little helper function loads the MNIST data available here.
End of explanation
layers0 = [
# layer dealing with the input data
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
# first stage of our convolutional layers
(Conv2DLayer, {'num_filters': 96, 'filter_size': 5}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': 3}),
(MaxPool2DLayer, {'pool_size': 2}),
# second stage of our convolutional layers
(Conv2DLayer, {'num_filters': 128, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 128, 'filter_size': 3}),
(Conv2DLayer, {'num_filters': 128, 'filter_size': 3}),
(MaxPool2DLayer, {'pool_size': 2}),
# two dense layers with dropout
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
# the output layer
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
Explanation: Definition of the layers
So let us define the layers for the convolutional net. In general, layers are assembled in a list. Each element of the list is a tuple -- first a Lasagne layer, next a dictionary containing the arguments of the layer. We will explain the layer definitions in a moment, but in general, you should look them up in the Lasagne documentation.
Nolearn allows you to skip Lasagne's incoming keyword, which specifies how the layers are connected. Instead, nolearn will automatically assume that layers are connected in the order they appear in the list.
Note: Of course you can manually set the incoming parameter if your neural net's layers are connected differently. To do so, you have to give the corresponding layer a name (e.g. 'name': 'my layer') and use that name as a reference ('incoming': 'my layer').
The layers we use are the following:
InputLayer: We have to specify the shape of the data. For image data, it is batch size x color channels x image dimension 1 x image dimension 2 (aka bc01). Here you should generally just leave the batch size as None, so that it is taken care of automatically. The other dimensions are given by X.
Conv2DLayer: The most important keywords are num_filters and filter_size. The former indicates the number of channels -- the more you choose, the more different filters can be learned by the CNN. Generally, the first convolutional layers will learn simple features, such as edges, while deeper layers can learn more abstract features. Therefore, you should increase the number of filters the deeper you go. The filter_size is the size of the filter/kernel. The current consensus is to always use 3x3 filters, as these allow the net to cover the same number of image pixels with fewer parameters than larger filters do.
MaxPool2DLayer: This layer performs max pooling and hopefully provides translation invariance. We need to indicate the region over which it pools, with 2x2 being the default of most users.
DenseLayer: This is your vanilla fully-connected layer; you should indicate the number of 'neurons' with the num_units argument. The very last layer is assumed to be the output layer. We thus set the number of units to be the number of classes, 10, and choose softmax as the output nonlinearity, as we are dealing with a classification task.
DropoutLayer: Dropout is a common technique to regularize neural networks. It is almost always a good idea to include dropout between your dense layers.
Apart from these arguments, the Lasagne layers have very reasonable defaults concerning weight initialization, nonlinearities (rectified linear units), etc.
End of explanation
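As a quick illustration of the note above about manual wiring, here is a minimal sketch (not part of the original tutorial) showing how layers could be named and connected explicitly via the incoming keyword; the layer names themselves are made up for illustration.
# Hedged sketch: explicit naming and wiring of layers, following the note above
layers_named = [
    (InputLayer, {'name': 'input', 'shape': (None, 1, 28, 28)}),
    (Conv2DLayer, {'name': 'conv1', 'incoming': 'input',
                   'num_filters': 32, 'filter_size': 3}),
    # this layer is wired to 'conv1' by name instead of relying on list order
    (DenseLayer, {'name': 'output', 'incoming': 'conv1',
                  'num_units': 10, 'nonlinearity': softmax}),
]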
def regularization_objective(layers, lambda1=0., lambda2=0., *args, **kwargs):
# default loss
losses = objective(layers, *args, **kwargs)
# get the layers' weights, but only those that should be regularized
# (i.e. not the biases)
weights = get_all_params(layers[-1], regularizable=True)
# sum of absolute weights for L1
sum_abs_weights = sum([abs(w).sum() for w in weights])
# sum of squared weights for L2
sum_squared_weights = sum([(w ** 2).sum() for w in weights])
# add weights to regular loss
losses += lambda1 * sum_abs_weights + lambda2 * sum_squared_weights
return losses
Explanation: Definition of the neural network
We now define the neural network itself. But before we do this, we want to add L2 regularization to the net (see here for more). This is achieved in the little helper function below. If you don't understand exactly what this is about, just ignore this.
End of explanation
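To make the helper more concrete, here is a tiny numerical sketch (not part of the original tutorial) of the two penalty terms on a toy weight array; the numbers are arbitrary.
# Hedged example: L1 and L2 penalty terms for a toy weight vector
w_toy = np.array([0.5, -1.0, 2.0])
lambda1, lambda2 = 0.0, 0.0025            # same roles as in regularization_objective
l1_term = lambda1 * np.abs(w_toy).sum()   # lambda1 * sum(|w|)
l2_term = lambda2 * (w_toy ** 2).sum()    # lambda2 * sum(w^2)
# the regularized loss is the ordinary loss plus these two terms
print(l1_term, l2_term)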
net0 = NeuralNet(
layers=layers0,
max_epochs=10,
update=adam,
update_learning_rate=0.0002,
objective=regularization_objective,
objective_lambda2=0.0025,
train_split=TrainSplit(eval_size=0.25),
verbose=1,
)
Explanation: Now we initialize nolearn's neural net itself. We will explain each argument shortly:
* The most important argument is the layers argument, which should be the list of layers defined above.
* max_epochs is simply the number of epochs the net learns with each call to fit (an 'epoch' is a full training cycle using all training data).
* As update, we choose adam, which for many problems is a good first choice as updating rule.
* The objective of our net will be the regularization_objective we just defined.
* To change the lambda2 parameter of our objective function, we set the objective_lambda2 parameter. The NeuralNetwork class will then automatically set this value. Usually, moderate L2 regularization is applied, whereas L1 regularization is less frequent.
* For 'adam', a small learning rate is best, so we set it with the update_learning_rate argument (nolearn will automatically interpret this argument to mean the learning_rate argument of the update parameter, i.e. adam in our case).
* The NeuralNet will hold out some of the training data for validation if we set the eval_size of the TrainSplit to a number greater than 0. This will allow us to monitor how well the net generalizes to yet unseen data. By setting this argument to 1/4, we tell the net to hold out 25% of the samples for validation.
* Finally, we set verbose to 1, which will result in the net giving us some useful information.
End of explanation
net0.fit(X, y)
Explanation: Training the neural network
To train the net, we call its fit method with our X and y data, as we would with any scikit learn classifier.
End of explanation
from nolearn.lasagne.visualize import plot_loss
from nolearn.lasagne.visualize import plot_conv_weights
from nolearn.lasagne.visualize import plot_conv_activity
from nolearn.lasagne.visualize import plot_occlusion
Explanation: As we set the verbosity to 1, nolearn will print some useful information for us:
First of all, some general information about the net and its layers is printed. Then, during training, the progress will be printed after each epoch.
The train loss is the loss/cost that the net tries to minimize. For this example, this is the log loss (cross entropy).
The valid loss is the loss for the hold out validation set. You should expect this value to indicate how well your model generalizes to yet unseen data.
train/val is simply the ratio of train loss to valid loss. If this value is very low, i.e. if the train loss is much better than your valid loss, it means that the net has probably overfitted the train data.
When we are dealing with a classification task, the accuracy score of the validation set, valid acc, is also printed.
dur is simply the duration it took to process the given epoch.
In addition to this, nolearn will color the best train and valid loss so far, so that it is easy to spot whether the net makes progress.
Visualizations
Diagnosing what's wrong with your neural network if the results are unsatisfying can sometimes be difficult, something closer to an art than a science. But with nolearn's visualization tools, we should be able to get some insights that help us diagnose if something is wrong.
End of explanation
plot_loss(net0)
Explanation: Train and validation loss progress
With nolearn's visualization tools, it is possible to get some further insights into the working of the CNN. First of all, we will simply plot the log loss of the training and validation data over each epoch, as shown below:
End of explanation
plot_conv_weights(net0.layers_[1], figsize=(4, 4))
Explanation: This kind of visualization can be helpful in determining whether we want to continue training or not. For instance, here we see that both loss functions still are still decreasing and that more training will pay off. This graph can also help determine if we are overfitting: If the train loss is much lower than the validation loss, we should probably do something to regularize the net.
Visualizing layer weights
We can further have a look at the weights learned by the net. The first argument of the function should be the layer we want to visualize. The layers can be accessed through the layers_ attribute and then by name (e.g. 'conv2dcc1') or by index, as below. (Obviously, visualizing the weights only makes sense for convolutional layers.)
End of explanation
x = X[0:1]
plot_conv_activity(net0.layers_[1], x)
Explanation: As can be seen above, in our case, the results are not too interesting. If the weights just look like noise, we might have to do something (e.g. use more filters so that each can specialize better).
Visualizing the layers' activities
To see through the "eyes" of the net, we can plot the activities produced by different layers. The plot_conv_activity function is made for that. The first argument, again, is a layer, the second argument an image in the bc01 format (which is why we use X[0:1] instead of just X[0]).
End of explanation
plot_occlusion(net0, X[:5], y[:5])
Explanation: Here we can see that depending on the learned filters, the neural net represents the image in different ways, which is what we should expect. If, e.g., some images were completely black, that could indicate that the corresponding filters have not learned anything useful. When you find yourself in such a situation, training longer or initializing the weights differently might do the trick.
Plot occlusion images
A possibility to check if the net, for instance, overfits or learns important features is to occlude part of the image. Then we can check whether the net still makes correct predictions. The idea behind that is the following: If the most critical part of an image is something like the head of a person, that is probably right. If it is instead a random part of the background, the net probably overfits (see here for more).
With the plot_occlusion function, we can check this. The first argument is the neural net, the second the X data, the third the y data. Be warned that this function can be quite slow for larger images.
End of explanation
from nolearn.lasagne import PrintLayerInfo
Explanation: Here we see which parts of the number are most important for correct classification. We see that the critical parts are all directly above the numbers, so this seems to work out. For more complex images with different objects in the scene, this function should be more useful, though.
Finding a good architecture
This section tries to help you go deep with your convolutional neural net. To do so, one cannot simply increase the number of convolutional layers at will. It is important that the layers have a sufficiently high learning capacity while they should cover approximately 100% of the incoming image (Xudong Cao, 2015).
The usual approach is to try to go deep with convolutional layers. If you chain too many convolutional layers, though, the learning capacity of the layers falls too low. At this point, you have to add a max pooling layer. Use too many max pooling layers, and your image coverage grows larger than the image, which is clearly pointless. Striking the right balance while maximizing the depth of your layer is the final goal.
It is generally a good idea to use small filter sizes for your convolutional layers, generally <b>3x3</b>. The reason is that this allows the network to cover the same receptive field of the image while using fewer parameters than a larger filter size would require. Moreover, deeper stacks of convolutional layers are more expressive (see here for more).
End of explanation
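To make the capacity/coverage trade-off concrete, the following back-of-envelope sketch (not part of the original tutorial, and only an approximation of what PrintLayerInfo reports) tracks how the receptive field of a stack of stride-1 convolutions and 2x2 max-pooling layers grows on a 28x28 image.
# Hedged sketch: rough receptive-field growth along one image dimension.
# A stride-1 conv adds (filter_size - 1) * jump pixels; a pool also adds
# (pool_size - 1) * jump and then multiplies the jump between output units.
def rough_receptive_field(stack, image_size=28):
    rf, jump = 1, 1
    for kind, size in stack:
        rf += (size - 1) * jump
        if kind == 'pool':
            jump *= size
    return rf, float(rf) / image_size

# the conv/pool sequence of layers1 above: conv3, pool2, conv3, conv3, pool2, conv3, pool2
stack_layers1 = [('conv', 3), ('pool', 2), ('conv', 3), ('conv', 3),
                 ('pool', 2), ('conv', 3), ('pool', 2)]
print(rough_receptive_field(stack_layers1))  # roughly 26 pixels, ~93% of a 28-pixel side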
layers1 = [
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 96, 'filter_size': (3, 3)}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
net1 = NeuralNet(
layers=layers1,
update_learning_rate=0.01,
verbose=2,
)
Explanation: A shallow net
Let us try out a simple architecture and see how we fare.
End of explanation
net1.initialize()
layer_info = PrintLayerInfo()
layer_info(net1)
Explanation: To see information about the capacity and coverage of each layer, we need to set the verbosity of the net to a value of 2 and then initialize the net. We next pass the initialized net to PrintLayerInfo to see some useful information. By the way, we could also just call the fit method of the net to get the same outcome, but since we don't want to fit just now, we proceed as shown below.
End of explanation
layers2 = [
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3)}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
net2 = NeuralNet(
layers=layers2,
update_learning_rate=0.01,
verbose=2,
)
net2.initialize()
layer_info(net2)
Explanation: This net is fine. The capacity never falls below 1/6, which would be 16.7%, and the coverage of the image never exceeds 100%. However, with only 4 convolutional layers, this net is not very deep and will probably not achieve the best possible results.
What we also see is the role of max pooling. If we look at 'maxpool2d1', after this layer, the capacity of the net is increased. Max pooling thus helps to increase capacity should it dip too low. However, max pooling also significantly increases the coverage of the image. So if we use max pooling too often, the coverage will quickly exceed 100% and we cannot go sufficiently deep.
Too little maxpooling
Now let us try an architecture that uses a lot of convolutional layers but only one maxpooling layer.
End of explanation
layers3 = [
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
net3 = NeuralNet(
layers=layers3,
update_learning_rate=0.01,
verbose=2,
)
net3.initialize()
layer_info(net3)
Explanation: Here we have a very deep net but we have a problem: The lack of max pooling layers means that the capacity of the net dips below 16.7%. The corresponding layers are shown in magenta. We need to find a better solution.
Too much maxpooling
Here is an architecture with too much maxpooling. For illustrative purposes, we set the pad parameter to 1; without it, the image size would shrink below 0, at which point the code will raise an error.
End of explanation
layers4 = [
(InputLayer, {'shape': (None, X.shape[1], X.shape[2], X.shape[3])}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 32, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(Conv2DLayer, {'num_filters': 64, 'filter_size': (3, 3), 'pad': 1}),
(MaxPool2DLayer, {'pool_size': (2, 2)}),
(DenseLayer, {'num_units': 64}),
(DropoutLayer, {}),
(DenseLayer, {'num_units': 64}),
(DenseLayer, {'num_units': 10, 'nonlinearity': softmax}),
]
net4 = NeuralNet(
layers=layers4,
update_learning_rate=0.01,
verbose=2,
)
net4.initialize()
layer_info(net4)
Explanation: This net uses too much maxpooling for too small an image. The later layers, colored in cyan, would cover more than 100% of the image. So this network is clearly also suboptimal.
A good compromise
Now let us have a look at a reasonably deep architecture that satisfies the criteria we set out to meet:
End of explanation
net4.verbose = 3
layer_info(net4)
Explanation: With 10 convolutional layers, this network is rather deep, given the small image size. Yet the learning capacity is always sufficiently large and never is more than 100% of the image covered. This could just be a good solution. Maybe you would like to give this architecture a spin?
Note 1: The MNIST images typically don't cover the whole of the 28x28 image size. Therefore, an image coverage of less than 100% is probably very acceptable. For other image data sets such as CIFAR or ImageNet, it is recommended to cover the whole image.
Note 2: This analysis does not tell us how many feature maps (i.e. number of filters per convolutional layer) to use. Here we have to experiment with different values. Larger values mean that the network should learn more types of features but also increase the risk of overfitting (and may exceed the available memory). In general though, deeper layers (those farther down) are supposed to learn more complex features and should thus have more feature maps.
Even more information
It is possible to get more information by increasing the verbosity level beyond 2.
End of explanation |
514 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Robot Calibration
Nominal Robot
A nominal robot model
Step1: Real Robots
Real robots do not conform perfectly to the nominal parameters
Small errors in the robot model can generate large errors in Cartesian position
Sources of errors include, but are not limited to
Step2: Get Real (aka Measured) Poses
In real life, these poses would be measured using metrology equipment (e.g., laser tracker, CMM)
Step3: Split Calibration and Validation Measures
A portion of the measured configurations and positions should be set aside for validation after calibration (i.e., optimization)
This is to prevent/check the optimized model for overfitting
Step4: Get Nominal Position Errors
This nominal model is our starting point for calibration
The errors are in millimetres
Step5: Calibration
Step6: Results
A calibrated robot model is never perfect in real life
The goal is often to reduce the max error under a desired threshold | Python Code:
from pybotics.robot import Robot
from pybotics.predefined_models import ur10
nominal_robot = Robot.from_parameters(ur10())
import pandas as pd
def display_robot_kinematics(robot: Robot):
df = pd.DataFrame(robot.kinematic_chain.matrix)
df.columns = ["alpha", "a", "theta", "d"]
display(df)
display_robot_kinematics(nominal_robot)
Explanation: Robot Calibration
Nominal Robot
A nominal robot model:
Represents what the robot manufacturer intended as a kinematic model
Is mathematically ideal
End of explanation
import numpy as np
from copy import deepcopy
real_robot = deepcopy(nominal_robot)
# let's pretend our real robot has small joint offsets
# in real life, this would be a joint mastering issue (level-1 calibration)
# https://en.wikipedia.org/wiki/Robot_calibration
for link in real_robot.kinematic_chain.links:
link.theta += np.random.uniform(
low=np.deg2rad(-0.1),
high=np.deg2rad(0.1)
)
display_robot_kinematics(real_robot)
Explanation: Real Robots
Real robots do not conform perfectly to the nominal parameters
Small errors in the robot model can generate large errors in Cartesian position
Sources of errors include, but are not limited to:
Kinematic errors
Mechanical tolerances
Angle offsets
Non-kinematic errors
Joint stiffness
Gravity
Temperature
Friction
End of explanation
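The same idea extends to other kinematic parameters; the sketch below is an illustrative assumption (not part of the original example) that also perturbs the link lengths, assuming each link exposes a settable `a` attribute alongside `theta`, as the kinematic-chain table above suggests.
# Hedged sketch: add small link-length (a) errors to mimic mechanical tolerances.
# Assumes pybotics link objects expose a settable `a` attribute like `theta`.
toleranced_robot = deepcopy(real_robot)
for link in toleranced_robot.kinematic_chain.links:
    link.a += np.random.uniform(low=-0.05, high=0.05)  # +/- 0.05 mm, arbitrary
display_robot_kinematics(toleranced_robot)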
joints = []
positions = []
for i in range(1000):
q = real_robot.random_joints()
pose = real_robot.fk(q)
joints.append(q)
positions.append(pose[:-1,-1])
pd.DataFrame(joints).describe()
pd.DataFrame(positions, columns=['x','y','z']).describe()
Explanation: Get Real (aka Measured) Poses
In real life, these poses would be measured using metrology equipment (e.g., laser tracker, CMM)
End of explanation
from sklearn.model_selection import train_test_split
split = train_test_split(joints, positions, test_size=0.3)
train_joints = split[0]
test_joints = split[1]
train_positions = split[2]
test_positions = split[3]
Explanation: Split Calibration and Validation Measures
A portion of the measured configurations and positions should be set aside for validation after calibration (i.e., optimization)
This is to prevent/check the optimized model for overfitting
End of explanation
from pybotics.optimization import compute_absolute_errors
nominal_errors = compute_absolute_errors(
qs=test_joints,
positions=test_positions,
robot=nominal_robot
)
display(pd.Series(nominal_errors).describe())
Explanation: Get Nominal Position Errors
This nominal model is our starting point for calibration
The errors are in millimetres
End of explanation
from pybotics.optimization import OptimizationHandler
# init calibration handler
handler = OptimizationHandler(nominal_robot)
# set handler to solve for theta parameters
kc_mask_matrix = np.zeros_like(nominal_robot.kinematic_chain.matrix, dtype=bool)
kc_mask_matrix[:,2] = True
display(kc_mask_matrix)
handler.kinematic_chain_mask = kc_mask_matrix.ravel()
from scipy.optimize import least_squares
from pybotics.optimization import optimize_accuracy
# run optimization
result = least_squares(
fun=optimize_accuracy,
x0=handler.generate_optimization_vector(),
args=(handler, train_joints, train_positions),
verbose=2
) # type: scipy.optimize.OptimizeResult
Explanation: Calibration
End of explanation
calibrated_robot = handler.robot
calibrated_errors = compute_absolute_errors(
qs=test_joints,
positions=test_positions,
robot=calibrated_robot
)
display(pd.Series(calibrated_errors).describe())
import matplotlib.pyplot as plt
%matplotlib inline
plt.xscale("log")
plt.hist(nominal_errors, color="C0", label="Nominal")
plt.hist(calibrated_errors, color="C1", label="Calibrated")
plt.legend()
plt.xlabel("Absolute Error [mm]")
plt.ylabel("Frequency")
Explanation: Results
A calibrated robot model is never perfect in real life
The goal is often to reduce the max error under a desired threshold
End of explanation |
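As a final sanity check, one might compare the calibrated worst-case error against an application-specific tolerance; the threshold below is a made-up value for illustration.
# Hedged sketch: compare the worst-case calibrated error to a hypothetical tolerance
desired_threshold = 0.5  # mm, arbitrary application requirement
max_error = np.max(calibrated_errors)
print("max error: {:.3f} mm, within tolerance: {}".format(max_error, max_error < desired_threshold))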
515 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PIPITS Fungal ITS-dedicated Pipeline
The default pair merge algorithm in vsearch discards 90% of the data. This was observed in other datasets and is believed to be overly conservative. PIPITS offers support for using PEAR as a dedicated alternative
Dependencies
|| PIPITS ||
Follow instructions provided at
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step4: Step 4
Step5: Step 5
Step6: Step 6
Step7: Step 7
Step8: Step 7 | Python Code:
import os
# Provide the directory for your index and read files
ITS = '/home/roli/FORESTs_BHAVYA/WoodsLake/raw_seq/ITS/'
# Provide the dataset name, its input directory, and the metadata file name for each library
datasets = [['ITS',ITS,'ITS.metadata.pipits.Woods.tsv']]
# Ensure your reads files are named accordingly (or modify to suit your needs)
readFile1 = 'read1.fq.gz'
readFile2 = 'read2.fq.gz'
indexFile1 = 'index_read1.fq.gz'
indexFile2 = 'index_read2.fq.gz'
# Example of metadata file
#Index1 Index2 Name
#AATTCAA CATCCGG RG1
#CGCGCAG TCATGGT RG2
#AAGGTCT AGAACCG RG3
#ACTGGAC TGGAATA RG4
## Again, for our pipeline Index1 typically is the reverse complement of the reverse barcode, while Index2 is the forward barcode.
Explanation: PIPITS Fungal ITS-dedicated Pipeline
The default pair merge algorithm in vsearch discards 90% of the data. This was observed in other datasets and is believed to be overly conservative. PIPITS offers support for using PEAR as a dedicated alternative
Dependencies
|| PIPITS ||
Follow instructions provided at:
https://github.com/hsgweon/pipits
Note: all dependencies which require 'sudo' will already be met (i.e. don't bother running those commands... they won't work anyways)
|| deML ||
Follow instructions provided at:
https://github.com/grenaud/deML
|| phyloseq ||
conda install -c conda-forge r-igraph
Rscript -e "source('http://bioconductor.org/biocLite.R');biocLite('phyloseq')"
|| FUNGuild ||
download the FUNGuild script:
https://raw.githubusercontent.com/UMNFuN/FUNGuild/master/Guilds_v1.1.py
|| PEAR ||
download at: https://sco.h-its.org/exelixis/web/software/pear/
Citations
Gweon, H. S., Oliver, A., Taylor, J., Booth, T., Gibbs, M., Read, D. S., et al. (2015). PIPITS: an automated pipeline for analyses of fungal internal transcribed spacer sequences from the Illumina sequencing platform. Methods in ecology and evolution, 6(8), 973-980.
Renaud, G., Stenzel, U., Maricic, T., Wiebe, V., & Kelso, J. (2014). deML: robust demultiplexing of Illumina sequences using a likelihood-based approach. Bioinformatics, 31(5), 770-772.
McMurdie and Holmes (2013) phyloseq: An R Package for Reproducible Interactive Analysis and Graphics of Microbiome Census Data. PLoS ONE. 8(4):e61217
Nguyen NH, Song Z, Bates ST, Branco S, Tedersoo L, Menke J, Schilling JS, Kennedy PG. 2016. FUNGuild: An open annotation tool for parsing fungal community datasets by ecological guild. Fungal Ecology 20:241–248.
Zhang J, Kobert K, Flouri T, Stamatakis A. 2013. PEAR: a fast and accurate Illumina Paired-End reAd mergeR. Bioinformatics, 30(5): 614-620.
Last Modified by R. Wilhelm on January 2nd, 2018
Step 1: User Input
End of explanation
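Before running deML it can help to sanity-check the metadata file; the sketch below is not part of the original pipeline and only confirms that the table reads cleanly and that the sample names (the last column in the example layout above) are unique.
# Hedged sketch: quick sanity check of the deML metadata tables defined above
import pandas as pd
for name, directory, metadata in datasets:
    md = pd.read_table(directory + metadata)
    print(name, md.shape)
    # sample names should be unique, otherwise deML output files will collide
    assert md.iloc[:, -1].is_unique, "duplicate sample names in " + metadata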
# Ignore all the 'conflict' errors. The reads are paired so the conflicts are bogus (i.e. it gives a warning every time a barcode appears in multiple samples, but no pairs are duplicated)
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
metadata = directory+dataset[2]
index1 = directory+indexFile1
index2 = directory+indexFile2
read1 = directory+readFile1
read2 = directory+readFile2
# Make output directory
%mkdir $directory/pipits_input/
# Run deML ## Note: you may get an error involving 'ulimit'. If so, exit your notebook. Enter 'ulimit -n 9999' at the command line, then restart a new notebook.
!deML -i $metadata -f $read1 -r $read2 -if1 $index1 -if2 $index2 -o $directory/pipits_input/$name
# Remove unnecessary 'failed' reads and index files
%rm $directory/pipits_input/*.fail.* $directory/pipits_input/unknown*
Explanation: Step 2: Demultiplex Raw Reads
End of explanation
import glob, re
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
# Remove Previously Prepended Name (PIPITS wanted something)
for file in glob.glob(directory+"pipits_input/"+name+"_*"):
new_name = re.sub(name+"_","",file)
os.rename(file, new_name)
# Rename files to use the .fastq.gz extension (PIPITS is picky about file extensions)
for file in glob.glob(directory+"pipits_input/*.fq.gz"):
new_name = re.sub(".fq.gz",".fastq.gz",file)
os.rename(file, new_name)
# Remove Unbinned Reads
%rm $directory/pipits_input/unknown*
# Run PIPITS List Prep
input_dir = directory+"pipits_input/"
output_dir = directory+name+".readpairslist.txt"
!pipits_getreadpairslist -i $input_dir -o $output_dir -f
Explanation: Step 3: Make Sample Mapping File (aka. 'readpairlist')
End of explanation
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
input_dir = directory+"pipits_input/"
output_dir = directory+"pipits_prep/"
readpairfile = directory+name+".readpairslist.txt"
!pipits_prep -i $input_dir -o $output_dir -l $readpairfile
Explanation: Step 4: Pre-process Data with PIPITS (merge and QC)
End of explanation
ITS_Region = "ITS1"
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
input_file = directory+"pipits_prep/prepped.fasta"
output_dir = directory+"pipits_funits/"
!pipits_funits -i $input_file -o $output_dir -x $ITS_Region
Explanation: Step 4: Extract Variable Region (User Input Required)
End of explanation
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
input_file = directory+"pipits_funits/ITS.fasta"
output_dir = directory+"PIPITS_final/"
!pipits_process -i $input_file -o $output_dir --Xmx 20G
Explanation: Step 5: Cluster and Assign Taxonomy
End of explanation
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
# Prepare PIPITS output for FUNGuild
!pipits_funguild.py -i $directory/PIPITS_final/otu_table.txt -o $directory/PIPITS_final/otu_table_funguild.txt
# Run FUNGuild
!python /home/db/FUNGuild/Guilds_v1.1.py -otu $directory/PIPITS_final/otu_table_funguild.txt -db fungi -m -u
Explanation: Step 6: Push OTU Table through FUNGuild
End of explanation
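To get a quick overview of the guild assignments, one could tabulate the FUNGuild output; both the file name and the 'Guild' column below are assumptions based on FUNGuild's usual behaviour (it typically writes a *.guilds.txt file next to its input), so adjust them to whatever the script actually produced.
# Hedged sketch: tally OTUs per guild from the assumed FUNGuild output file
import pandas as pd
for dataset in datasets:
    directory = dataset[1]
    guilds_file = directory + "PIPITS_final/otu_table_funguild.guilds.txt"  # assumed name
    guilds = pd.read_table(guilds_file)
    print(guilds['Guild'].value_counts().head(10))  # 'Guild' column name assumed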
## Setup R-Magic for Jupyter Notebooks
import rpy2
import pandas as pd
%load_ext rpy2.ipython
%R library(phyloseq)
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
metadata = dataset[2]
# Input Biom
biom = directory+"/PIPITS_final/otu_table.biom"
%R -i biom
%R x <- import_biom(biom)
# Fix taxonomy table
%R colnames(tax_table(x)) <- c("Domain","Phylum","Class","Order","Family","Genus","Species")
%R tax_table(x) = gsub("k__| p__| c__| o__| f__| g__| s__","",tax_table(x))
# Merge Mapping into Phyloseq
sample_file = pd.read_table(directory+metadata, keep_default_na=False)
%R -i sample_file
%R rownames(sample_file) <- sample_file$X.SampleID
%R sample_file$X.SampleID <- NULL
%R sample_file <- sample_data(sample_file)
%R p <- merge_phyloseq(x, sample_file)
# Save Phyloseq Object as '.rds'
output = directory+"/PIPITS_final/p_"+name+".pipits.final.rds"
%R -i output
%R saveRDS(p, file = output)
# Confirm Output
%R print(p)
Explanation: Step 7: Import into R
End of explanation
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
%rm -r $directory/pipits_prep/
%rm -r $directory/pipits_funits/
%rm -r $directory/pipits_input/
del_me = directory+name+".readpairslist.txt"
%rm $del_me
Explanation: Step 7: Clean-up Intermediate Files and Final Outputs
End of explanation |
516 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-1', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NCC
Source ID: SANDBOX-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:25
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario simulations (see Table 12.1 of IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
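For example, a run driven by prescribed CO2 concentrations could record its provision by picking the matching code from the list above. The value below is purely illustrative; the meaning of each code is defined by the CMIP6 controlled vocabulary.
# e.g. DOC.set_value("C")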
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
517 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A quick recipe to extract clusters from thresholded SPMt maps and turn them into a map of ROIs
First threshold your SPMt map
Step1: Define a folder containing rough hand-drawn ROIs over the clusters
The names given are only approximate, reflecting the rough location of each cluster.
Step2: Take each hand-drawn ROI and generate a cleaned version of it using the intersection with the thresholded map
Step3: Collect all the cleaned ROIs in a single Nifti
Step4: Compile values from the created ROIs
Step5: Generate a plot with these ROI values
With relation to age, grouping subjects by genotypes, and correcting values for covariates | Python Code:
original_fp = '/home/grg/spm/analyses/analysis_20170228/MD_DARTEL_csf5_interaction_linearage/estimatecontrasts/spmT_0028.nii'
thresholded_map, threshold = thresholding.map_threshold(original_fp, threshold=1e-3)
thresholded_fp = '/tmp/thresholded_map.nii.gz'
thresholded_map.to_filename(thresholded_fp) # Save it on disk
Explanation: A quick recipe to extract clusters from thresholded SPMt maps and turn them into a map of ROIs
First threshold your SPMt map
End of explanation
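For completeness, a minimal import cell covering everything this recipe calls might look like the following. The module providing map_threshold is an assumption (nistats at the time of writing, nilearn.glm in later releases), and plot_regions comes from the author's own roicollect.py script loaded further down with %run.
import os, json
import os.path as osp
from glob import glob
import numpy as np
import pandas as pd
import nibabel as nib
from nilearn import image, plotting
from nistats import thresholding  # assumed source of map_threshold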
roi_dir = '/home/grg/spm/ROIapoE/ROI_DARTEL/csf5/'
rois_fp = [e for e in glob(osp.join(roi_dir, '*.nii.gz')) if not '_cleaned' in e and not 'rois' in e]
print rois_fp
print len(rois_fp), 'files found.'
Explanation: Define a folder containing rough hand-drawn ROIs over the clusters
The names given are only approximate, reflecting the rough location of each cluster.
End of explanation
thresholded_img = np.asarray(nib.load(thresholded_fp).dataobj)
for roi_fp in rois_fp:
print roi_fp
roi = np.array(image.load_img(roi_fp).dataobj)
roi[thresholded_img==0] = 0
img = image.new_img_like(roi_fp, roi)
plotting.plot_roi(img)
img.to_filename(osp.splitext(roi_fp)[0] + '_cleaned.nii.gz')
Explanation: Take each hand-drawn ROI and generate a cleaned version of it using the intersection with the thresholded map
End of explanation
rois_fp = '/tmp/rois2.nii.gz' # File where the ROIs will be collected
roi_fps = glob(osp.join(roi_dir, '*_cleaned.nii.gz'))
print roi_fps
print len(roi_fps), 'cleaned ROIs found'
img = np.asarray(image.load_img(roi_fps[0]).dataobj)
for i in range(1, len(roi_fps)):
new_img = np.asarray(image.load_img(roi_fps[i]).dataobj)
img[new_img!=0] = i+1
finalimg = image.new_img_like(roi_fps[0], img)
finalimg.to_filename(rois_fp)
plotting.plot_roi(finalimg)
Explanation: Collect all the cleaned ROIs in a single Nifti
End of explanation
roivalues_wd = '/tmp/roivalues_csf.5' # Folder where the files containing the ROI values will be stored
data_wd = '/home/grg/dartel_csf.5/' # Folder containing the images over which the ROI values will be extracted
subjects = json.load(open(osp.join('/home/grg/spm', 'data', 'subjects.json'))) # List of subjects
# Load the collection of ROIs
rois = np.asarray(nib.load(rois_fp).dataobj)
nb_roi = len(np.unique(rois)) - 1
print nb_roi, 'regions - ', len(subjects), 'subjects'
# Iterate over subjects
for s in subjects:
try:
mdfp = glob(osp.join(data_wd, 'rswr%s*.nii'%s))[0]
# Build the command and run it
cmd = 'AimsRoiFeatures -i %s -s %s -o %s'%(rois_fp, mdfp, osp.join(roivalues_wd, '%s_stats.csv'%s))
print cmd
os.system(cmd)
except Exception as e:
print s, e
Explanation: Compile values from the created ROIs
End of explanation
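AimsRoiFeatures is part of the BrainVISA/AIMS toolbox. If it is not available, a rough Python-only equivalent (mean value per ROI) can be sketched with nilearn as below; the output file name is hypothetical and the statistics are limited to the mean.
from nilearn.input_data import NiftiLabelsMasker
masker = NiftiLabelsMasker(labels_img=rois_fp)  # one label per cleaned ROI
for s in subjects:
    try:
        mdfp = glob(osp.join(data_wd, 'rswr%s*.nii'%s))[0]
        roi_means = masker.fit_transform(mdfp)[0]  # mean image value inside each ROI
        np.savetxt(osp.join(roivalues_wd, '%s_means.csv'%s), roi_means, delimiter=',')
    except Exception as e:
        print s, e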
%run /home/grg/git/alfa/roicollect.py
%matplotlib inline
from IPython.display import display
data = pd.read_excel('/tmp/covariates.xls') # a table containing ApoE group, gender, educational years, ventricles
data['subject'] = data.index # Adding subject as an extra column
display(data.head())
regions = [0,1] #list(np.unique(rois))
regions.remove(0)
plot_regions(data, regions, src=roivalues_wd)
Explanation: Generate a plot with these ROI values
With relation to age, grouping subjects by genotypes, and correcting values for covariates
End of explanation |
518 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CS446/546 - Class Session 19 - Correlation network
In this class session we are going to analyze gene expression data from a human bladder cancer cohort, using python. We will load a data matrix of expression measurements of 4,473 genes in 414 different bladder cancer samples. These genes have been selected because they are differentially expressed between normal bladder and bladder cancer (thus more likely to have a function in bladder cancer specifically), but the columns in the data matrix are restricted to bladder cancer samples (not normal bladder) because we want to obtain a network representing variation across cancers. The measurements in the matrix have already been normalized to account for inter-sample heterogeneity and then log2 transformed. Our job is to compute Pearson correlation coefficients between all pairs of genes, obtain Fisher-transformed z-scores for all pairs of genes, test each pair of genes for significance of the z score, adjust for multiple hypothesis testing, filter to eliminate any pair for which R < 0.75 or Padj > 0.01, load the graph into an igraph.Graph object, and plot the degree distribution on log-log scale. We will then answer two questions
Step1: Using pandas.read_csv, load the tab-delimited text file of gene expression measurements (rows correspond to genes, columns correspond to bladder tumor samples), into a data frame gene_matrix_for_network_df.
Step2: Use the pandas.DataFrame.as_matrix method to make a matrix gene_matrix_for_network. Print out the dimensions of the matrix, by accessing its shape variable
Step3: Use del to delete the data frame, since we no longer need it (save memory)
Look at the online help for the numpy.corrcoef function, using help(numpy.corrcoef). When you pass a single argument x which is a 2D "array" (i.e., a matrix), by default does corrcoef compute coefficients for pairs of rows, or pairs of columns?
Step4: Compute the 4,473 x 4,473 matrix of gene-gene Pearson correlation coefficients, using numpy.corrcoef (this function treats each row as a variable, so you don't have to do any transposing of the matrix, unlike the situation in R).
Step5: Look at the online help for numpy.fill_diagonal. Does it return the modified matrix or modify the matrix argument in place?
Step6: Set the diagonal elements of the matrix to zero, using numpy.fill_diagonal
Step7: Look at the online help for numpy.multiply. Does it do element-wise multiplication or matrix multiplication?
Step8: Look at the online help for numpy.tri. Does it modify a matrix argument in-place or return a matrix? What is in the matrix that it returns?
Step9: Set the upper-triangle of the matrix to zero, using numpy.multiply and numpy.tri
Step10: Using numpy.where, get a tuple of two numpy.arrays containing the row/col indices of the entries of the matrix for which R >= 0.75. Use array indexing to obtain the R values for these matrix entries, as a numpy array cor_coeff_values_above_thresh.
Step11: Refer to Eq. (13.5) in the assigned reading for today's class (p9 of the PDF). Obtain a numpy array of the correlation coefficients that exceeded 0.75, and Fisher-transform the correlation coefficient values to get a vector z_scores of z scores. Each of these z scores will correspond to an edge in the network, unless the absolute z score is too small such that we can't exclude the null hypothesis that the corresponding two genes' expression values are independent (we will perform that check in the next step).
Step12: Delete the correlation matrix object in order to save memory (we won't need it from here on out).
Assume that under the null hypothesis that two genes are independent, then sqrt(M-3)z for the pair of genes is an independent sample from the normal distribution with zero mean and unit variance, where M is the number of samples used to compute the Pearson correlation coefficient (i.e., M = 414). For each entry in z_scores compute a P value as the area under two tails of the normal distribution N(x), where the two tails are x < -sqrt(M-3)z and x > sqrt(M-3)z. (You'll know you are doing it right if z=0 means you get a P value of 1). You will want to use the functions numpy.abs and scipy.stats.norm.cdf, as well as the math.sqrt function (in order to compute the square root).
Step13: Adjust the P values for multiple hypothesis testing, using the statsmodels.sandbox.stats.multicomp.multipletests function with method="fdr_bh"
Step14: Verify that we don't need to drop any entries due to the adjusted P value not being small enough (use numpy.where and len); this should produce zero since we have M=414 samples per gene.
Step15: Read the online help for the function zip. What does it do?
Step16: We want to pass our tuple of numpy arrays containing row and column indices to Graph.TupleList; however, Graph.TupleList accepts a tuple list, not a tuple of numpy arrays. So we need to make a tuple list, using zip
Step17: Make an undirected graph from the row/column indices of the (upper-triangle) gene pairs whose correlations were above our threshold, using igraph.Graph.TupleList. Print a summary of the network, as a sanity check, using the igraph.Graph.summary method.
Step18: Plot the degree distribution on log-log scale; does it appear to be scale-free? | Python Code:
import pandas
import scipy.stats
import matplotlib
import pylab
import numpy
import statsmodels.sandbox.stats.multicomp
import igraph
import math
Explanation: CS446/546 - Class Session 19 - Correlation network
In this class session we are going to analyze gene expression data from a human bladder cancer cohort, using python. We will load a data matrix of expression measurements of 4,473 genes in 414 different bladder cancer samples. These genes have been selected because they are differentially expressed between normal bladder and bladder cancer (thus more likely to have a function in bladder cancer specifically), but the columns in the data matrix are restricted to bladder cancer samples (not normal bladder) because we want to obtain a network representing variation across cancers. The measurements in the matrix have already been normalized to account for inter-sample heterogeneity and then log2 transformed. Our job is to compute Pearson correlation coefficients between all pairs of genes, obtain Fisher-transformed z-scores for all pairs of genes, test each pair of genes for significance of the z score, adjust for multiple hypothesis testing, filter to eliminate any pair for which R < 0.75 or Padj > 0.01, load the graph into an igraph.Graph object, and plot the degree distribution on log-log scale. We will then answer two questions: (1) does the network look to be scale-free? and (2) what is its best-fit scaling exponent?
We will start by importing all of the modules that we will need for this notebook. Note the difference in language-design philosophy between R (which requires loading one package for this analysis) and python (where we have to load seven modules). Python keeps its core minimal, whereas R has a lot of statistical and plotting functions in the base language (or in packages that are loaded by default).
End of explanation
gene_matrix_for_network_df =
Explanation: Using pandas.read_csv, load the tab-delimited text file of gene expression measurements (rows correspond to genes, columns correspond to bladder tumor samples), into a data frame gene_matrix_for_network_df.
End of explanation
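A possible completion is sketched below; the file name is hypothetical (use the path handed out in class), and index_col=0 assumes the gene identifiers sit in the first column.
gene_matrix_for_network_df = pandas.read_csv("bladder_cancer_expression.txt", sep="\t", index_col=0)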
gene_matrix_for_network =
gene_matrix_for_network.shape
Explanation: Use the pandas.DataFrame.as_matrix method to make a matrix gene_matrix_for_network. Print out the dimensions of the matrix, by accessing its shape variable
End of explanation
help(numpy.corrcoef)
Explanation: Use del to delete the data frame, since we no longer need it (save memory)
Look at the online help for the numpy.corrcoef function, using help(numpy.corrcoef). When you pass a single argument x which is a 2D "array" (i.e., a matrix), by default does corrcoef compute coefficients for pairs of rows, or pairs of columns?
End of explanation
gene_matrix_for_network_cor =
Explanation: Compute the 4,473 x 4,473 matrix of gene-gene Pearson correlation coefficients, using numpy.corrcoef (this function treats each row as a variable, so you don't have to do any transposing of the matrix, unlike the situation in R).
End of explanation
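A possible completion for the cell above:
gene_matrix_for_network_cor = numpy.corrcoef(gene_matrix_for_network)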
help(numpy.fill_diagonal)
Explanation: Look at the online help for numpy.fill_diagonal. Does it return the modified matrix or modify the matrix argument in place?
End of explanation
numpy.fill_diagonal( ## fill in here ## )
Explanation: Set the diagonal elements of the matrix to zero, using numpy.fill_diagonal
End of explanation
help(numpy.multiply)
Explanation: Look at the online help for numpy.multiply. Does it do element-wise multiplication or matrix multiplication?
End of explanation
help(numpy.tri)
Explanation: Look at the online help for numpy.tri. Does it modify a matrix argument in-place or return a matrix? What is in the matrix that it returns?
End of explanation
gene_matrix_for_network_cor = numpy.multiply(gene_matrix_for_network_cor, numpy.tri(*gene_matrix_for_network_cor.shape))
Explanation: Set the upper-triangle of the matrix to zero, using numpy.multiply and numpy.tri:
End of explanation
inds_correl_above_thresh =
cor_coeff_values_above_thresh =
Explanation: Using numpy.where, get a tuple of two numpy.arrays containing the row/col indices of the entries of the matrix for which R >= 0.75. Use array indexing to obtain the R values for these matrix entries, as a numpy array cor_coeff_values_above_thresh.
End of explanation
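One way to fill in the cell above:
inds_correl_above_thresh = numpy.where(gene_matrix_for_network_cor >= 0.75)
cor_coeff_values_above_thresh = gene_matrix_for_network_cor[inds_correl_above_thresh]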
z_scores =
Explanation: Refer to Eq. (13.5) in the assigned reading for today's class (p9 of the PDF). Obtain a numpy array of the correlation coefficients that exceeded 0.75, and Fisher-transform the correlation coefficient values to get a vector z_scores of z scores. Each of these z scores will correspond to an edge in the network, unless the absolute z score is too small such that we can't exclude the null hypothesis that the corresponding two genes' expression values are independent (we will perform that check in the next step).
End of explanation
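The Fisher transformation z = 0.5 ln((1+r)/(1-r)) is available as numpy.arctanh, so one possible completion of the cell above is:
z_scores = numpy.arctanh(cor_coeff_values_above_thresh)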
M = gene_matrix_for_network.shape[1]
P_values =
Explanation: Delete the correlation matrix object in order to save memory (we won't need it from here on out).
Assume that under the null hypothesis that two genes are independent, then sqrt(M-3)z for the pair of genes is an independent sample from the normal distribution with zero mean and unit variance, where M is the number of samples used to compute the Pearson correlation coefficient (i.e., M = 414). For each entry in z_scores compute a P value as the area under two tails of the normal distribution N(x), where the two tails are x < -sqrt(M-3)z and x > sqrt(M-3)z. (You'll know you are doing it right if z=0 means you get a P value of 1). You will want to use the functions numpy.abs and scipy.stats.norm.cdf, as well as the math.sqrt function (in order to compute the square root).
End of explanation
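Following the hint above, the two-tailed P values can be computed as in this sketch (z = 0 gives P = 1, as expected):
P_values = 2 * scipy.stats.norm.cdf(-numpy.abs(z_scores) * math.sqrt(M - 3))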
P_values_adj = statsmodels.sandbox.stats.multicomp.multipletests(P_values, method="fdr_bh")[1]
Explanation: Adjust the P values for multiple hypothesis testing, using the statsmodels.sandbox.stats.multicomp.multipletests function with method="fdr_bh"
End of explanation
len(numpy.where(P_values_adj >= 0.01)[0])
Explanation: Verify that we don't need to drop any entries due to the adjusted P value not being small enough (use numpy.where and len); this should produce zero since we have M=414 samples per gene.
End of explanation
help(zip)
Explanation: Read the online help for the function zip. What does it do?
End of explanation
row_col_inds_tuple_list =
## [note this can be done more elegantly using the unary "*" operator:
## row_col_inds_tuple_list = zip(*inds_correl_above_thresh)
## see how we only need to type the variable name once, if we use the unary "*" ]
Explanation: We want to pass our tuple of numpy arrays containing row and column indices to Graph.TupleList; however, Graph.TupleList accepts a tuple list, not a tuple of numpy arrays. So we need to make a tuple list, using zip:
End of explanation
final_network =
final_network.summary()
Explanation: Make an undirected graph from the row/column indices of the (upper-triangle) gene pairs whose correlations were above our threshold, using igraph.Graph.TupleList. Print a summary of the network, as a sanity check, using the igraph.Graph.summary method.
End of explanation
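A possible completion of the cell above using igraph:
final_network = igraph.Graph.TupleList(row_col_inds_tuple_list, directed=False)
final_network.summary()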
degree_dist =
Explanation: Plot the degree distribution on log-log scale; does it appear to be scale-free?
End of explanation |
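One way to plot the degree distribution on log-log axes; an approximately straight line suggests a scale-free network, and the optional power_law_fit call estimates the scaling exponent.
degrees = numpy.array(final_network.degree())
values, counts = numpy.unique(degrees, return_counts=True)
pylab.loglog(values, counts, "o")
pylab.xlabel("degree k")
pylab.ylabel("number of vertices")
print(igraph.power_law_fit(degrees).alpha)  # optional: best-fit scaling exponent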
519 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task
Step1: Define path to data
Step2: A few basic libraries that we'll need for the initial exercises
Step3: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
Step4: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline
Step5: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
i.e. the Vgg() model returns, for each image, a probability for every one of the 1,000+ ImageNet categories. However, 'cat' and 'dog' are not themselves ImageNet categories; ImageNet's categories are more fine-grained (individual dog and cat breeds).
First, create a Vgg16 object
Step6: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder
Step7: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
Step8: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array containing just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
Step9: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
Step10: The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four
Step11: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
Step12: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
Step13: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
Step14: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras
Step15: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
Step16: Here's a few examples of the categories we just imported
Step17: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition
Step18: ...and here's the fully-connected definition.
Step19: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software that expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model
Step20: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
Step21: We'll learn about what these different blocks do later in the course. For now, it's enough to know that
Step22: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
Step23: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
Step24: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data
Step25: From here we can use exactly the same steps as before to look at predictions from the model.
Step26: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label. | Python Code:
%matplotlib inline
Explanation: Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task: 'Dogs vs Cats'
We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as of 2013!
Basic setup
There isn't too much to do to get started - just a few simple configuration steps.
This shows plots in the web page itself - we always want to use this when using a Jupyter notebook:
End of explanation
path = "data/dogscats/sample/"
#path = "data/dogscats"
Explanation: Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
End of explanation
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
Explanation: A few basic libraries that we'll need for the initial exercises:
End of explanation
from importlib import reload
import utils; reload(utils)
from utils import plots
# reload is handy if you change something in the file
Explanation: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
End of explanation
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=64
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
Explanation: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline: state of the art custom model in 7 lines of code
Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
End of explanation
vgg = Vgg16()
Explanation: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
i.e. the Vgg() model returns, for each image, a probability for every one of the 1,000+ ImageNet categories. However, 'cat' and 'dog' are not themselves ImageNet categories; ImageNet's categories are more fine-grained (individual dog and cat breeds).
First, create a Vgg16 object:
End of explanation
batches = vgg.get_batches(path+'train', batch_size=4)
Explanation: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder:
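For reference, the fixed layout Keras expects under that path is one sub-directory per category, with the directory names serving as the labels -- a rough sketch (the file names are only illustrative):
train/cats/cat.1.jpg, cat.2.jpg, ...
train/dogs/dog.1.jpg, dog.2.jpg, ...
valid/cats/..., valid/dogs/...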
End of explanation
imgs,labels = next(batches)
Explanation: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
End of explanation
plots(imgs, titles=labels)
Explanation: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array containing just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
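A tiny illustration of the idea (toy labels, not this lesson's data): integer class ids can be turned into one hot rows with plain numpy.
import numpy as np
labels = [0, 1, 2]            # e.g. cat=0, dog=1, kangaroo=2
one_hot = np.eye(3)[labels]   # each row contains a single 1
print(one_hot)                # [[1. 0. 0.], [0. 1. 0.], [0. 0. 1.]]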
End of explanation
vgg.predict(imgs, True)
Explanation: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
End of explanation
vgg.classes[:4]
Explanation: The category indexes are based on the ordering of categories used in the VGG model - e.g. here are the first four:
End of explanation
batch_size=64
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
Explanation: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
End of explanation
vgg.finetune(batches)
Explanation: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
End of explanation
vgg.fit(batches, val_batches, nb_epoch=1)
Explanation: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
End of explanation
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
Explanation: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras:
End of explanation
FILES_PATH = 'http://files.fast.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f: class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
Explanation: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
End of explanation
classes[:5]
Explanation: Here's a few examples of the categories we just imported:
End of explanation
def ConvBlock(layers, model, filters):
for i in range(layers):
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(filters, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
Explanation: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
End of explanation
def FCBlock(model):
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
Explanation: ...and here's the fully-connected definition.
End of explanation
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
x = x - vgg_mean # subtract mean
return x[:, ::-1] # reverse axis bgr->rgb
Explanation: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
End of explanation
def VGG_16():
model = Sequential()
model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
ConvBlock(2, model, 64)
ConvBlock(2, model, 128)
ConvBlock(3, model, 256)
ConvBlock(3, model, 512)
ConvBlock(3, model, 512)
model.add(Flatten())
FCBlock(model)
FCBlock(model)
model.add(Dense(1000, activation='softmax'))
return model
Explanation: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
End of explanation
model = VGG_16()
Explanation: We'll learn about what these different blocks do later in the course. For now, it's enough to know that:
Convolution layers are for finding patterns in images
Dense (fully connected) layers are for combining patterns across an image
Now that we've defined the architecture, we can create the model like any python object:
End of explanation
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
Explanation: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
End of explanation
batch_size = 4
Explanation: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
End of explanation
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
batch_size=batch_size, class_mode='categorical'):
return gen.flow_from_directory(path+dirname, target_size=(224,224),
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
Explanation: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
End of explanation
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
Explanation: From here we can use exactly the same steps as before to look at predictions from the model.
End of explanation
def pred_batch(imgs):
preds = model.predict(imgs)
idxs = np.argmax(preds, axis=1)
print('Shape: {}'.format(preds.shape))
print('First 5 classes: {}'.format(classes[:5]))
print('First 5 probabilities: {}\n'.format(preds[0, :5]))
print('Predictions prob/class: ')
for i in range(len(idxs)):
idx = idxs[i]
print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))
pred_batch(imgs)
Explanation: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label.
End of explanation |
520 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matrix factorization is a very interesting area of machine learning research. Formulating a problem as a 2D matrix $X$ to be decomposed into multiple matrices, which combine to return an approximation of $X$, can lead to state of the art results for many interesting problems. This core concept is the focus of compressive sensing, matrix completion, sparse coding, robust PCA, dictionary learning, and many other algorithms. One major website which shows many different types of matrix decomposition algorithms is the Matrix Factorization Jungle, run by Igor Carron. There has been a heavy focus on random projections in recent algorithms, which can often lead to increased stability and computationally efficient solutions.
<!-- TEASER_END -->
Below is a link to the GoDec algorithm output, as applied to the "Hall" video (shown below) found in this zip file, which is a surveillance tape taken from a mall. Using the GoDec algorithm, the background is almost completely subtracted from the noisy elements of people walking, while still capturing periodic background elements as part of the background. I have written code for both the GoDec and Robust PCA algorithms in numpy based on their Matlab equivalents. There are many datasets which can be found here, and we will set up a simple download function for ease-of-access. Special thanks to @kuantkid for the PyRPCA repo, which was the inspiration to start and extend this work, and especially the idea of creating a demo video from PNGs which is PRETTY. DANG. AWESOME.
Interstellar Overdrive
Step1: First we want to download a video, so that we can compare the algorithmic result against the original video. The file is downloaded, if it does not already exist in the working directory. Next, it will create a directory of the same name, and unzip the file contents (Campus.zip to Campus/filename).
Step2: The code below will read in all the .bmp images downloaded and unzipped from the website, as well as converting to grayscale, scaling the result between 0 and 1. Eventually, I plan to do a "full-color" version of this testing, but for now the greyscale will have to suffice.
Step4: Robust PCA
Robust Principal Component Analysis (PCA) is an extension of PCA. Rather than attempting to solve $X = L$, where $L$ is typically a low-rank approximation ($N \times M$, vs. $N \times P$, $M < P$), Robust PCA solves the factorization problem $X = L + S$, where $L$ is a low-rank approximation, and $S$ is a sparse component. By separating the factorization into two separate matrix components, Robust PCA makes a much better low-rank estimate $L$ on many problems.
There are a variety of algorithms to solve this optimization problem. The code below is an implementation of the Inexact Augmented Lagrangian Multiplier algorithm for Robust PCA which is identical to the equivalent MATLAB code (download), or as near as I could make it. The functionality seems equivalent, and for relevant details please see the paper. This algorithm was chosen because according to the timing results at the bottom of this page, it was both the fastest and most accurate of the formulas listed. Though it appears to be fairly slow in our testing, it is fully believable that this is an implementation issue, since this code has not been specifically optimized for numpy. Due to this limitation, we clip the algorithm to the first few frames to save time.
Step5: GoDec
The code below contains an implementation of the GoDec algorithm, which attempts to solve the problem $X = L + S + G$, with $L$ low-rank, $S$ sparse, and $G$ as a component of Gaussian noise. By allowing the decomposition to expand to 3 matrix components, the algorithm is able to more effectively differentiate the sparse component from the low-rank.
Step6: A Momentary Lapse of Reason
Now it is time to do something a little unreasonable - we can actually take all of this data, reshape it into a series of images, and plot it as a video inside the IPython notebook! The first step is to generate the frames for the video as .png files, as shown below.
Step7: Echoes
The code below will display HTML5 video for each of the videos generated in the previous step, and embed it in the IPython notebook. There are "echoes" of people, which are much more pronounced in the Robust PCA video than the GoDec version, likely due to the increased flexibility of an independent Gaussian term. Overall, the effect is pretty cool though not mathematically as good as the GoDec result.
Step8: If these videos freeze for some reason, just hit refresh and they should start playing. | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('JgfK46RA8XY')
Explanation: Matrix factorization is a very interesting area of machine learning research. Formulating a problem as a 2D matrix $X$ to be decomposed into multiple matrices, which combine to return an approximation of $X$, can lead to state of the art results for many interesting problems. This core concept is the focus of compressive sensing, matrix completion, sparse coding, robust PCA, dictionary learning, and many other algorithms. One major website which shows many different types of matrix decomposition algorithms is the Matrix Factorization Jungle, run by Igor Carron. There has been a heavy focus on random projections in recent algorithms, which can often lead to increased stability and computationally efficient solutions.
<!-- TEASER_END -->
Below is a link to the GoDec algorithm output, as applied to the "Hall" video (shown below) found in this zip file, which is a surveillance tape taken from a mall. Using the GoDec algorithm, the background is almost completely subtracted from the noisy elements of people walking, while still capturing periodic background elements as part of the background. I have written code for both the GoDec and Robust PCA algorithms in numpy based on their Matlab equivalents. There are many datasets which can be found here, and we will set up a simple download function for ease-of-access. Special thanks to @kuantkid for the PyRPCA repo, which was the inspiration to start and extend this work, and especially the idea of creating a demo video from PNGs which is PRETTY. DANG. AWESOME.
Interstellar Overdrive
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
try:
from urllib2 import urlopen
except ImportError:
from urllib.request import urlopen
from scipy.io import loadmat, savemat
import os
ext = {"water":'WaterSurface.zip',
"fountain":'Fountain.zip',
"campus":'Campus.zip',
"escalator": 'Escalator.zip',
"curtain": 'Curtain.zip',
"lobby": 'Lobby.zip',
"mall": 'ShoppingMall.zip',
"hall": 'hall.zip',
"bootstrap": 'Bootstrap.zip'}
example = "mall"
def progress_bar_downloader(url, fname, progress_update_every=5):
#from http://stackoverflow.com/questions/22676/how-do-i-download-a-file-over-http-using-python/22776#22776
u = urlopen(url)
f = open(fname, 'wb')
meta = u.info()
file_size = int(meta.get("Content-Length"))
print("Downloading: %s Bytes: %s" % (fname, file_size))
file_size_dl = 0
block_sz = 8192
p = 0
while True:
buffer = u.read(block_sz)
if not buffer:
break
file_size_dl += len(buffer)
f.write(buffer)
if (file_size_dl * 100. / file_size) > p:
status = r"%10d [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
print(status)
p += progress_update_every
f.close()
def get_video_clip(d):
#Download files from http://perception.i2r.a-star.edu.sg/bk_model/bk_index.html
if os.path.exists('./' + d):
print('Video file %s already downloaded, continuing' % d)
return
else:
print('Video file %s not found, downloading' % d)
progress_bar_downloader(r'http://perception.i2r.a-star.edu.sg/BK_Model_TestData/' + d, d)
def bname(x): return x.split('.')[0]
get_video_clip(ext[example])
if not os.path.exists('./' + bname(ext[example])):
os.makedirs(bname(ext[example]))
os.system('unzip ' + ext[example] + ' -d ' + bname(ext[example]))
Explanation: First we want to download a video, so that we can compare the algorithmic result against the original video. The file is downloaded, if it does not already exist in the working directory. Next, it will create a directory of the same name, and unzip the file contents (Campus.zip to Campus/filename).
End of explanation
from scipy import misc
import numpy as np
from glob import glob
def rgb2gray(rgb):
r, g, b = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2]
gray = 0.2989 * r + 0.5870 * g + 0.1140 * b
return gray / 255.
fdir = bname(ext[example])
names = sorted(glob(fdir + "/*.bmp"))
d1, d2, channels = misc.imread(names[0]).shape
d1 = 128
d2 = 160
num = len(names)
X = np.zeros((d1, d2, num))
for n, i in enumerate(names):
X[:, :, n] = misc.imresize(rgb2gray(misc.imread(i).astype(np.double)) / 255., (d1, d2))
X = X.reshape(d1 * d2, num)
clip = 100
print(X.shape)
print(d1)
print(d2)
Explanation: The code below will read in all the .bmp images downloaded and unzipped from the website, as well as converting to grayscale, scaling the result between 0 and 1. Eventually, I plan to do a "full-color" version of this testing, but for now the greyscale will have to suffice.
End of explanation
import numpy as np
from numpy.linalg import norm, svd
def inexact_augmented_lagrange_multiplier(X, lmbda=.01, tol=1e-3,
maxiter=100, verbose=True):
    """Inexact Augmented Lagrange Multiplier"""
Y = X
norm_two = norm(Y.ravel(), 2)
norm_inf = norm(Y.ravel(), np.inf) / lmbda
dual_norm = np.max([norm_two, norm_inf])
Y = Y / dual_norm
A = np.zeros(Y.shape)
E = np.zeros(Y.shape)
dnorm = norm(X, 'fro')
mu = 1.25 / norm_two
rho = 1.5
sv = 10.
n = Y.shape[0]
itr = 0
while True:
Eraw = X - A + (1 / mu) * Y
Eupdate = np.maximum(Eraw - lmbda / mu, 0) + np.minimum(Eraw + lmbda / mu, 0)
U, S, V = svd(X - Eupdate + (1 / mu) * Y, full_matrices=False)
svp = (S > 1 / mu).shape[0]
if svp < sv:
sv = np.min([svp + 1, n])
else:
sv = np.min([svp + round(.05 * n), n])
Aupdate = np.dot(np.dot(U[:, :svp], np.diag(S[:svp] - 1 / mu)), V[:svp, :])
A = Aupdate
E = Eupdate
Z = X - A - E
Y = Y + mu * Z
mu = np.min([mu * rho, mu * 1e7])
itr += 1
if ((norm(Z, 'fro') / dnorm) < tol) or (itr >= maxiter):
break
if verbose:
print("Finished at iteration %d" % (itr))
return A, E
sz = clip
A, E = inexact_augmented_lagrange_multiplier(X[:, :sz])
A = A.reshape(d1, d2, sz) * 255.
E = E.reshape(d1, d2, sz) * 255.
#Refer to them by position desired for video demo later
savemat("./IALM_background_subtraction.mat", {"1": A, "2": E})
print("RPCA complete")
Explanation: Robust PCA
Robust Principal Component Analysis (PCA) is an extension of PCA. Rather than attempting to solve $X = L$, where $L$ is typically a low-rank approximation ($N \times M$, vs. $N \times P$, $M < P$), Robust PCA solves the factorization problem $X = L + S$, where $L$ is a low-rank approximation, and $S$ is a sparse component. By separating the factorization into two separate matrix components, Robust PCA makes a much better low-rank estimate $L$ on many problems.
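Concretely, the convex program being solved (often called principal component pursuit) is $\min_{L,S} \|L\|_* + \lambda \|S\|_1$ subject to $L + S = X$, where $\|\cdot\|_*$ is the nuclear norm (sum of singular values), $\|\cdot\|_1$ is the elementwise $\ell_1$ norm, and the lmbda argument of the implementation plays the role of $\lambda$.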
There are a variety of algorithms to solve this optimization problem. The code below is an implementation of the Inexact Augmented Lagrangian Multiplier algorithm for Robust PCA which is identical to the equivalent MATLAB code (download), or as near as I could make it. The functionality seems equivalent, and for relevant details please see the paper. This algorithm was chosen because according to the timing results at the bottom of this page, it was both the fastest and most accurate of the formulas listed. Though it appears to be fairly slow in our testing, it is fully believable that this is an implementation issue, since this code has not been specifically optimized for numpy. Due to this limitation, we clip the algorithm to the first few frames to save time.
End of explanation
import numpy as np
from numpy.linalg import norm
from scipy.linalg import qr
def wthresh(a, thresh):
#Soft wavelet threshold
res = np.abs(a) - thresh
return np.sign(a) * ((res > 0) * res)
#Default threshold of .03 is assumed to be for input in the range 0-1...
#original matlab had 8 out of 255, which is about .03 scaled to 0-1 range
def go_dec(X, thresh=.03, rank=2, power=0, tol=1e-3,
max_iter=100, random_seed=0, verbose=True):
m, n = X.shape
if m < n:
X = X.T
m, n = X.shape
L = X
S = np.zeros(L.shape)
itr = 0
random_state = np.random.RandomState(random_seed)
while True:
Y2 = random_state.randn(n, rank)
for i in range(power + 1):
Y1 = np.dot(L, Y2)
Y2 = np.dot(L.T, Y1);
Q, R = qr(Y2, mode='economic')
L_new = np.dot(np.dot(L, Q), Q.T)
T = L - L_new + S
L = L_new
S = wthresh(T, thresh)
T -= S
err = norm(T.ravel(), 2)
if (err < tol) or (itr >= max_iter):
break
L += T
itr += 1
#Is this even useful in soft GoDec? May be a display issue...
G = X - L - S
if m < n:
L = L.T
S = S.T
G = G.T
if verbose:
print("Finished at iteration %d" % (itr))
return L, S, G
sz = clip
L, S, G = go_dec(X[:, :sz])
L = L.reshape(d1, d2, sz) * 255.
S = S.reshape(d1, d2, sz) * 255.
G = G.reshape(d1, d2, sz) * 255.
savemat("./GoDec_background_subtraction.mat", {"1": L, "2": S, "3": G, })
print("GoDec complete")
Explanation: GoDec
The code below contains an implementation of the GoDec algorithm, which attempts to solve the problem $X = L + S + G$, with $L$ low-rank, $S$ sparse, and $G$ as a component of Gaussian noise. By allowing the decomposition to expand to 3 matrix components, the algorithm is able to more effectively differentiate the sparse component from the low-rank.
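For reference, the original GoDec formulation minimizes $\|X - L - S\|_F^2$ subject to $\mathrm{rank}(L) \le r$ and $\mathrm{card}(S) \le k$, alternating a randomized low-rank update of $L$ with an update of $S$; the 'soft' variant implemented here swaps the cardinality constraint for the soft threshold thresh, which is what the wthresh() step applies.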
End of explanation
import os
import sys
import matplotlib.pyplot as plt
from scipy.io import loadmat
import numpy as np
from matplotlib import cm
import matplotlib
#demo inspired by / stolen from @kuantkid on Github - nice work!
def mlabdefaults():
matplotlib.rcParams['lines.linewidth'] = 1.5
matplotlib.rcParams['savefig.dpi'] = 300
matplotlib.rcParams['font.size'] = 22
matplotlib.rcParams['font.family'] = "Times New Roman"
matplotlib.rcParams['legend.fontsize'] = "small"
matplotlib.rcParams['legend.fancybox'] = True
matplotlib.rcParams['lines.markersize'] = 10
matplotlib.rcParams['figure.figsize'] = 8, 5.6
matplotlib.rcParams['legend.labelspacing'] = 0.1
matplotlib.rcParams['legend.borderpad'] = 0.1
matplotlib.rcParams['legend.borderaxespad'] = 0.2
matplotlib.rcParams['font.monospace'] = "Courier New"
matplotlib.rcParams['savefig.dpi'] = 200
def make_video(alg, cache_path='/tmp/matrix_dec_tmp'):
name = alg
if not os.path.exists(cache_path):
os.mkdir(cache_path)
#If you generate a big
if not os.path.exists('%s/%s_tmp'%(cache_path, name)):
os.mkdir("%s/%s_tmp"%(cache_path, name))
mat = loadmat('./%s_background_subtraction.mat'%(name))
org = X.reshape(d1, d2, X.shape[1]) * 255.
fig = plt.figure()
ax = fig.add_subplot(111)
usable = [x for x in sorted(mat.keys()) if "_" not in x][0]
sz = min(org.shape[2], mat[usable].shape[2])
for i in range(sz):
ax.cla()
ax.axis("off")
ax.imshow(np.hstack([mat[x][:, :, i] for x in sorted(mat.keys()) if "_" not in x] + \
[org[:, :, i]]), cm.gray)
fname_ = '%s/%s_tmp/_tmp%03d.png'%(cache_path, name, i)
if (i % 25) == 0:
print('Completed frame', i, 'of', sz, 'for method', name)
fig.tight_layout()
fig.savefig(fname_, bbox_inches="tight")
#Write out an mp4 and webm video from the png files. -r 5 means 5 frames a second
#libx264 is h.264 encoding, -s 160x130 is the image size
#You may need to sudo apt-get install libavcodec
plt.close()
num_arrays = na = len([x for x in mat.keys() if "_" not in x])
cdims = (na * d1, d2)
cmd_h264 = "ffmpeg -y -r 10 -i '%s/%s_tmp/_tmp%%03d.png' -c:v libx264 " % (cache_path, name) + \
"-s %dx%d -preset ultrafast -pix_fmt yuv420p %s_animation.mp4" % (cdims[0], cdims[1], name)
cmd_vp8 = "ffmpeg -y -r 10 -i '%s/%s_tmp/_tmp%%03d.png' -c:v libvpx " % (cache_path, name) + \
"-s %dx%d -preset ultrafast -pix_fmt yuv420p %s_animation.webm" % (cdims[0], cdims[1], name)
os.system(cmd_h264)
os.system(cmd_vp8)
if __name__ == "__main__":
mlabdefaults()
all_methods = ['IALM', 'GoDec']
for name in all_methods:
make_video(name);
print("Background is generated from this file:", example)
Explanation: A Momentary Lapse of Reason
Now it is time to do something a little unreasonable - we can actually take all of this data, reshape it into a series of images, and plot it as a video inside the IPython notebook! The first step is to generate the frames for the video as .png files, as shown below.
End of explanation
from IPython.display import HTML
from base64 import b64encode
def html5_video(alg, frames):
#This *should* support all browsers...
framesz = 250
info = {"mp4": {"ext":"mp4", "encoded": '', "size":(frames * framesz, framesz)}}
html_output = []
for k in info.keys():
f = open("%s_animation.%s" % (alg, info[k]["ext"]), "rb").read()
encoded = b64encode(f).decode('ascii')
video_tag = '<video width="500" height="250" autoplay="autoplay" ' + \
'loop src="data:video/%s;base64,%s">' % (k, encoded)
html_output.append(video_tag)
return HTML(data=''.join(html_output))
Explanation: Echoes
The code below will display HTML5 video for each of the videos generated in the previous step, and embed it in the IPython notebook. There are "echoes" of people, which are much more pronounced in the Robust PCA video than the GoDec version, likely due to the increased flexibility of an independent Gaussian term. Overall, the effect is pretty cool though not mathematically as good as the GoDec result.
End of explanation
html5_video("IALM", 3)
html5_video("GoDec", 4)
Explanation: If these videos freeze for some reason, just hit refresh and they should start playing.
End of explanation |
521 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Code-HotSpots
Which files are changed, and how often?
Input
Read in Git version control system data.
Step1: Cleaning
Only evaluate production code.
Step2: Aggregation
Identify hotspots
Step3: Visualization
Show the top 10 hotspots. | Python Code:
from ozapfdis import git
log = git.log_numstat_existing("../../../dropover/")
log.head()
Explanation: Code-HotSpots
Which files are changed, and how often?
Input
Read in Git version control system data.
End of explanation
java_prod = log[log['file'].str.contains("backend/src/main/java/")].copy()
java_prod = java_prod[~java_prod['file'].str.contains("package-info.java")]
java_prod.head()
Explanation: Cleaning
Only evaluate production code.
End of explanation
hotspots = java_prod['file'].value_counts()
hotspots.head()
Explanation: Aggregation
Identify hotspots
End of explanation
hotspots.head(10).plot.barh();
Explanation: Visualization
Show the top 10 hotspots.
End of explanation |
522 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Basics at PyCAR2020
Let's search some text
You already know the components of programming. You have been exercising the reasoning programming relies on for your entire life, probably without even realizing it. Programming is just a way to take the logic you already use on a daily basis and express it in a way a computer can understand and act upon.
It's just learning how to write in a different language.
One very important disclaimer before we start doing just that
Step1: Now tell Python where the file is and to open it. The path to the file can be one variable
Step2: Now create a variable term_count containing the integer value of how many times we've seen it in the text. So far, that's zero
Step3: So any time we want to check to see how many times we've seen our search_term or check where our file_location is, we can use these variables instead of typing out the value each time!
If you forget what one of the variables is set to, you can print it out. (In Python 2.x, print was a statement, so the parentheses were optional; in Python 3.x, print() is a function, so the parentheses are required.) Let's also make a comment to remind us of what this variable does.
Step4: When it's on multiple lines, note the line number and collect all relevant line numbers
Step5: Remember that a string is just a series of characters, not a word or a sentence.
So you can represent those characters as lowercase or uppercase, or see whether the string starts with or ends with specific things.
Try to make our search_term lowercase
Step6: We've decided to standardize our strings for comparison by making them lowercase. Cool. Now we need to do the comparing. Our open file is ready to explore. And to do that, we'll need a for loop. The loop will assign each line to a variable on the fly, then reference that variable to do stuff we tell it to
Step7: We've read through the whole file, but our variable file_to_read still holds the open file. Let's close it explicitly, using a tool Python gives us on files
Step8: Now let's set some language so we can make our data more readable.
Step9: Now we can drop our variables into a sentence to help better make sense of our data
Step10: And how often was our term on the same line? Which lines?
Step11: Another way to analyze text is frequency of the words it contains. There may be insights there about what's important, or they may be terms you want to use in a FOIA request
Step12: Remember, we closed our file, so we'll need to open it again and set it to a variable. This is a time when making the file path its own variable saves us the trouble of finding it again
Step13: Once again, we'll need to loop through the lines in the file. This time, we care about inspecting each individual word -- not just whether a term is somewhere in the line
Step14: Set up our baseline variables -- where we'll want to store the top values we're looking for. We'll need one variable for most_common_word, set to None, and another for highest_count, set to zero.
Step15: Now we have a dictionary of every word in The Iliad. And we can spot-check the number of times any word we'd like has appeared by using the word as the key to access that (just remember we made all the keys lowercase) | Python Code:
# This could just as easily be 'horse' or 'Helen' or 'Agamemnon' or `sand` -- or 'Trojan'
search_term = 'Achilles'
Explanation: Python Basics at PyCAR2020
Let's search some text
You already know the components of programming. You have been exercising the reasoning programming relies on for your entire life, probably without even realizing it. Programming is just a way to take the logic you already use on a daily basis and express it in a way a computer can understand and act upon.
It's just learning how to write in a different language.
One very important disclaimer before we start doing just that: Nobody memorizes this stuff. We all have to look stuff up all the time. We don’t expect you to memorize it, either. Ask questions. Ask us to review things we’ve already told you.
(Most of us ask questions we've asked before daily — we just ask them of Google.)
Now for some code. Let's say you want to search 130,000 lines of text for certain tems -- which are most common, how frequently do they occur, how often are they used in a way that's concentrated, which might indicate places you want to look more closely.
No person wants to do that by hand. And people are bad at precisely that kind of work. But it's perfect for a computer.
That length happens to correspond to The Iliad. In groups of two or three, think about a book like that. In your groups, figure out two things:
A whole text is made up of what parts?
What is the first thing you need to know to begin to search a file of text? The second thing? Third thing?
Roughly, the steps might look like this:
1. open the file
2. break the file into individual lines
3. begin to examine each line
4. if the line contains the term you're looking for, capture that
5. does anything else about the line interest you? Is your term there multiple times, for instance?
6. if none of your conditions are met, keep going
This is a program! See, you already know how to program. Now let’s take a minute to step through this the way a computer might.
In Python and other languages, we use the concept of variables to store values. A variable is just an easy way to reference a value we want to keep track of. So if we want to store a search term and how often our program has found it, we probably want to assign them to variables to keep track of them.
Create a string that represents the search term we want to find and assign it to a variable search_term:
End of explanation
file_location = '../basics/data/iliad.txt'
file_to_read = open(file_location)
Explanation: Now tell Python where the file is and to open it. The path to the file can be one variable: file_location. And we can use that to open the file itself and store that opened file in a variable file_to_read
End of explanation
# how many times our search_term has occurred
term_count = 0
Explanation: Now create a variable term_count containing the integer value of how many times we've seen it in the text. So far, that's zero
End of explanation
# how many lines contain at least two of our search_term
multi_term_line = 0
Explanation: So any time we want to check to see how many times we've seen our search_term or check where our file_location is, we can use these variables instead of typing out the value each time!
If you forget what one of the variables is set to, you can print it out. (In Python 2.x, print was a statement, so the parentheses were optional; in Python 3.x, print() is a function, so the parentheses are required.) Let's also make a comment to remind us of what this variable does.
End of explanation
# so far, zero
line_number = 0
# an empty list we hope to fill with lines we might want to explore in greater detail
line_numbers_list = []
Explanation: When it's on multiple lines, note the line number and collect all relevant line numbers
End of explanation
# lowercase because of line.lower() below -- we want to compare lowercase only against lowercase
search_term = search_term.lower()
Explanation: Remember that a string is just a series of characters, not a word or a sentence.
So you can represent those characters as lowercase or uppercase, or see whether the string starts with or ends with specific things.
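A few of those string tools in action (a toy example you can try on its own):
print('Achilles'.lower())            # achilles
print('Achilles'.upper())            # ACHILLES
print('Achilles'.startswith('Ach'))  # True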
Try to make our search_term lowercase:
End of explanation
# begin looping line by line through our file
for line in file_to_read:
# increment the line_number
line_number += 1
# make the line lowercase
line = line.lower()
# check whether our search_term is in the line
if search_term in line:
# if it is, use a tool Python gives us to count how many times
# and add that to the number of times we've seen already
term_count += line.count(search_term)
# if it has counted more than one in the line, we know it's there multiple times;
# keep track of that, too
if line.count(search_term) > 1:
# print(line)
multi_term_line += 1
            # and add that to the list using a tool Python gives us for lists
line_numbers_list.append(line_number)
Explanation: We've decided to standardize our strings for comparison by making them lowercase. Cool. Now we need to do the comparing. Our open file is ready to explore. And to do that, we'll need a for loop. The loop will assign each line to a variable on the fly, then reference that variable to do stuff we tell it to:
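If the for loop pattern is new, here it is in miniature (a toy example, separate from our program):
for word in ['rage', 'sing', 'goddess']:
    print(word)    # the variable word holds each item in turn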
End of explanation
file_to_read.close()
Explanation: We've read through the whole file, but our variable file_to_read still holds the open file. Let's close it explicitly, using a tool Python gives us on files:
End of explanation
# if this value is zero or more than one or (somehow) negative, this word should be plural
if multi_term_line != 1:
times = 'times'
else:
times = 'time'
Explanation: Now let's set some language so we can make our data more readable.
End of explanation
# we can do it by adding the strings to one another like this:
print(search_term + ' was in The Iliad ' + str(term_count) + ' times')
# or we can use what Python calls `f-strings`, which allow us to drop variables directly into a string;
# doing it this way means we don't have to keep track as much of wayward spaces or
# whether one of our variables is an integer
print(f'{search_term} was in The Iliad {term_count} times')
Explanation: Now we can drop our variables into a sentence to help better make sense of our data:
End of explanation
print(f'It was on the same line multiple times {multi_term_line} {times}')
print(f'it was on lines {line_numbers_list} multiple times')
Explanation: And how often was our term on the same line? Which lines?
End of explanation
# a dictionary to collect words as keys and number of occurrences as the value
most_common_words = {}
Explanation: Another way to analyze text is frequency of the words it contains. There may be insights there about what's important, or they may be terms you want to use in a FOIA request:
Let's make a dictionary to keep track of how often all the words in The Iliad occur:
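If dictionaries are brand new to you, they map keys to values, which is exactly the shape word counts need (a toy example, not our real data):
word_counts = {'rage': 3, 'goddess': 1}
print(word_counts['rage'])    # 3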
End of explanation
file_to_read = open(file_location)
Explanation: Remember, we closed our file, so we'll need to open it again and set it to a variable. This is a time when making the file path its own variable saves us the trouble of finding it again:
End of explanation
for line in file_to_read:
line = line.lower()
# make a list of words out of each line using a Python tool for lists
word_list = line.split()
# and loop over each word in the line
for word in word_list:
# if a word is not yet in the most_common_words dictionary, add it
# if the word is there already, increase the count by 1
most_common_words[word] = most_common_words.get(word, 0) + 1
# we now have the words we want to analyze further in a dictionary -- so we don't need that file anymore. So let's close it
file_to_read.close()
Explanation: Once again, we'll need to loop through the lines in the file. This time, we care about inspecting each individual word -- not just whether a term is somewhere in the line:
End of explanation
most_common_word = None
highest_count = 0
Explanation: Set up our baseline variables -- where we'll want to store the top values we're looking for. We'll need one variable for most_common_word, set to None, and another for highest_count, set to zero.
End of explanation
print(most_common_words["homer"])
print(most_common_words['paris'])
print(most_common_words['hector'])
print(most_common_words['helen'])
print(most_common_words['sand'])
print(most_common_words['trojan'])
for word, count in most_common_words.items():
# as we go through the most_common_words dictionary,
# set the word and the count that's the biggest we've seen so far
    if most_common_word is None or count > highest_count:
most_common_word = word
highest_count = count
print(f'The most common word in The Iliad is: {most_common_word}')
print(f'It is in The Iliad {highest_count} times')
print('Wow! How cool is that?')
Explanation: Now we have a dictionary of every word in The Iliad. And we can spot-check the number of times any word we'd like has appeared by using the word as the key to access that (just remember we made all the keys lowercase):
End of explanation |
523 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating temporary files with unique names securely, so they cannot be guessed by someone wanting to break the application or steal the data, is challenging. The tempfile module provides several functions for creating temporary file system resources securely. TemporaryFile() opens and returns an unnamed file, NamedTemporaryFile() opens and returns a named file, SpooledTemporaryFile holds its content in memory before writing to disk, and TemporaryDirectory is a context manager that removes the directory when the context is closed.
Temporary File
Step1: Named File
Step2: Spooled File
Step3: Temporary Directories
Step4: Predicting Name
Step5: Temporary File Location | Python Code:
import os
import tempfile
print('Building a filename with PID:')
filename = '/tmp/guess_my_name.{}.txt'.format(os.getpid())
with open(filename, 'w+b') as temp:
print('temp:')
print(' {!r}'.format(temp))
print('temp.name:')
print(' {!r}'.format(temp.name))
# Clean up the temporary file yourself.
os.remove(filename)
print()
print('TemporaryFile:')
with tempfile.TemporaryFile() as temp:
print('temp:')
print(' {!r}'.format(temp))
print('temp.name:')
print(' {!r}'.format(temp.name))
import os
import tempfile
with tempfile.TemporaryFile() as temp:
temp.write(b'Some data')
temp.seek(0)
print(temp.read())
import tempfile
with tempfile.TemporaryFile(mode='w+t') as f:
f.writelines(['first\n', 'second\n'])
f.seek(0)
for line in f:
print(line.rstrip())
Explanation: Creating temporary files with unique names securely, so they cannot be guessed by someone wanting to break the application or steal the data, is challenging. The tempfile module provides several functions for creating temporary file system resources securely. TemporaryFile() opens and returns an unnamed file, NamedTemporaryFile() opens and returns a named file, SpooledTemporaryFile holds its content in memory before writing to disk, and TemporaryDirectory is a context manager that removes the directory when the context is closed.
Temporary File
End of explanation
import os
import pathlib
import tempfile
with tempfile.NamedTemporaryFile() as temp:
print('temp:')
print(' {!r}'.format(temp))
print('temp.name:')
print(' {!r}'.format(temp.name))
f = pathlib.Path(temp.name)
print('Exists after close:', f.exists())
Explanation: Named File
End of explanation
import tempfile
with tempfile.SpooledTemporaryFile(max_size=100,
mode='w+t',
encoding='utf-8') as temp:
print('temp: {!r}'.format(temp))
for i in range(3):
temp.write('This line is repeated over and over.\n')
print(temp._rolled, temp._file)
import tempfile
with tempfile.SpooledTemporaryFile(max_size=1000,
mode='w+t',
encoding='utf-8') as temp:
print('temp: {!r}'.format(temp))
for i in range(3):
temp.write('This line is repeated over and over.\n')
print(temp._rolled, temp._file)
print('rolling over')
temp.rollover()
print(temp._rolled, temp._file)
Explanation: Spooled File
End of explanation
import pathlib
import tempfile
with tempfile.TemporaryDirectory() as directory_name:
the_dir = pathlib.Path(directory_name)
print(the_dir)
a_file = the_dir / 'a_file.txt'
a_file.write_text('This file is deleted.')
print('Directory exists after?', the_dir.exists())
print('Contents after:', list(the_dir.glob('*')))
Explanation: Temporary Directories
End of explanation
import tempfile
with tempfile.NamedTemporaryFile(suffix='_suffix',
prefix='prefix_',
dir='/tmp') as temp:
print('temp:')
print(' ', temp)
print('temp.name:')
print(' ', temp.name)
Explanation: Predicting Name
End of explanation
import tempfile
print('gettempdir():', tempfile.gettempdir())
print('gettempprefix():', tempfile.gettempprefix())
Explanation: Temporary File Location
End of explanation |
524 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this tutorial you'll learn all about histograms and density plots.
Set up the notebook
As always, we begin by setting up the coding environment. (This code is hidden, but you can un-hide it by clicking on the "Code" button immediately below this text, on the right.)
Step1: Select a dataset
We'll work with a dataset of 150 different flowers, or 50 each from three different species of iris (Iris setosa, Iris versicolor, and Iris virginica).
Load and examine the data
Each row in the dataset corresponds to a different flower. There are four measurements
Step2: Histograms
Say we would like to create a histogram to see how petal length varies in iris flowers. We can do this with the sns.histplot command.
Step3: In the code cell above, we had to supply the command with the column we'd like to plot (in this case, we chose 'Petal Length (cm)').
Density plots
The next type of plot is a kernel density estimate (KDE) plot. In case you're not familiar with KDE plots, you can think of it as a smoothed histogram.
To make a KDE plot, we use the sns.kdeplot command. Setting shade=True colors the area below the curve (and data= chooses the column we would like to plot).
Step4: 2D KDE plots
We're not restricted to a single column when creating a KDE plot. We can create a two-dimensional (2D) KDE plot with the sns.jointplot command.
In the plot below, the color-coding shows us how likely we are to see different combinations of sepal width and petal length, where darker parts of the figure are more likely.
Step5: Note that in addition to the 2D KDE plot in the center,
- the curve at the top of the figure is a KDE plot for the data on the x-axis (in this case, iris_data['Petal Length (cm)']), and
- the curve on the right of the figure is a KDE plot for the data on the y-axis (in this case, iris_data['Sepal Width (cm)']).
Color-coded plots
For the next part of the tutorial, we'll create plots to understand differences between the species.
We can create three different histograms (one for each species) of petal length by using the sns.histplot command (as above).
- data= provides the name of the variable that we used to read in the data
- x= sets the name of column with the data we want to plot
- hue= sets the column we'll use to split the data into different histograms
Step6: We can also create a KDE plot for each species by using sns.kdeplot (as above). The functionality for data, x, and hue are identical to when we used sns.histplot above. Additionally, we set shade=True to color the area below each curve. | Python Code:
#$HIDE$
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
Explanation: In this tutorial you'll learn all about histograms and density plots.
Set up the notebook
As always, we begin by setting up the coding environment. (This code is hidden, but you can un-hide it by clicking on the "Code" button immediately below this text, on the right.)
End of explanation
# Path of the file to read
iris_filepath = "../input/iris.csv"
# Read the file into a variable iris_data
iris_data = pd.read_csv(iris_filepath, index_col="Id")
# Print the first 5 rows of the data
iris_data.head()
Explanation: Select a dataset
We'll work with a dataset of 150 different flowers, or 50 each from three different species of iris (Iris setosa, Iris versicolor, and Iris virginica).
Load and examine the data
Each row in the dataset corresponds to a different flower. There are four measurements: the sepal length and width, along with the petal length and width. We also keep track of the corresponding species.
End of explanation
# Histogram
sns.histplot(iris_data['Petal Length (cm)'])
Explanation: Histograms
Say we would like to create a histogram to see how petal length varies in iris flowers. We can do this with the sns.histplot command.
End of explanation
# KDE plot
sns.kdeplot(data=iris_data['Petal Length (cm)'], shade=True)
Explanation: In the code cell above, we had to supply the command with the column we'd like to plot (in this case, we chose 'Petal Length (cm)').
Density plots
The next type of plot is a kernel density estimate (KDE) plot. In case you're not familiar with KDE plots, you can think of it as a smoothed histogram.
To make a KDE plot, we use the sns.kdeplot command. Setting shade=True colors the area below the curve (and data= chooses the column we would like to plot).
End of explanation
# 2D KDE plot
sns.jointplot(x=iris_data['Petal Length (cm)'], y=iris_data['Sepal Width (cm)'], kind="kde")
Explanation: 2D KDE plots
We're not restricted to a single column when creating a KDE plot. We can create a two-dimensional (2D) KDE plot with the sns.jointplot command.
In the plot below, the color-coding shows us how likely we are to see different combinations of sepal width and petal length, where darker parts of the figure are more likely.
End of explanation
# Histograms for each species
sns.histplot(data=iris_data, x='Petal Length (cm)', hue='Species')
# Add title
plt.title("Histogram of Petal Lengths, by Species")
Explanation: Note that in addition to the 2D KDE plot in the center,
- the curve at the top of the figure is a KDE plot for the data on the x-axis (in this case, iris_data['Petal Length (cm)']), and
- the curve on the right of the figure is a KDE plot for the data on the y-axis (in this case, iris_data['Sepal Width (cm)']).
Color-coded plots
For the next part of the tutorial, we'll create plots to understand differences between the species.
We can create three different histograms (one for each species) of petal length by using the sns.histplot command (as above).
- data= provides the name of the variable that we used to read in the data
- x= sets the name of column with the data we want to plot
- hue= sets the column we'll use to split the data into different histograms
End of explanation
# KDE plots for each species
sns.kdeplot(data=iris_data, x='Petal Length (cm)', hue='Species', shade=True)
# Add title
plt.title("Distribution of Petal Lengths, by Species")
Explanation: We can also create a KDE plot for each species by using sns.kdeplot (as above). The functionality for data, x, and hue are identical to when we used sns.histplot above. Additionally, we set shade=True to color the area below each curve.
End of explanation |
525 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The SparkContext.addPyFiles() function can be used to add py files. We can define objects and variables in these files and make them available to the Spark cluster.
Create a SparkContext object
Step1: Add py files
Step2: Use my_module.py
We can import my_module as a python module | Python Code:
from pyspark import SparkConf, SparkContext, SparkFiles
from pyspark.sql import SparkSession
sc = SparkContext(conf=SparkConf())
Explanation: The SparkContext.addPyFiles() function can be used to add py files. We can define objects and variables in these files and make them available to the Spark cluster.
Create a SparkContext object
End of explanation
sc.addPyFile('pyFiles/my_module.py')
SparkFiles.get('my_module.py')
Explanation: Add py files
End of explanation
from my_module import *
addPyFiles_is_successfull()
sum_two_variables(4,5)
Explanation: Use my_module.py
We can import my_module as a python module
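The contents of pyFiles/my_module.py are not shown in this notebook; for the calls above to work, it presumably defines the two functions used, roughly like this (an assumed sketch, not the actual file):
# Assumed sketch of pyFiles/my_module.py (the real file is not included here)
def addPyFiles_is_successfull():
    # simply confirms the module could be shipped with addPyFile and imported
    return 'addPyFiles is successful'
def sum_two_variables(a, b):
    return a + b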
End of explanation |
526 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Knows What It Knows (KWIK)
A framework for self-aware learning
Combines elements of Probably Approximately Correct (PAC) and mistake-bound models
Useful for active learning
Motivation
Polynomial sample complexity guarantee algorithms
Rmax algorithm that estimates transition probabilities for each state-action-next state triple of the MDP
Accuracy bound using Hoeffding bounds
KWIK
Only makes accurate predictions
Can opt-out of prediction by saying "i don't know", which is polynomially bound
Example 1
Step1: Example 1
2 patrons
Step2: Example 2
3 patrons
Step3: Another example
This time the composition is (Normal patron, I, P) | Python Code:
from collections import Counter
class Kwik:
def __init__(self, number_of_patrons):
# Init
self.current_i_do_not_knows = 0
self.number_of_patrons = number_of_patrons
self.max_i_do_not_knows = self.number_of_patrons * (self.number_of_patrons - 1)
self.instigator = None
self.peacemaker = None
self.candidates = {candidate_type: set(range(self.number_of_patrons))
for candidate_type in ['instigator', 'peacemaker']}
self.peacemaker_candidates = set(range(self.number_of_patrons))
self.solved = False
self.memory = {}
def _remove_candidate(self, patron_index, candidate_type):
if not self.solved and not (candidate_type == 'instigator' and self.instigator is not None) \
and not (candidate_type == 'peacemaker' and self.peacemaker is not None):
candidates_for_type = self.candidates[candidate_type]
candidates_for_type.discard(patron_index)
if len(candidates_for_type) == 1:
remaining = candidates_for_type.pop()
if candidate_type == 'instigator':
self.instigator = remaining
if self.peacemaker is not None:
self.solved = True
else:
self._remove_candidate(remaining, 'peacemaker')
else:
self.peacemaker = remaining
if self.instigator is not None:
self.solved = True
else:
self._remove_candidate(remaining, 'instigator')
def _learn(self, at_establishment, fight_occurred, counts):
if counts[True] == 1 and fight_occurred:
# If only one person is there and a fight breaks out -> he's the instigator
instigator = at_establishment.index(True)
self.instigator = instigator
self.candidates['instigator'] = set()
self._remove_candidate(instigator, 'peacemaker')
elif counts[True] == 1 and not fight_occurred:
# If only one person is there and no fight breaks out -> he's NOT the instigator
# remove him from the list of instigators
index = at_establishment.index(True)
self._remove_candidate(index, 'instigator')
else:
# Some people are present, eliminate candidates
for patron_index, patron_present in enumerate(at_establishment):
# If the patron was present
if patron_present:
if fight_occurred:
# The patron is not a peacemaker
self._remove_candidate(patron_index, 'peacemaker')
else:
# The patron is not an instigator
# TODO: this is not correct
self._remove_candidate(patron_index, 'instigator')
def _all_known(self, at_establishment):
if at_establishment[self.instigator]:
if at_establishment[self.peacemaker]:
return 0
else:
return 1
else:
return 0
def _determine_and_learn(self, at_establishment, fight_occurred):
counts = Counter(at_establishment)
if len(counts) == 1:
# Everyone is present so no fight and nothing to learn
return 0
else:
self._learn(at_establishment, fight_occurred, counts)
if self.current_i_do_not_knows == self.max_i_do_not_knows:
raise ValueError("Exhausted ⟂")
else:
self.current_i_do_not_knows += 1
return -1
def run_instance(self, at_establishment, fight_occurred):
# Make it hashable
at_establishment = tuple(at_establishment)
if at_establishment in self.memory:
# We've seen this before, return from memory
return int(self.memory[at_establishment])
else:
self.memory[at_establishment] = fight_occurred
if self.solved:
# Instigator and peacemaker are already known
return self._all_known(at_establishment)
else:
# Another case
return self._determine_and_learn(at_establishment, fight_occurred)
Explanation: Knows What It Knows (KWIK)
A framework for self-aware learning
Combines elements of Probably Approximately Correct (PAC) and mistake-bound models
Useful for active learning
Motivation
Polynomial sample complexity guarantee algorithms
Rmax algorithm that estimates transition probabilities for each state-action-next state triple of the MDP
Accuracy bound using Hoeffding bounds
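(For reference, the Hoeffding bound behind that accuracy argument: for $n$ i.i.d. samples bounded in $[0,1]$ with empirical mean $\hat{\mu}$, $P(|\hat{\mu} - \mu| \ge \epsilon) \le 2e^{-2n\epsilon^2}$, so polynomially many samples suffice for an $\epsilon$-accurate estimate with high probability.)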
KWIK
Only makes accurate predictions
Can opt out of predicting by answering "I don't know" (⊥), and the number of such answers is polynomially bounded
Example 1
End of explanation
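Before the example, a hedged aside on the Hoeffding bound mentioned above (this sketch is an addition, not part of the original notebook): to estimate a probability to within epsilon with confidence 1 - delta, a KWIK-style learner only needs polynomially many observations before it can stop answering "I don't know".
import math
def hoeffding_sample_bound(epsilon, delta):
    # Hoeffding's inequality: P(|empirical mean - true mean| >= epsilon) <= 2 * exp(-2 * m * epsilon**2),
    # so m >= ln(2 / delta) / (2 * epsilon**2) samples suffice for the stated accuracy and confidence.
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
# e.g. accuracy 0.1 with 95% confidence -> 185 samples
print(hoeffding_sample_bound(0.1, 0.05))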
learner = Kwik(number_of_patrons=2)
# both patrons 0 and 1 are candidates of being both I and P
learner.candidates
# We haven't memorized anything
learner.memory
# P and I present and no fight
# Since we know that there's at least one I and one P
# if everyone is present or absent we haven't learned anything
# and we know that there's no fight
learner.run_instance([True, True], False)
# Memorize this instance
learner.memory
# Nothing was learnt from this case
learner.candidates
# Patron 0 present and patron 1 absent
# Before seeing this outcome the learner still does not know who is who, so it returns -1 (don't know)
learner.run_instance([True, False], True)
# Memorize
learner.memory
# Since a fight broke out we know 0 was the I
# and we can deduce that 1 is P
learner.candidates
learner.instigator
learner.peacemaker
Explanation: Example 1
2 patrons
End of explanation
learner = Kwik(3)
learner.candidates
learner.run_instance([True, True, True], False)
learner.run_instance([False, False, True], False)
learner.candidates
learner.run_instance([True, True, False], True)
learner.candidates
learner.peacemaker
learner.run_instance([False, True, True], False)
# Is this correct?
# We eliminate patron 1 as an instigator and deduce 0 is I
learner.candidates
learner.instigator
learner.run_instance([True, False, True], False)
learner.run_instance([True, False, False], True)
learner.run_instance([True, False, False], True)
Explanation: Example 2
3 patrons
End of explanation
learner = Kwik(3)
learner.candidates
learner.run_instance([False, False, True], False)
learner.candidates
learner.run_instance([False, True, True], False)
learner.candidates
# This is not correct
learner.instigator
learner.run_instance([True, False, True], False)
learner.candidates
learner.run_instance([True, True, False], True)
learner.candidates
learner.peacemaker
learner.instigator
# Incorrect
learner.run_instance([False, True, False], True)
Explanation: Another example
This time the composition is (Normal patron, I, P)
End of explanation |
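As an illustrative addition (not in the original notebook), a small driver can exercise the learner against a known ground truth, using the rule that a fight breaks out exactly when the instigator is present and the peacemaker is absent; the patron indices and attendance sampling below are arbitrary choices for the sketch.
import random
def simulate(true_instigator, true_peacemaker, number_of_patrons, n_nights=50, seed=0):
    rng = random.Random(seed)
    learner = Kwik(number_of_patrons)
    dont_knows = 0
    for _ in range(n_nights):
        # Random attendance for the night
        at_establishment = [rng.random() < 0.5 for _ in range(number_of_patrons)]
        # Ground-truth rule for whether a fight breaks out
        fight_occurred = at_establishment[true_instigator] and not at_establishment[true_peacemaker]
        if learner.run_instance(at_establishment, fight_occurred) == -1:
            dont_knows += 1
    return learner, dont_knows
learner, dont_knows = simulate(true_instigator=1, true_peacemaker=2, number_of_patrons=3)
print(dont_knows, learner.instigator, learner.peacemaker)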
527 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.
Step1: Let's show the symbols data, to see how good the recommender has to be.
Step2: Let's run the trained agent, with the test set
First a non-learning test
Step3: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
Step4: What are the metrics for "holding the position"? | Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent import Agent
from functools import partial
NUM_THREADS = 1
LOOKBACK = 252*2 + 28
STARTING_DAYS_AHEAD = 20
POSSIBLE_FRACTIONS = [0.0, 1.0]
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
agents = [Agent(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.9999,
dyna_iterations=0,
name='Agent_{}'.format(i)) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
Explanation: In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.
End of explanation
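As a hedged illustration of what "desired fraction of shares in the total portfolio value" means in practice (the helper below is hypothetical, not part of the recommender package): turning the agent's chosen fraction into an order is just a portfolio rebalance.
def shares_to_trade(target_fraction, cash, shares_held, price):
    # Total portfolio value at the current price
    portfolio_value = cash + shares_held * price
    # Whole number of shares that realises the target fraction
    target_shares = int((target_fraction * portfolio_value) // price)
    # Positive -> buy, negative -> sell
    return target_shares - shares_held
# e.g. going from all cash to fully invested (fraction 1.0) at 250 a share
print(shares_to_trade(1.0, cash=10000.0, shares_held=0, price=250.0))  # 40 shares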
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 15
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
Explanation: Let's show the symbols data, to see how good the recommender has to be.
End of explanation
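For reference, a minimal sketch of the metrics printed above, assuming value_eval works on a series of daily portfolio values and annualises with 252 trading days (both are assumptions, since value_eval itself is not shown here).
import numpy as np
def basic_metrics(values, periods_per_year=252):
    # values: pandas Series of daily portfolio (or price) values
    daily_returns = values.pct_change().dropna()
    avg_dret = daily_returns.mean()
    std_dret = daily_returns.std()
    sharpe = np.sqrt(periods_per_year) * avg_dret / std_dret
    cum_ret = values.iloc[-1] / values.iloc[0] - 1.0
    return sharpe, cum_ret, avg_dret, std_dret, values.iloc[-1]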
TEST_DAYS_AHEAD = 20
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
Explanation: Let's run the trained agent, with the test set
First, a non-learning test: this scenario is pessimistic relative to what is achievable (in fact, the Q-learner can keep learning from past samples in the test set without violating causality).
End of explanation
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
Explanation: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
End of explanation
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
import pickle
with open('../../data/simple_q_learner.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
Explanation: What are the metrics for "holding the position"?
End of explanation |
528 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-esm4', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: GFDL-ESM4
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
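A hedged example of filling in these author/contributor cells; the name and address below are placeholders only, not real document authors.
# Hypothetical placeholder values only
DOC.set_author("Jane Doe", "jane.doe@example.org")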
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
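A hedged example of how one of these ENUM cells is typically completed; the choice below is purely illustrative and is not a statement about which coupler GFDL-ESM4 actually uses.
# Illustrative only -- pick one of the valid choices listed above
DOC.set_value("OASIS3-MCT")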
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
529 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ApJdataFrames Luhman1999
Title
Step1: Table 1 - Data for Spectroscopic Sample in ρ Ophiuchi
Step2: Save data | Python Code:
import warnings
warnings.filterwarnings("ignore")
from astropy.io import ascii
import pandas as pd
Explanation: ApJdataFrames Luhman1999
Title: Low-Mass Star Formation and the Initial Mass Function in the ρ Ophiuchi Cloud Core
Authors: K. L. Luhman and G.H. Rieke
Data is from this paper:
http://iopscience.iop.org/0004-637X/525/1/440/fulltext/
End of explanation
names = ["BKLT","Other ID","RA_1950","DEC_1950","SpT_prev","SpT_IR","SpT_adopted",
"Teff","AJ","Lbol","J-H","H-K","K","rK","BrGamma"]
tbl1 = pd.read_csv("http://iopscience.iop.org/0004-637X/525/1/440/fulltext/40180.tb1.txt",
sep="\t", na_values="\ldots", skiprows=1, names=names)
tbl1.RA_1950 = "16 "+tbl1.RA_1950
tbl1.DEC_1950 = "-24 "+tbl1.DEC_1950
tbl1.head()
len(tbl1)
Explanation: Table 1 - Data for Spectroscopic Sample in ρ Ophiuchi
End of explanation
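# Supplementary sketch (not from the original notebook): the sexagesimal strings built
# above can be parsed into unit-aware coordinates. Assumes astropy is available; the
# catalog's B1950 equinox is handled with frame='fk4'.
from astropy.coordinates import SkyCoord
import astropy.units as u
coords = SkyCoord(ra=tbl1.RA_1950.values, dec=tbl1.DEC_1950.values,
                  unit=(u.hourangle, u.deg), frame='fk4')
coords[:3]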
! mkdir ../data/Luhman1999
tbl1.to_csv("../data/Luhman1999/tbl1.csv", index=False, sep='\t')
Explanation: Save data
End of explanation |
530 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import necessary modules
Step1: Filepath management
Step2: Load the data from the hdf store
Step3: Visualize the data
Step4: Adding in missing times (zero volume minutes)
Before evaluating goals, we need to fill in the missing time steps.
These missing time steps have zero trading volume.
So, all of the prices for these steps are equal to the last
closing price. All of the volumes are equal to zero.
Step7: Logic for getting goal information
Step8: Get goal tags for all data
Step9: Save data to hdf | Python Code:
import time
import pandas as pd
import numpy as np
import datetime as dt
from collections import OrderedDict
from copy import copy
import warnings
import matplotlib.pyplot as plt
import seaborn as sns
from pprint import pprint
%matplotlib inline
Explanation: Import necessary modules
End of explanation
project_dir = r'/Users/hudson/Code/marketModel/'
Explanation: Filepath management
End of explanation
stock_data = pd.read_hdf(project_dir + 'data/stock_data/raw_stock_data.hdf', 'table')
symbols = stock_data.reset_index().ticker.unique()
pprint(symbols)
symbol = np.random.choice(symbols)
print 'symbol: ' + symbol
stock_data_vis = stock_data.loc[symbol]
print stock_data_vis.head()
print stock_data_vis.describe()
print stock_data.reset_index().loc[:,('ticker', 'timestamp')].groupby('ticker').agg([len, np.min, np.max])
Explanation: Load the data from the hdf store
End of explanation
# First keep the time index so we can see the time frame
stock_data_vis.close.plot(label=symbol)
plt.legend(bbox_to_anchor=(1.25, .5))
plt.tight_layout()
plt.ylabel("Close ($)")
sns.despine()
# Now drop the time index so we can see the actual stock movement
stock_data_vis.reset_index().close.plot(label=symbol)
stock_data_vis.reset_index().close.rolling(20).mean().plot(label='20 Min. Moving Avg.')
plt.legend(bbox_to_anchor=(1.25, .5))
plt.tight_layout()
plt.ylabel("Close ($)")
sns.despine()
Explanation: Visualize the data
End of explanation
# First reshape the index and group by ticker
stock_data_final = stock_data.reset_index(level=0)
grouped_stock_data = stock_data_final.groupby('ticker')
## Before evaluating goals, we need to fill in the missing time steps.
## These missing time steps have zero trading volume.
## So, all of the prices for these steps are equal to the last
## closing price. All of the volumes are equal to zero.
stock_data_with_all_minutes = []
for name, group in grouped_stock_data:
# Create a dataframe of all the times
min_time, max_time = group.index.min(), group.index.max()
timeDiff = (max_time - min_time).components
numMinutes = timeDiff.days*24*60 + timeDiff.hours*60 + timeDiff.minutes
#alltimesIdx = pd.DatetimeIndex(start=min_time, freq=pd.tseries.offsets.Minute(1), periods=numMinutes)
alltimesIdx = pd.DatetimeIndex(start=min_time, freq=pd.Timedelta(minutes=1), periods=numMinutes)
alltimes = pd.DataFrame(index=alltimesIdx)
# Drop minutes outside of 9:30am - 4:00pm est
alltimes = alltimes.between_time('09:30','16:00')
# Join on the original dataframe
alltimes_group = alltimes.join(group)
# Forward fill the NaN closing prices
alltimes_group.loc[:,('ticker', 'close')] = alltimes_group.loc[:,('ticker', 'close')].\
fillna(method='ffill', axis=0)
# Assign all price variables to the close price
alltimes_group.loc[:,'open':'close'] = alltimes_group.loc[:,'open':'close'].\
fillna(method='bfill', axis=1)
# Assign all NaN volumes to zero
alltimes_group.loc[:, 'volume'] = alltimes_group.loc[:, 'volume'].fillna(value=0)
stock_data_with_all_minutes.append(alltimes_group)
stock_data_with_all_minutes = pd.concat(stock_data_with_all_minutes)
stock_data_with_all_minutes.index.name = 'timestamp'
stock_data_with_all_minutes.reset_index().loc[:,('ticker', 'timestamp')].groupby('ticker').agg([len, min, max])
Explanation: Adding in missing times (zero volume minutes)
Before evaluating goals, we need to fill in the missing time steps.
These missing time steps have zero trading volume.
So, all of the prices for these steps are equal to the last
closing price. All of the volumes are equal to zero.
End of explanation
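# Alternative sketch (not the approach used above): pandas' resample can build the
# minute grid in one call, assuming each per-ticker group has a DatetimeIndex.
def fill_missing_minutes(group):
    filled = group.resample('1min').asfreq().between_time('09:30', '16:00')
    filled[['ticker', 'close']] = filled[['ticker', 'close']].ffill()
    filled.loc[:, 'open':'close'] = filled.loc[:, 'open':'close'].fillna(method='bfill', axis=1)
    filled['volume'] = filled['volume'].fillna(0)
    return filled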
def get_min_max(data, starttime, endtime):
"""This function takes data for a specific ticker and returns the min and max prices."""
subdata = data.loc[starttime:endtime]
return (subdata.low.min(), subdata.high.max())
def is_goal_met(data, timestep, goal_time_from_step, goal_duration, goal_raise_frac = 0.1, goal_drop_frac=0.1):
"""This function takes data for a specific ticker, a time index for that ticker, goal parameters, and
returns a boolean indicating whether or not the goal is satisfied for that timestep."""
# Assign a status message to record various special cases
statusMessage = ""
#Convert time variables to appropriate numpyt date types
td_goal_time_from_step = np.timedelta64(goal_time_from_step, 'm')
td_goal_duration = np.timedelta64(goal_duration, 'm')
# Calculate the start and end times of the goal time window
goal_starttime = np.datetime64(timestep + td_goal_time_from_step)
goal_endtime = np.datetime64(goal_starttime + td_goal_duration)
if goal_endtime > np.datetime64(data.index.max()):
statusMessage = "Goal time window end time lies beyond available data."
# Get the data for goal checking in that time window
subdata = data.loc[goal_starttime:goal_endtime]
# Get the minimum and maximum prices for the goal time window
min_price, max_price = get_min_max(data, goal_starttime, goal_endtime)
if np.isnan(min_price) | np.isnan(max_price):
# Zero trading volume in time window. Get last prices.
most_recent_time_with_transactions = np.max(data.loc[:goal_starttime].index)
if most_recent_time_with_transactions == timestep:
statusMessage = statusMessage + " Zero trading volume between current timestep and goal time window end."
return {'timestamp': timestep,
'goal_met': False,
'raise_goal_met': False,
'drop_goal_met': True,
'statusMessage': statusMessage}
else:
min_price, max_price = data.loc[timestep, 'low'], data.loc[timestep, 'high']
# Determine if goals were met
# TODO: is this the right reference for the 'current price'?
current_price = np.mean(data.loc[timestep, ['high', 'close']])
# Is raise goal met? Return true if max price at least (1+goal_raise_frac) * current_price
is_raise_goal_met = max_price >= (1+goal_raise_frac) * current_price
# Is drop goal met? Return true if min price at least (1-goal_drop_frac) * current_price
is_drop_goal_met = min_price >= (1-goal_drop_frac) * current_price
# Return dict containing raise and drop goals and product for convenience
return {'timestamp': timestep,
'goal_met': is_raise_goal_met * is_drop_goal_met,
'raise_goal_met': is_raise_goal_met,
'drop_goal_met': is_drop_goal_met,
'statusMessage': statusMessage}
# test get_min_max
get_min_max(stock_data_vis, '2017-08-28', '2017-08-30')
# test is_goal_met
random_time_index = np.random.choice(stock_data_vis.index.values)
print "Random time: " + str(random_time_index)
%timeit is_goal_met(stock_data_vis, random_time_index, 0, 1000)
Explanation: Logic for getting goal information
End of explanation
# first define a function that tags for one ticker
def get_tagged_stock_data(data,
ticker,
goal_time_from_step,
goal_duration,
goal_raise_frac = 0.1,
goal_drop_frac=0.1):
# Loop over the timestamps building a dictionary of the tagging information
tagged_stock_data = []
for timestep in data.index:
goal_dict = is_goal_met(data, timestep, goal_time_from_step, goal_duration, goal_raise_frac, goal_drop_frac)
tagged_stock_data.append(goal_dict)
# Convert to pandas and return
return pd.DataFrame(tagged_stock_data).set_index('timestamp')
start_delay = 0 # minutes
duration = 120 # minutes (next half hour)
raise_fraction = 0.05
drop_fraction = 0.05
list_tagged_tickers = []
for i, (symbol, group) in enumerate(stock_data_with_all_minutes.groupby('ticker')):
print "Progress: {} of {} tickers. Current ticker: {}".format(i, len(symbols), symbol)
# get the tag data for this symbol
tag_data = get_tagged_stock_data(group, symbol, start_delay, duration, raise_fraction, drop_fraction)
# join tag data back onto the group data
merged_data = group.join(tag_data)
# Append to the list of tagged data
list_tagged_tickers.append(merged_data)
# Concatenate all the groups
all_tagged_data = pd.concat(list_tagged_tickers)
#print all_tagged_data.goal_met.value_counts()
#print all_tagged_data.statusMessage.value_counts()
print all_tagged_data.groupby('ticker').agg({'goal_met': lambda x: x[x].shape[0],
'raise_goal_met': lambda x: x[x].shape[0],
'drop_goal_met': lambda x: x[x].shape[0]}).agg(np.sum)
Explanation: Get goal tags for all data
End of explanation
all_tagged_data.to_hdf(project_dir + 'data/stock_data/tagged_stock_data.hdf', 'table')
Explanation: Save data to hdf
End of explanation |
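# Optional variation (a sketch): writing in 'table' format with data_columns makes the
# store queryable, so a single ticker can be read back without loading everything.
all_tagged_data.to_hdf(project_dir + 'data/stock_data/tagged_stock_data.hdf', 'table',
                       format='table', data_columns=['ticker'])
# e.g. pd.read_hdf(path, 'table', where='ticker == "<some ticker>"')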
531 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Pickle to manage memory in Python
Author
Step1: Function to track memory utilization
Step2: Create a dataframe with random integers between 0 and 1000
Step3: Create Pickle dump
Step4: Remove the variable from memory
Step5: Restore the variable from disk | Python Code:
import gc
import pickle
import psutil
import numpy as np
import pandas as pd
Explanation: Using Pickle to manage memory in Python
Author: Dr. Rahul Remanan, CEO, Moad Computer
Run this notebook in Google Colab
Import dependencies
End of explanation
def memory_utilization():
print('Current memory utilization: {}% ...'.format(psutil.virtual_memory().percent))
Explanation: Function to track memory utilization
End of explanation
memory_utilization()
var=pd.DataFrame(np.random.randint(0,1000,size=(int(2.5e8),2)),columns=['var1','var2'])
print(var.head())
memory_utilization()
Explanation: Create a dataframe with random integers between 0 and 1000
End of explanation
pickle.dump(var,open('var.pkl','wb'))
memory_utilization()
Explanation: Create Pickle dump
End of explanation
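# A small variation (a sketch): a context manager closes the file handle promptly, and
# pickle.HIGHEST_PROTOCOL is usually faster and produces a smaller file on disk.
with open('var.pkl', 'wb') as f:
    pickle.dump(var, f, protocol=pickle.HIGHEST_PROTOCOL)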
del var
_=gc.collect()
memory_utilization()
Explanation: Remove the variable from memory
End of explanation
var=pickle.load(open('var.pkl','rb'))
memory_utilization()
print(var.head())
Explanation: Restore the variable from disk
End of explanation |
532 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Neural Tangents Cookbook
In this notebook we explore the training of infinitely-wide, Bayesian, neural networks using a library called Neural Tangents. Recent work has shown that such networks are Gaussian Processes with a particular compositional kernel called the NNGP kernel. More recently, it was shown that predictions resulting from these networks following Gradient Descent are Gaussian with a distribution that can be computed in closed form using the Neural Tangent Kernel. Neural Tangents provides a high level library to compute NNGP and NT kernels for a wide range of neural networks. See the paper for a more detailed description of the library itself.
Our goal will be to train an ensemble of neural networks on a simple synthetic task. We'll then compare the results of this ensemble with the prediction of the NTK theory. Finally, we'll play around with different neural network architectures to see how this affects the resulting kernel. However, Neural Tangents is built on JAX which may be new to you. To get warmed up with JAX, we'll start out by generating some data.
Warm Up
Step2: Now let's set up some constants that will define our dataset. In particular, we will use a small training set of 5 points and 50 test points. Finally, we'll define a noise scale and target function.
Step3: Next we generate our training data. We know that we will want to have randomly chosen $x$'s and noise. To generate random numbers in JAX, we have to explicitly evolve the random number state using random.split each time we draw a random number.
Then we'll want to generate the random inputs, apply the target function, and add the random noise.
Step4: Finally, we want to generate our test data. The $x$'s will be linearly spaced with no noise. Note, we want the inputs to have shape (N, 1) instead of (N,) since we treat this as a model with one feature.
Step5: Having generated our data, let's plot it.
Step6: What a good looking dataset! Let's train a neural network on it.
Defining a Neural Network
The first thing we need to do is define a neural network. We'll start out with a simple fully-connected network using Erf nonlinearities. We describe our network using our neural network library that shares syntax and code with JAX's own library called stax. Layers in jax.example_libraries.stax are pairs of functions (init_fn, apply_fn) where init_fn(key, input_shape) draws parameters randomly and apply_fn(params, xs) computes outputs of the function for specific inputs. These layers can be composed using serial and parallel combinators to produce new (init_fn, apply_fn) pairs.
Similarly, layers in neural_tangents.stax are triplets of functions (init_fn, apply_fn, kernel_fn) where the first two functions are the same as their stax equivalent but the third function, kernel_fn, computes infinite-width GP kernels corresponding to the layer. Again these layers can be composed using serial and parallel to build kernels for complicated architectures. Fully-connected layers in neural_tangents.stax are created using the Dense layer which is defined by,
$$z^{l+1}_i = \frac{\sigma_w}{\sqrt{N_{in}}} \sum_j W_{ij} z^{l}_j + \sigma_b b_i$$
where $W_{ij}, b_i\sim\mathcal N(0,1)$ at initialization and $\sigma_w, \sigma_b$ set the scales of the weights and biases respectively.
Step7: Here the lines apply_fn = jit(apply_fn) and kernel_fn = jit(kernel_fn, static_argnums=(2,)) use a JAX feature that compiles functions so that they are executed as single calls to the GPU.
Next, let's take several random draws of the parameters of the network and plot what the functions look like.
Step8: Next we can look at the exact prior over functions in the infinite-width limit using the kernel_fn. The kernel function has the signature kernel = kernel_fn(x_1, x_2) which computes the kernel between two sets of inputs x_1 and x_2. The kernel_fn can compute two different kernels
Step9: Infinite Width Inference
We can use the infinite-width GP defined above to perform exact Bayesian inference using the infinite width network. To do this, we will use the function neural_tangents.predict.gradient_descent_mse_ensemble that performs this inference exactly. predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, train_xs, train_ys); mean, cov = predict_fn(x_test=test_xs, get='ntk', compute_cov=True) computes the mean and covariance of the network evaluated on the test points after training. This predict_fn function includes two different modes
Step10: We see that our posterior exactly fits all of the training points as expected. We also see that there is significant uncertainty in the predictions between the points in the middle.
Next, we would like to compute the result of doing gradient descent on our infinite network for an infinite amount of time. To do this, we will use the "NTK" inference mode. Note that otherwise the call to predict_fn looks identical. We will compare the result of true Bayesian inference with gradient descent.
Step11: We see that while the results of gradient descent and Bayesian inference are similar, they are not identical.
Not only can we do inference at infinite times, but we can also perform finite-time inference. We will use this to predict the mean of the train and test losses over the course of training. To compute the mean MSE loss, we need to access the mean and variance of our network's predictions as a function of time. To do this, we call our predict_fn function with finite times t (as opposed to using the default value t=None earlier, considered as infinite time). Note that predict can act on both scalar and array values, so we simply invoke the function.
Step12: Training a Neural Network
We will now compare the results of gradient descent GP-inference computed above to the result of training an ensemble of finite width neural networks. We first train a single network drawn from the prior and then we will show how to generalize this to an ensemble. To do this we use JAX's gradient descent optimizer. Optimizers are described by a triple of functions (init_fn, update_fn, get_params). Here, init_fn(params) takes an initial set of parameters and returns an optimizer state that can include extra information (like the velocity in the momentum optimizer). opt_update(step, grads, state) takes a new state and updates it using gradients. Finally, get_params(state) returns the parameters for a given state.
Step13: Next we need to define a loss and a gradient of the loss. We'll use an MSE loss. The function grad is another JAX function that takes a function and returns a new function that computes its gradient.
Step14: Now we want to actually train the network. To do this we just initialize the optimizer state and then update it however many times we want. We'll record the train and test loss after each step.
Step15: Finally, let's plot the loss over the course of training and the function after training compared with our GP inference.
Step16: Training an Ensemble of Neural Networks
The draw above certainly seems consistent with exact inference. However, as discussed above to make a more quantitative comparison we want to train an ensemble of finite-width networks. We could use a for-loop to loop over all the different instantiations that we wanted to evaluate. However, it is more convenient and efficient to use another JAX feature called vmap. vmap takes a function and vectorizes it over a batch dimension. In this case, we're going to add a batch dimension to our training loop so that we train a whole batch of neural networks at once. To do that, let's first wrap our whole training loop in a function. The function will take a random state and train a network based on that random state.
Step17: We can test it to make sure that we get a trained network.
Step18: Now, to train an ensemble we just have to apply vmap to train_network. The resulting function will take a vector of random states and will train one network for each random state in the vector.
Step19: Let's plot the empirical standard deviation in the loss over the course of training as well as for the function after gradient descent compared with the exact inference above.
Step20: We see pretty nice agreement between exact inference of the infinite-width networks and the result of training an ensemble! Note that we do see some deviations in the training loss at the end of training. This is ameliorated by using a wider network.
Playing Around with the Architecture
To demonstrate the ease of specifying more exotic architectures, we can try to reproduce the above results with different choices of architecture. For fun, let's see what happens if we add residual connections.
Step21: Given this new architecture, let's train a new ensemble of models.
Step22: Finally, let's repeat our NTK-GP inference
Step23: Now let's draw the result! | Python Code:
!pip install --upgrade pip
!pip install --upgrade jax[cuda11_cudnn805] -f https://storage.googleapis.com/jax-releases/jax_releases.html
!pip install -q git+https://www.github.com/google/neural-tangents
import jax.numpy as np
from jax import random
from jax.example_libraries import optimizers
from jax import jit, grad, vmap
import functools
import neural_tangents as nt
from neural_tangents import stax
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'svg')
import matplotlib
import seaborn as sns
sns.set(font_scale=1.3)
sns.set_style("darkgrid", {"axes.facecolor": ".95"})
import matplotlib.pyplot as plt
def format_plot(x=None, y=None):
# plt.grid(False)
ax = plt.gca()
if x is not None:
plt.xlabel(x, fontsize=20)
if y is not None:
plt.ylabel(y, fontsize=20)
def finalize_plot(shape=(1, 1)):
plt.gcf().set_size_inches(
shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
plt.tight_layout()
legend = functools.partial(plt.legend, fontsize=10)
def plot_fn(train, test, *fs):
train_xs, train_ys = train
plt.plot(train_xs, train_ys, 'ro', markersize=10, label='train')
if test != None:
test_xs, test_ys = test
plt.plot(test_xs, test_ys, 'k--', linewidth=3, label='$f(x)$')
for f in fs:
plt.plot(test_xs, f(test_xs), '-', linewidth=3)
plt.xlim([-np.pi, np.pi])
plt.ylim([-1.5, 1.5])
format_plot('$x$', '$f$')
def loss_fn(predict_fn, ys, t, xs=None):
mean, cov = predict_fn(t=t, get='ntk', x_test=xs, compute_cov=True)
mean = np.reshape(mean, mean.shape[:1] + (-1,))
var = np.diagonal(cov, axis1=1, axis2=2)
ys = np.reshape(ys, (1, -1))
mean_predictions = 0.5 * np.mean(ys ** 2 - 2 * mean * ys + var + mean ** 2,
axis=1)
return mean_predictions
Explanation: <a href="https://colab.research.google.com/github/google/neural-tangents/blob/main/notebooks/neural_tangents_cookbook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Imports and Utils
End of explanation
key = random.PRNGKey(10)
Explanation: Neural Tangents Cookbook
In this notebook we explore the training of infinitely-wide, Bayesian, neural networks using a library called Neural Tangents. Recent work has shown that such networks are Gaussian Processes with a particular compositional kernel called the NNGP kernel. More recently, it was shown that predictions resulting from these networks following Gradient Descent are Gaussian with a distribution that can be computed in closed form using the Neural Tangent Kernel. Neural Tangents provides a high level library to compute NNGP and NT kernels for a wide range of neural networks. See the paper for a more detailed description of the library itself.
Our goal will be to train an ensemble of neural networks on a simple synthetic task. We'll then compare the results of this ensemble with the prediction of the NTK theory. Finally, we'll play around with different neural network architectures to see how this affects the resulting kernel. However, Neural Tangents is built on JAX which may be new to you. To get warmed up with JAX, we'll start out by generating some data.
Warm Up: Creating a Dataset
We're going to build a widely used synthetic dataset that's used extensively in Pattern Recognition and Machine Learning. Incidentally, Pattern Recognition and Machine Learning is an outstanding book by Christopher Bishop that was recently released for free.
Our training data is going to be drawn from a process,
$$y = f(x) + \epsilon$$
where $f(x)$ is a deterministic function and $\epsilon\sim\mathcal N(0, \sigma)$ is Gaussian noise with some scale. We're going to choose $f(x) = \sin(x)$ with $x\sim\text{Uniform}(-\pi, \pi)$. Our testing data will be $y = f(x)$ for $x$ linearly spaced in $[-\pi, \pi]$. Feel free to try out different functions and domains!
Since we want to generate our data randomly, we'll need to generate random numbers. Unlike most random number generators that store a global random state, JAX makes the random state explicit (see the JAX documentation for more information). Let's therefore start by making some random state using a seed of 10.
End of explanation
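# A toy illustration (not part of the pipeline below) of JAX's explicit random state:
# each draw consumes a fresh subkey produced by random.split.
demo_key, demo_subkey = random.split(key)
random.normal(demo_subkey, (3,))  # reusing demo_subkey would reproduce the same draw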
train_points = 5
test_points = 50
noise_scale = 1e-1
target_fn = lambda x: np.sin(x)
Explanation: Now let's set up some constants that will define our dataset. In particular, we will use a small training set of 5 points and 50 test points. Finally, we'll define a noise scale and target function.
End of explanation
key, x_key, y_key = random.split(key, 3)
train_xs = random.uniform(x_key, (train_points, 1), minval=-np.pi, maxval=np.pi)
train_ys = target_fn(train_xs)
train_ys += noise_scale * random.normal(y_key, (train_points, 1))
train = (train_xs, train_ys)
Explanation: Next we generate our training data. We know that we will want to have randomly chosen $x$'s and noise. To generate random numbers in JAX, we have to explicitly evolve the random number state using random.split each time we draw a random number.
Then we'll want to generate the random inputs, apply the target function, and add the random noise.
End of explanation
test_xs = np.linspace(-np.pi, np.pi, test_points)
test_xs = np.reshape(test_xs, (test_points, 1))
test_ys = target_fn(test_xs)
test = (test_xs, test_ys)
Explanation: Finally, we want to generate our test data. The $x$'s will be linearly spaced with no noise. Note, we want the inputs to have shape (N, 1) instead of (N,) since we treat this as a model with one feature.
End of explanation
plot_fn(train, test)
legend(loc='upper left')
finalize_plot((0.85, 0.6))
Explanation: Having generated our data, let's plot it.
End of explanation
init_fn, apply_fn, kernel_fn = stax.serial(
stax.Dense(512, W_std=1.5, b_std=0.05), stax.Erf(),
stax.Dense(512, W_std=1.5, b_std=0.05), stax.Erf(),
stax.Dense(1, W_std=1.5, b_std=0.05)
)
apply_fn = jit(apply_fn)
kernel_fn = jit(kernel_fn, static_argnums=(2,))
Explanation: What a good looking dataset! Let's train a neural network on it.
Defining a Neural Network
The first thing we need to do is define a neural network. We'll start out with a simple fully-connected network using Erf nonlinearities. We describe our network using our neural network library that shares syntax and code with JAX's own library called stax. Layers in jax.example_libraries.stax are pairs of functions (init_fn, apply_fn) where init_fn(key, input_shape) draws parameters randomly and apply_fn(params, xs) computes outputs of the function for specific inputs. These layers can be composed using serial and parallel combinators to produce new (init_fn, apply_fn) pairs.
Similarly, layers in neural_tangents.stax are triplets of functions (init_fn, apply_fn, kernel_fn) where the first two functions are the same as their stax equivalent but the third function, kernel_fn, computes infinite-width GP kernels corresponding to the layer. Again these layers can be composed using serial and parallel to build kernels for complicated architectures. Fully-connected layers in neural_tangents.stax are created using the Dense layer which is defined by,
$$z^{l+1}_i = \frac{\sigma_w}{\sqrt{N_{in}}} \sum_j W_{ij} z^{l}_j + \sigma_b b_i$$
where $W_{ij}, b_i\sim\mathcal N(0,1)$ at initialization and $\sigma_w, \sigma_b$ set the scales of the weights and biases respectively.
End of explanation
prior_draws = []
for _ in range(10):
key, net_key = random.split(key)
_, params = init_fn(net_key, (-1, 1))
prior_draws += [apply_fn(params, test_xs)]
plot_fn(train, test)
for p in prior_draws:
plt.plot(test_xs, p, linewidth=3, color=[1, 0.65, 0.65])
legend(['train', '$f(x)$', 'random draw'], loc='upper left')
finalize_plot((0.85, 0.6))
Explanation: Here the lines apply_fn = jit(apply_fn) and kernel_fn = jit(kernel_fn, static_argnums=(2,)) use a JAX feature that compiles functions so that they are executed as single calls to the GPU.
Next, let's take several random draws of the parameters of the network and plot what the functions look like.
End of explanation
kernel = kernel_fn(test_xs, test_xs, 'nngp')
std_dev = np.sqrt(np.diag(kernel))
plot_fn(train, test)
plt.fill_between(
np.reshape(test_xs, (-1,)), 2 * std_dev, -2 * std_dev, alpha=0.4)
for p in prior_draws:
plt.plot(test_xs, p, linewidth=3, color=[1, 0.65, 0.65])
finalize_plot((0.85, 0.6))
Explanation: Next we can look at the exact prior over functions in the infinite-width limit using the kernel_fn. The kernel function has the signature kernel = kernel_fn(x_1, x_2) which computes the kernel between two sets of inputs x_1 and x_2. The kernel_fn can compute two different kernels: the NNGP kernel which describes the Bayesian infinite network and the NT kernel which describes how this network evolves under gradient descent. We would like to visualize the standard deviation of the infinite-width function distribution at each $x$. This is given by the diagonal of the NNGP kernel. We compute this now and then plot it compared with the draws above.
End of explanation
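# For comparison (a sketch), the NT kernel for the same inputs comes from the same
# kernel_fn by requesting 'ntk' instead of 'nngp'.
ntk_prior_kernel = kernel_fn(test_xs, test_xs, 'ntk')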
predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, train_xs,
train_ys, diag_reg=1e-4)
nngp_mean, nngp_covariance = predict_fn(x_test=test_xs, get='nngp',
compute_cov=True)
nngp_mean = np.reshape(nngp_mean, (-1,))
nngp_std = np.sqrt(np.diag(nngp_covariance))
plot_fn(train, test)
plt.plot(test_xs, nngp_mean, 'r-', linewidth=3)
plt.fill_between(
np.reshape(test_xs, (-1)),
nngp_mean - 2 * nngp_std,
nngp_mean + 2 * nngp_std,
color='red', alpha=0.2)
plt.xlim([-np.pi, np.pi])
plt.ylim([-1.5, 1.5])
legend(['Train', 'f(x)', 'Bayesian Inference'], loc='upper left')
finalize_plot((0.85, 0.6))
Explanation: Infinite Width Inference
We can use the infinite-width GP defined above to perform exact Bayesian inference using the infinite width network. To do this, we will use the function neural_tangents.predict.gradient_descent_mse_ensemble that performs this inference exactly. predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, train_xs, train_ys); mean, cov = predict_fn(x_test=test_xs, get='ntk', compute_cov=True) computes the mean and covariance of the network evaluated on the test points after training. This predict_fn function includes two different modes: in "NNGP" mode we compute the Bayesian posterior (which is equivalent to gradient descent with all but the last-layer weights frozen), in "NTK" mode we compute the distribution of networks after gradient descent training.
We want to do exact Bayesian inference so we'll start off using the "NNGP" setting. We will compute and plot these predictions now; we will be concerned with the standard deviation of the predictions on test points which will be given by the diagonal of the covariance matrix.
End of explanation
ntk_mean, ntk_covariance = predict_fn(x_test=test_xs, get='ntk',
compute_cov=True)
ntk_mean = np.reshape(ntk_mean, (-1,))
ntk_std = np.sqrt(np.diag(ntk_covariance))
plot_fn(train, test)
plt.plot(test_xs, nngp_mean, 'r-', linewidth=3)
plt.fill_between(
np.reshape(test_xs, (-1)),
nngp_mean - 2 * nngp_std,
nngp_mean + 2 * nngp_std,
color='red', alpha=0.2)
plt.plot(test_xs, ntk_mean, 'b-', linewidth=3)
plt.fill_between(
np.reshape(test_xs, (-1)),
ntk_mean - 2 * ntk_std,
ntk_mean + 2 * ntk_std,
color='blue', alpha=0.2)
plt.xlim([-np.pi, np.pi])
plt.ylim([-1.5, 1.5])
legend(['Train', 'f(x)', 'Bayesian Inference', 'Gradient Descent'],
loc='upper left')
finalize_plot((0.85, 0.6))
Explanation: We see that our posterior exactly fits all of the training points as expected. We also see that there is significant uncertainty in the predictions between the points in the middle.
Next, we would like to compute the result of doing gradient descent on our infinite network for an infinite amount of time. To do this, we will use the "NTK" inference mode. Note that otherwise the call to predict_fn looks identical. We will compare the result of true Bayesian inference with gradient descent.
End of explanation
ts = np.arange(0, 10 ** 3, 10 ** -1)
ntk_train_loss_mean = loss_fn(predict_fn, train_ys, ts)
ntk_test_loss_mean = loss_fn(predict_fn, test_ys, ts, test_xs)
plt.subplot(1, 2, 1)
plt.loglog(ts, ntk_train_loss_mean, linewidth=3)
plt.loglog(ts, ntk_test_loss_mean, linewidth=3)
format_plot('Step', 'Loss')
legend(['Infinite Train', 'Infinite Test'])
plt.subplot(1, 2, 2)
plot_fn(train, None)
plt.plot(test_xs, ntk_mean, 'b-', linewidth=3)
plt.fill_between(
np.reshape(test_xs, (-1)),
ntk_mean - 2 * ntk_std,
ntk_mean + 2 * ntk_std,
color='blue', alpha=0.2)
legend(
['Train', 'Infinite Network'],
loc='upper left')
finalize_plot((1.5, 0.6))
Explanation: We see that while the results of gradient descent and Bayesian inference are similar, they are not identical.
Not only can we do inference at infinite times, but we can also perform finite-time inference. We will use this to predict the mean of the train and test losses over the course of training. To compute the mean MSE loss, we need to access the mean and variance of our network's predictions as a function of time. To do this, we call our predict_fn function with finite times t (as opposed to using the default value t=None earlier, considered as infinite time). Note that predict can act on both scalar and array values, so we simply invoke the function.
End of explanation
learning_rate = 0.1
training_steps = 10000
opt_init, opt_update, get_params = optimizers.sgd(learning_rate)
opt_update = jit(opt_update)
Explanation: Training a Neural Network
We will now compare the results of gradient descent GP-inference computed above to the result of training an ensemble of finite width neural networks. We first train a single network drawn from the prior and then we will show how to generalize this to an ensemble. To do this we use JAX's gradient descent optimizer. Optimizers are described by a triple of functions (init_fn, update_fn, get_params). Here, init_fn(params) takes an initial set of parameters and returns an optimizer state that can include extra information (like the velocity in the momentum optimizer). opt_update(step, grads, state) takes a new state and updates it using gradients. Finally, get_params(state) returns the parameters for a given state.
End of explanation
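# A toy illustration (not used below) of the (init_fn, update_fn, get_params) pattern:
# one SGD step on a single parameter with a hand-supplied gradient.
toy_state = opt_init(np.array([1.0]))
toy_state = opt_update(0, np.array([0.5]), toy_state)  # (step index, gradient, state)
get_params(toy_state)  # equals 1.0 - learning_rate * 0.5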
loss = jit(lambda params, x, y: 0.5 * np.mean((apply_fn(params, x) - y) ** 2))
grad_loss = jit(lambda state, x, y: grad(loss)(get_params(state), x, y))
Explanation: Next we need to define a loss and a gradient of the loss. We'll use an MSE loss. The function grad is another JAX function that takes a function and returns a new function that computes its gradient.
End of explanation
train_losses = []
test_losses = []
opt_state = opt_init(params)
for i in range(training_steps):
opt_state = opt_update(i, grad_loss(opt_state, *train), opt_state)
train_losses += [loss(get_params(opt_state), *train)]
test_losses += [loss(get_params(opt_state), *test)]
Explanation: Now we want to actually train the network. To do this we just initialize the optimizer state and then update it however many times we want. We'll record the train and test loss after each step.
End of explanation
plt.subplot(1, 2, 1)
plt.loglog(ts, ntk_train_loss_mean, linewidth=3)
plt.loglog(ts, ntk_test_loss_mean, linewidth=3)
plt.loglog(ts, train_losses, 'k-', linewidth=2)
plt.loglog(ts, test_losses, 'k-', linewidth=2)
format_plot('Step', 'Loss')
legend(['Infinite Train', 'Infinite Test', 'Finite'])
plt.subplot(1, 2, 2)
plot_fn(train, None)
plt.plot(test_xs, ntk_mean, 'b-', linewidth=3)
plt.fill_between(
np.reshape(test_xs, (-1)),
ntk_mean - 2 * ntk_std,
ntk_mean + 2 * ntk_std,
color='blue', alpha=0.2)
plt.plot(test_xs, apply_fn(get_params(opt_state), test_xs), 'k-', linewidth=2)
legend(
['Train', 'Infinite Network', 'Finite Network'],
loc='upper left')
finalize_plot((1.5, 0.6))
Explanation: Finally, let's plot the loss over the course of training and the function after training compared with our GP inference.
End of explanation
def train_network(key):
train_losses = []
test_losses = []
_, params = init_fn(key, (-1, 1))
opt_state = opt_init(params)
for i in range(training_steps):
train_losses += [np.reshape(loss(get_params(opt_state), *train), (1,))]
test_losses += [np.reshape(loss(get_params(opt_state), *test), (1,))]
opt_state = opt_update(i, grad_loss(opt_state, *train), opt_state)
train_losses = np.concatenate(train_losses)
test_losses = np.concatenate(test_losses)
return get_params(opt_state), train_losses, test_losses
Explanation: Training an Ensemble of Neural Networks
The draw above certainly seems consistent with exact inference. However, as discussed above to make a more quantitative comparison we want to train an ensemble of finite-width networks. We could use a for-loop to loop over all the different instantiations that we wanted to evaluate. However, it is more convenient and efficient to use another JAX feature called vmap. vmap takes a function and vectorizes it over a batch dimension. In this case, we're going to add a batch dimension to our training loop so that we train a whole batch of neural networks at once. To do that, let's first wrap our whole training loop in a function. The function will take a random state and train a network based on that random state.
End of explanation
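# A toy illustration of vmap before applying it to train_network: the scalar function
# below is mapped over the leading batch dimension of its input.
vmap(lambda x: x ** 2)(np.arange(3.0))  # -> [0., 1., 4.]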
#@test {"skip": true}
params, train_loss, test_loss = train_network(key)
#@test {"skip": true}
plt.subplot(1, 2, 1)
plt.loglog(ts, ntk_train_loss_mean, linewidth=3)
plt.loglog(ts, ntk_test_loss_mean, linewidth=3)
plt.loglog(ts, train_loss, 'k-', linewidth=2)
plt.loglog(ts, test_loss, 'k-', linewidth=2)
format_plot('Step', 'Loss')
legend(['Train', 'Test', 'Finite'])
plt.subplot(1, 2, 2)
plot_fn(train, None)
plt.plot(test_xs, ntk_mean, 'b-', linewidth=3)
plt.fill_between(
np.reshape(test_xs, (-1)),
ntk_mean - 2 * ntk_std,
ntk_mean + 2 * ntk_std,
color='blue', alpha=0.2)
plt.plot(test_xs, apply_fn(params, test_xs), 'k-', linewidth=2)
legend(['Train', 'Infinite Network', 'Finite Network'], loc='upper left')
finalize_plot((1.5, 0.6))
Explanation: We can test it to make sure that we get a trained network.
End of explanation
#@test {"skip": true}
ensemble_size = 100
ensemble_key = random.split(key, ensemble_size)
params, train_loss, test_loss = vmap(train_network)(ensemble_key)
Explanation: Now, to train an ensemble we just have to apply vmap to train_network. The resulting function will take a vector of random states and will train one network for each random state in the vector.
End of explanation
#@test {"skip": true}
plt.subplot(1, 2, 1)
mean_train_loss = np.mean(train_loss, axis=0)
mean_test_loss = np.mean(test_loss, axis=0)
plt.loglog(ts, ntk_train_loss_mean, linewidth=3)
plt.loglog(ts, ntk_test_loss_mean, linewidth=3)
plt.loglog(ts, mean_train_loss, 'k-', linewidth=2)
plt.loglog(ts, mean_test_loss, 'k-', linewidth=2)
plt.xlim([10 ** 0, 10 ** 3])
plt.xscale('log')
plt.yscale('log')
format_plot('Step', 'Loss')
legend(['Infinite Train', 'Infinite Test', 'Finite Ensemble'])
plt.subplot(1, 2, 2)
plot_fn(train, None)
plt.plot(test_xs, ntk_mean, 'b-', linewidth=3)
plt.fill_between(
np.reshape(test_xs, (-1)),
ntk_mean - 2 * ntk_std,
ntk_mean + 2 * ntk_std,
color='blue', alpha=0.2)
ensemble_fx = vmap(apply_fn, (0, None))(params, test_xs)
mean_fx = np.reshape(np.mean(ensemble_fx, axis=0), (-1,))
std_fx = np.reshape(np.std(ensemble_fx, axis=0), (-1,))
plt.plot(test_xs, mean_fx - 2 * std_fx, 'k--', label='_nolegend_')
plt.plot(test_xs, mean_fx + 2 * std_fx, 'k--', label='_nolegend_')
plt.plot(test_xs, mean_fx, linewidth=2, color='black')
legend(['Train', 'Infinite Network', 'Finite Ensemble'], loc='upper left')
plt.xlim([-np.pi, np.pi])
plt.ylim([-1.5, 1.5])
format_plot('$x$', '$f$')
finalize_plot((1.5, 0.6))
Explanation: Let's plot the empirical standard deviation in the loss over the course of training as well as for the function after gradient descent compared with the exact inference above.
End of explanation
ResBlock = stax.serial(
stax.FanOut(2),
stax.parallel(
stax.serial(
stax.Erf(),
stax.Dense(512, W_std=1.1, b_std=0),
),
stax.Identity()
),
stax.FanInSum()
)
init_fn, apply_fn, kernel_fn = stax.serial(
stax.Dense(512, W_std=1, b_std=0),
ResBlock, ResBlock, stax.Erf(),
stax.Dense(1, W_std=1.5, b_std=0)
)
apply_fn = jit(apply_fn)
kernel_fn = jit(kernel_fn, static_argnums=(2,))
Explanation: We see pretty nice agreement between exact inference of the infinite-width networks and the result of training an ensemble! Note that we do see some deviations in the training loss at the end of training. This is ameliorated by using a wider network.
Playing Around with the Architecture
To demonstrate the ease of specifying more exotic architectures, we can try to reproduce the above results with different choices of architecture. For fun, let's see what happens if we add residual connections.
End of explanation
#@test {"skip": true}
ensemble_size = 100
learning_rate = 0.1
ts = np.arange(0, 10 ** 3, learning_rate)
opt_init, opt_update, get_params = optimizers.sgd(learning_rate)
opt_update = jit(opt_update)
key, = random.split(key, 1)
ensemble_key = random.split(key, ensemble_size)
params, train_loss, test_loss = vmap(train_network)(ensemble_key)
Explanation: Given this new architecture, let's train a new ensemble of models.
End of explanation
#@test {"skip": true}
predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, train_xs,
train_ys, diag_reg=1e-4)
ntk_mean, ntk_var = predict_fn(x_test=test_xs, get='ntk', compute_cov=True)
ntk_mean = np.reshape(ntk_mean, (-1,))
ntk_std = np.sqrt(np.diag(ntk_var))
ntk_train_loss_mean = loss_fn(predict_fn, train_ys, ts)
ntk_test_loss_mean = loss_fn(predict_fn, test_ys, ts, test_xs)
Explanation: Finally, let's repeat our NTK-GP inference
End of explanation
#@test {"skip": true}
plt.subplot(1, 2, 1)
mean_train_loss = np.mean(train_loss, axis=0)
mean_test_loss = np.mean(test_loss, axis=0)
plt.loglog(ts, ntk_train_loss_mean, linewidth=3)
plt.loglog(ts, ntk_test_loss_mean, linewidth=3)
plt.loglog(ts, mean_train_loss, 'k-', linewidth=2)
plt.loglog(ts, mean_test_loss, 'k-', linewidth=2)
plt.xlim([10 ** 0, 10 ** 3])
plt.xscale('log')
plt.yscale('log')
format_plot('Step', 'Loss')
legend(['Infinite Train', 'Infinite Test', 'Finite Ensemble'])
plt.subplot(1, 2, 2)
plot_fn(train, None)
plt.plot(test_xs, ntk_mean, 'b-', linewidth=3)
plt.fill_between(
np.reshape(test_xs, (-1)),
ntk_mean - 2 * ntk_std,
ntk_mean + 2 * ntk_std,
color='blue', alpha=0.2)
ensemble_fx = vmap(apply_fn, (0, None))(params, test_xs)
mean_fx = np.reshape(np.mean(ensemble_fx, axis=0), (-1,))
std_fx = np.reshape(np.std(ensemble_fx, axis=0), (-1,))
plt.plot(test_xs, mean_fx - 2 * std_fx, 'k--', label='_nolegend_')
plt.plot(test_xs, mean_fx + 2 * std_fx, 'k--', label='_nolegend_')
plt.plot(test_xs, mean_fx, linewidth=2, color='black')
legend(['Train', 'Infinite Network', 'Finite Ensemble'], loc='upper left')
plt.xlim([-np.pi, np.pi])
plt.ylim([-1.5, 1.5])
format_plot('$x$', '$f$')
finalize_plot((1.5, 0.6))
Explanation: Now let's draw the result!
End of explanation |
533 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression
Copyright 2015 Allen Downey
License
Step1: Let's load up the NSFG data again.
Step2: And select live, full-term births.
Step3: And drop rows with missing data (just for the variables we want).
Step4: Check a few rows
Step5: And summarize a few variables.
Step6: Here's a scatterplot of age and birthweight, with parameters tuned to avoid saturation.
Step7: Mean of mother's age
Step8: Mean and standard deviation of birthweight
Step9: And the coefficient of correlation
Step10: The Pandas corr function gets the same result
Step11: To see the relationship more clearly, we can group mother's age into 3-year bins and plot percentiles of birth weight for each bin.
Step13: The first and last points are not very reliable, because they represent fewer data points.
It looks like there is a generally positive relationship between birth weight and mother's age, possibly leveling or dropping for older mothers.
We can get more information about the mothers by reading the respondents file, which contains one row per respondent.
Step14: There are 7643 respondents and 3087 variables about each.
Step15: If we use the caseid variable as the index, we can look up respondents efficiently by id.
Here's what the first few rows look like
Step16: Now we can join the tables, using the caseid from each pregnancy record to find the corresponding respondent and (abstractly) copy over the additional variables.
So the joined table has one row for each pregnancy and all the columns from both tables.
Step17: The encoding for screentime is a colon-separated timestamp.
Step18: If we convert to a datetime object, we avoid some processing problems later.
Step19: To estimate the effect of mother's age on birthweight, we can use a simple least squares fit.
Step20: The slope is almost 3 ounces per decade.
We can do the same thing using Ordinary Least Squares from statsmodels
Step21: The results object contains the parameters (and all the other info in the table)
Step22: And the results are consistent with my implementation
Step23: We can use a boolean variable as a predictor
Step24: First babies are lighter by about 1.5 ounces.
Step25: And we can make a model with multiple predictors.
Step26: If we control for mother's age, the difference in weight for first babies is cut to about 0.5 ounces (and no longer statistically significant).
Step27: The relationship with age might be non-linear. Adding a quadratic term helps a little, although note that the $R^2$ values for all of these models are very small.
Step28: Now we can combine the quadratic age model with isfirst
Step29: Now the effect is cut to less than a third of an ounce, and very plausibly due to chance.
Step30: Here's the best model I found, combining all variables that seemed plausibly predictive.
Step31: All predictors are statistically significant, so the effects could be legit, but the $R^2$ value is still very small
Step32: The estimated parameter is 0.0016, which is small and not statistically significant. So the apparent relationship might be due to chance.
But for the sake of the example, I'll take it at face value and work out the effect on the prediction.
A parameter in a logistic regression is a log odds ratio, so we can compute the odds ratio for a difference of 10 years in mother's age
Step33: And we can use the odds ratio to update a prior probability. A mother at the mean age has a 51% chance of having a boy.
In this case, a mother who is 10 years older has a 51.4% chance.
Step34: I searched for other factors that might be predictive. The most likely candidates turn out not to be statistically significant.
Step35: Again, taking these parameters at face values, we can use the model to make predictions.
The baseline strategy is to always guess boy, which yields accuracy of 50.8%
Step36: results.predict uses the model to generate predictions for the data.
Adding up the correct positive and negative predictions, we get accuracy 51.3%
Step37: And we can use the model to generate a prediction for the office pool.
Suppose your hypothetical coworker is 39 years old and white, her husband is 30, and they are expecting their first child. | Python Code:
from __future__ import print_function, division
import numpy as np
import pandas as pd
import first
import thinkstats2
import thinkplot
%matplotlib inline
Explanation: Regression
Copyright 2015 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
live, firsts, others = first.MakeFrames()
live.shape
Explanation: Let's load up the NSFG data again.
End of explanation
live = live[live.prglngth>=37]
live.shape
Explanation: And select live, full-term births.
End of explanation
live = live.dropna(subset=['agepreg', 'totalwgt_lb'])
live.shape
Explanation: And drop rows with missing data (just for the variables we want).
End of explanation
live.head()
Explanation: Check a few rows:
End of explanation
live[['agepreg', 'totalwgt_lb']].describe()
Explanation: And summarize a few variables.
End of explanation
ages = live.agepreg
weights = live.totalwgt_lb
thinkplot.Scatter(ages, weights, alpha=0.1, s=15)
thinkplot.Config(xlabel='age (years)',
ylabel='weight (lbs)',
xlim=[10, 45],
ylim=[0, 15],
legend=False)
Explanation: Here's a scatterplot of age and birthweight, with parameters tuned to avoid saturation.
End of explanation
live['agepreg'].mean()
Explanation: Mean of mother's age:
End of explanation
live['totalwgt_lb'].mean(), live['totalwgt_lb'].std()
Explanation: Mean and standard deviation of birthweight:
End of explanation
thinkstats2.Corr(ages, weights)
Explanation: And the coefficient of correlation:
End of explanation
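# Cross-check (a sketch): NumPy's corrcoef returns the same Pearson correlation.
np.corrcoef(ages, weights)[0, 1]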
live['totalwgt_lb'].corr(live['agepreg'])
Explanation: The Pandas corr function gets the same result:
End of explanation
bins = np.arange(10, 48, 3)
indices = np.digitize(live.agepreg, bins)
groups = live.groupby(indices)
ages = [group.agepreg.mean() for i, group in groups][1:-1]
cdfs = [thinkstats2.Cdf(group.totalwgt_lb) for i, group in groups][1:-1]
thinkplot.PrePlot(5)
for percent in [90, 75, 50, 25, 10]:
weights = [cdf.Percentile(percent) for cdf in cdfs]
label = '%dth' % percent
thinkplot.Plot(ages, weights, label=label)
thinkplot.Config(xlabel="mother's age (years)",
ylabel='birth weight (lbs)',
xlim=[14, 50],
legend=True)
Explanation: To see the relationship more clearly, we can group mother's age into 3-year bins and plot percentiles of birth weight for each bin.
End of explanation
def ReadFemResp(dct_file='2002FemResp.dct',
dat_file='2002FemResp.dat.gz',
nrows=None):
"""Reads the NSFG respondent data.
dct_file: string file name
dat_file: string file name
returns: DataFrame"""
dct = thinkstats2.ReadStataDct(dct_file)
df = dct.ReadFixedWidth(dat_file, compression='gzip', nrows=nrows)
return df
Explanation: The first and last points are not very reliable, because they represent fewer data points.
It looks like there is a generally positive relationship between birth weight and mother's age, possibly leveling or dropping for older mothers.
We can get more information about the mothers by reading the respondents file, which contains one row per respondent.
End of explanation
resp = ReadFemResp()
resp.shape
Explanation: There are 7643 respondents and 3087 variables about each.
End of explanation
resp.index = resp.caseid
resp.head()
Explanation: If we use the caseid variable as the index, we can look up respondents efficiently by id.
Here's what the first few rows look like:
End of explanation
join = live.join(resp, on='caseid', rsuffix='_r')
join.shape
Explanation: Now we can join the tables, using the caseid from each pregnancy record to find the corresponding respondent and (abstractly) copy over the additional variables.
So the joined table has one row for each pregnancy and all the columns from both tables.
End of explanation
join.screentime.head()
Explanation: The encoding for screentime is a colon-separated timestamp.
End of explanation
join.screentime = pd.to_datetime(join.screentime)
join.screentime.head()
Explanation: If we convert to a datetime object, we avoid some processing problems later.
End of explanation
ages = join.agepreg
weights = join.totalwgt_lb
inter, slope = thinkstats2.LeastSquares(ages, weights)
inter, slope, slope*16*10
Explanation: To estimate the effect of mother's age on birthweight, we can use a simple least squares fit.
End of explanation
import statsmodels.formula.api as smf
formula = ('totalwgt_lb ~ agepreg')
results = smf.ols(formula, data=join).fit()
results.summary()
Explanation: The slope is almost 3 ounces per decade.
We can do the same thing using Ordinary Least Squares from statsmodels:
End of explanation
inter, slope = results.params
inter, slope
Explanation: The results object contains the parameters (and all the other info in the table):
End of explanation
slope * 16 * 10 # slope in ounces per decade
Explanation: And the results are consistent with my implementation:
End of explanation
join['isfirst'] = (join.birthord == 1)
formula = 'totalwgt_lb ~ isfirst'
results = smf.ols(formula, data=join).fit()
results.summary()
Explanation: We can use a boolean variable as a predictor:
End of explanation
results.params['isfirst[T.True]'] * 16
Explanation: First babies are lighter by about 1.5 ounces.
End of explanation
formula = 'totalwgt_lb ~ agepreg + isfirst'
results = smf.ols(formula, data=join).fit()
results.summary()
Explanation: And we can make a model with multiple predictors.
End of explanation
results.params['isfirst[T.True]'] * 16
Explanation: If we control for mother's age, the difference in weight for first babies is cut to about 0.5 ounces (and no longer statistically significant).
End of explanation
join['age2'] = join.agepreg**2
formula = 'totalwgt_lb ~ agepreg + age2'
results = smf.ols(formula, data=join).fit()
results.summary()
Explanation: The relationship with age might be non-linear. Adding a quadratic term helps a little, although note that the $R^2$ values for all of these models are very small.
End of explanation
formula = 'totalwgt_lb ~ agepreg + age2 + isfirst'
results = smf.ols(formula, data=join).fit()
results.summary()
Explanation: Now we can combine the quadratic age model with isfirst
End of explanation
results.params['isfirst[T.True]'] * 16
Explanation: Now the effect is cut to less than a third of an ounce, and very plausibly due to chance.
End of explanation
formula = ('totalwgt_lb ~ agepreg + age2 + C(race) + '
'nbrnaliv>1 + paydu==1 + totincr')
results = smf.ols(formula, data=join).fit()
results.summary()
Explanation: Here's the best model I found, combining all variables that seemed plausibly predictive.
End of explanation
live['isboy'] = (live.babysex==1).astype(int)
model = smf.logit('isboy ~ agepreg', data=live)
results = model.fit()
results.summary()
Explanation: All predictors are statistically significant, so the effects could be legit, but the $R^2$ value is still very small: this model doesn't provide much help for the office pool.
Logistic regression
Let's say we want to predict the sex of a baby based on information about the mother.
I'll start by creating a binary dependent variable, isboy, and checking for dependence on mother's age:
End of explanation
log_odds_ratio = results.params['agepreg'] * 10
odds_ratio = np.exp(log_odds_ratio)
odds_ratio
Explanation: The estimated parameter is 0.0016, which is small and not statistically significant. So the apparent relationship might be due to chance.
But for the sake of the example, I'll take it at face value and work out the effect on the prediction.
A parameter in a logistic regression is a log odds ratio, so we can compute the odds ratio for a difference of 10 years in mother's age:
End of explanation
p = 0.51
prior_odds = p / (1-p)
post_odds = prior_odds * odds_ratio
p = post_odds / (post_odds + 1)
p
Explanation: And we can use the odds ratio to update a prior probability. A mother at the mean age has a 51% chance of having a boy.
In this case, a mother who is 10 years older has a 51.4% chance.
End of explanation
formula = 'isboy ~ agepreg + hpagelb + birthord + C(race)'
model = smf.logit(formula, data=live)
results = model.fit()
results.summary()
Explanation: I searched for other factors that might be predictive. The most likely candidates turn out not to be statistically significant.
End of explanation
exog = pd.DataFrame(model.exog, columns=model.exog_names)
endog = pd.DataFrame(model.endog, columns=[model.endog_names])
actual = endog['isboy']
baseline = actual.mean()
baseline
Explanation: Again, taking these parameters at face values, we can use the model to make predictions.
The baseline strategy is to always guess boy, which yields accuracy of 50.8%
End of explanation
predict = (results.predict() >= 0.5)
true_pos = predict * actual
true_neg = (1 - predict) * (1 - actual)
acc = (sum(true_pos) + sum(true_neg)) / len(actual)
acc
Explanation: results.predict uses the model to generate predictions for the data.
Adding up the correct positive and negative predictions, we get accuracy 51.3%
End of explanation
columns = ['agepreg', 'hpagelb', 'birthord', 'race']
new = pd.DataFrame([[39, 30, 1, 2]], columns=columns)
y = results.predict(new)
y
Explanation: And we can use the model to generate a prediction for the office pool.
Suppose your hypothetical coworker is 39 years old and white, her husband is 30, and they are expecting their first child.
End of explanation |
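The value returned by results.predict is a probability, so a class guess for the pool can be derived from it; a minimal sketch (the 0.5 cutoff and the label strings are illustrative choices, not part of the original analysis):
# Convert the predicted probability into a boy/girl guess
prediction = 'boy' if y[0] >= 0.5 else 'girl'
prediction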
534 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Keys
Step2: Example
In the following example, we create a pipeline with a PCollection of key-value pairs.
Then, we apply Keys to extract the keys and discard the values. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
Explanation: Open in Colab: https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/documentation/transforms/python/elementwise/keys-py.ipynb
View the docs: https://beam.apache.org/documentation/transforms/python/elementwise/keys
End of explanation
!pip install --quiet -U apache-beam
Explanation: Keys
Pydoc: https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.util.html#apache_beam.transforms.util.Keys
Takes a collection of key-value pairs and returns the key of each element.
Setup
To run a code cell, you can click the Run cell button at the top left of the cell,
or select it and press Shift+Enter.
Try modifying a code cell and re-running it to see what happens.
To learn more about Colab, see
Welcome to Colaboratory!.
First, let's install the apache-beam module.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
icons = (
pipeline
| 'Garden plants' >> beam.Create([
('🍓', 'Strawberry'),
('🥕', 'Carrot'),
('🍆', 'Eggplant'),
('🍅', 'Tomato'),
('🥔', 'Potato'),
])
| 'Keys' >> beam.Keys()
| beam.Map(print))
Explanation: Example
In the following example, we create a pipeline with a PCollection of key-value pairs.
Then, we apply Keys to extract the keys and discard the values.
End of explanation |
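For comparison, a pipeline that keeps the values and discards the keys looks almost identical; a minimal sketch using beam.Values with the same garden-plants input as above:
import apache_beam as beam

with beam.Pipeline() as pipeline:
    names = (
        pipeline
        | 'Garden plants' >> beam.Create([
            ('🍓', 'Strawberry'),
            ('🥕', 'Carrot'),
            ('🍆', 'Eggplant'),
        ])
        # Keep only the value of each key-value pair
        | 'Values' >> beam.Values()
        | beam.Map(print))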
535 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Poincare Map
This example shows how to calculate a simple Poincare Map with REBOUND. A Poincare Map (or sometimes called Poincare Section) can be helpful for understanding dynamical systems.
Step1: We first create the initial conditions for our map. The most interesting Poincare maps exist near resonance, so we have to find a system near a resonance. The easiest way to get planets into resonance is migration. So that's what we'll do. Initially we set up a simulation in which the planets are placed just outside the 2:1 mean motion resonance.
Step2: We then define a simple migration force that will act on the outer planet. We implement it in python. This is relatively slow, but we only need to migrate the planet for a short time.
Step3: Next, we link the additional migration forces to our REBOUND simulation and get the pointer to the particle array.
Step4: Then, we just integrate the system for 3000 time units, about 500 years in units where $G=1$.
Step5: Then we save the simulation to a binary file. We'll be reusing it a lot later to create the initial conditions and it is faster to load it from file than to migrate the planets into resonance each time.
Step6: To create the poincare map, we first define which hyper surface we want to look at. Here, we choose the pericenter of the outer planet.
Step7: We will also need a helper function that ensures our resonant angle is in the range $[-\pi:\pi]$.
Step8: The following function generates the Poincare Map for one set of initial conditions.
We first load the resonant system from the binary file we created earlier.
We then randomly perturb the velocity of one of the particles. If we perturb the velocity enough, the planets will not be in resonance anymore.
We also initialize shadow particles to calculate the MEGNO, a fast chaos indicator.
Step9: For this example we'll run 10 initial conditions. Some of them will be in resonance, some other won't be. We run them in parallel using the InterruptiblePool that comes with REBOUND.
Step10: Now we can finally plot the Poincare Map. We color the points by the MEGNO value of the particular simulation. A value close to 2 corresponds to quasi-periodic orbits, a large value indicate chaotic motion. | Python Code:
import rebound
import numpy as np
Explanation: Poincare Map
This example shows how to calculate a simple Poincare Map with REBOUND. A Poincare Map (or sometimes called Poincare Section) can be helpful for understanding dynamical systems.
End of explanation
sim = rebound.Simulation()
sim.add(m=1.)
sim.add(m=1e-3,a=1,e=0.001)
sim.add(m=0.,a=1.65)
sim.move_to_com()
Explanation: We first create the initial conditions for our map. The most interesting Poincare maps exist near resonance, so we have to find a system near a resonance. The easiest way to get planets into resonance is migration. So that's what we'll do. Initially we set up a simulation in which the planets are placed just outside the 2:1 mean motion resonance.
End of explanation
def migrationForce(reb_sim):
tau = 40000.
ps[2].ax -= ps[2].vx/tau
ps[2].ay -= ps[2].vy/tau
ps[2].az -= ps[2].vz/tau
Explanation: We then define a simple migration force that will act on the outer planet. We implement it in python. This is relatively slow, but we only need to migrate the planet for a short time.
End of explanation
sim.additional_forces = migrationForce
ps = sim.particles
Explanation: Next, we link the additional migration forces to our REBOUND simulation and get the pointer to the particle array.
End of explanation
sim.integrate(3000.)
Explanation: Then, we just integrate the system for 3000 time units, about 500 years in units where $G=1$.
End of explanation
sim.save("resonant_system.bin")
Explanation: Then we save the simulation to a binary file. We'll be reusing it a lot later to create the initial conditions and it is faster to load it from file than to migrate the planets into resonance each time.
End of explanation
def hyper(sim):
ps = sim.particles
dx = ps[2].x -ps[0].x
dy = ps[2].y -ps[0].y
dvx = ps[2].vx-ps[0].vx
dvy = ps[2].vy-ps[0].vy
return dx*dvx + dy*dvy
Explanation: To create the poincare map, we first define which hyper surface we want to look at. Here, we choose the pericenter of the outer planet.
End of explanation
def mod2pi(x):
if x>np.pi:
return mod2pi(x-2.*np.pi)
if x<-np.pi:
return mod2pi(x+2.*np.pi)
return x
Explanation: We will also need a helper function that ensures our resonant angle is in the range $[-\pi:\pi]$.
End of explanation
def runone(args):
i = args # integer numbering the run
N_points_max = 2000 # maximum number of point in our Poincare Section
N_points = 0
poincare_map = np.zeros((N_points_max,2))
# setting up simulation from binary file
sim = rebound.Simulation.from_file("resonant_system.bin")
vx = 0.97+0.06*(float(i)/float(Nsim))
sim.particles[2].vx *= vx
sim.t = 0. # reset time to 0
sim.init_megno(1e-16) # add variational (shadow) particles and calculate MEGNO
# Integrate simulation in small intervals
# After each interval check if we crossed the
# hypersurface. If so, bisect until we hit the
# hypersurface exactly up to a precision
# of dt_epsilon
dt = 0.13
dt_epsilon = 0.001
sign = hyper(sim)
while sim.t<15000. and N_points < N_points_max:
oldt = sim.t
olddt = sim.dt
sim.integrate(oldt+dt)
nsign = hyper(sim)
if sign*nsign < 0.:
# Hyper surface crossed.
leftt = oldt
rightt = sim.t
sim.dt = -olddt
while (rightt-leftt > dt_epsilon):
# Bisection.
midt = (leftt+rightt)/2.
sim.integrate(midt, exact_finish_time=1)
msign = hyper(sim)
if msign*sign > 0.:
leftt = midt
sim.dt = 0.3*olddt
else:
rightt = midt
sim.dt = -0.3*olddt
# Hyper surface found up to precision of dt_epsilon.
# Calculate orbital elements
o = sim.calculate_orbits()
# Check if we cross hypersurface in one direction or the other.
if o[1].r<o[1].a:
# Calculate resonant angle phi and its time derivative
tp = np.pi*2.
phi = mod2pi(o[0].l-2.*o[1].l+o[1].omega+o[1].Omega)
phid = (tp/o[0].P-2.*tp/o[1].P)/(tp/o[0].P)
# Store value for map
poincare_map[N_points] = [phi,phid]
N_points += 1
sim.dt = olddt
sim.integrate(oldt+dt)
sign = nsign
return (poincare_map, sim.calculate_megno(),vx)
Explanation: The following function generates the Poincare Map for one set of initial conditions.
We first load the resonant system from the binary file we created earlier.
We then randomly perturb the velocity of one of the particles. If we perturb the velocity enough, the planets will not be in resonance anymore.
We also initialize shadow particles to calculate the MEGNO, a fast chaos indicator.
End of explanation
Nsim = 10
pool = rebound.InterruptiblePool()
res = pool.map(runone,range(Nsim))
Explanation: For this example we'll run 10 initial conditions. Some of them will be in resonance, some other won't be. We run them in parallel using the InterruptiblePool that comes with REBOUND.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(14,8))
ax = plt.subplot(111)
ax.set_xlabel("$\phi$"); ax.set_ylabel("$\dot{\phi}$")
ax.set_xlim([-np.pi,np.pi]); ax.set_ylim([-0.06,0.1])
cm = plt.cm.get_cmap('brg')
for m, megno, vx in res:
c = np.empty(len(m[:,0])); c.fill(megno)
p = ax.scatter(m[:,0],m[:,1],marker=".",c=c, vmin=1.4, vmax=3, s=25,edgecolor='none', cmap=cm)
cb = plt.colorbar(p, ax=ax)
cb.set_label("MEGNO $<Y>$")
Explanation: Now we can finally plot the Poincare Map. We color the points by the MEGNO value of the particular simulation. A value close to 2 corresponds to quasi-periodic orbits, a large value indicates chaotic motion.
End of explanation |
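As a quick numeric summary of the same runs, the MEGNO values stored in res can be used to count how many initial conditions look chaotic (a sketch; the cutoff of 2.5 is an arbitrary illustrative threshold):
# res contains (poincare_map, megno, vx) tuples, one per initial condition
n_chaotic = sum(1 for m, megno, vx in res if megno > 2.5)
print("chaotic runs:", n_chaotic, "of", len(res))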
536 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification with Support Vector Machines
by Soeren Sonnenburg | Saurabh Mahindre - <a href=\"https
Step1: Liblinear, a library for large-scale linear learning focusing on SVM, is used to do the classification. It supports different solver types.
Step2: We solve ${\bf w}\cdot{\bf x} + \text{b} = 0$ to visualise the separating hyperplane. The methods get_w() and get_bias() are used to get the necessary values.
Step3: The classifier is now applied on a X-Y grid of points to get predictions.
Step4: SVMs using kernels
If the data set is not linearly separable, a non-linear mapping $\Phi
Step5: Just for fun we compute the kernel matrix and display it. There are clusters visible that are smooth for the gaussian and polynomial kernel and block-wise for the linear one. The gaussian one also smoothly decays from some cluster centre while the polynomial one oscillates within the clusters.
Step6: Prediction using kernel based SVM
Now we train an SVM with a Gaussian Kernel. We use LibSVM but we could use any of the other SVM from Shogun. They all utilize the same kernel framework and so are drop-in replacements.
Step7: We could now check a number of properties like what the value of the objective function returned by the particular SVM learning algorithm or the explictly computed primal and dual objective function is
Step8: and based on the objectives we can compute the duality gap (have a look at reference [2]), a measure of convergence quality of the svm training algorithm . In theory it is 0 at the optimum and in reality at least close to 0.
Step9: Let's now apply on the X-Y grid data and plot the results.
Step10: Probabilistic Outputs
Calibrated probabilities can be generated in addition to class predictions using scores_to_probabilities() method of BinaryLabels, which implements the method described in [3]. This should only be used in conjunction with SVM. A parameteric form of a sigmoid function $$\frac{1}{{1+}exp(af(x) + b)}$$ is used to fit the outputs. Here $f(x)$ is the signed distance of a sample from the hyperplane, $a$ and $b$ are parameters to the sigmoid. This gives us the posterier probabilities $p(y=1|f(x))$.
Let's try this out on the above example. The familiar "S" shape of the sigmoid should be visible.
Step11: Soft margins and slack variables
If there is no clear classification possible using a hyperplane, we need to classify the data as nicely as possible while incorporating the misclassified samples. To do this a concept of soft margin is used. The method introduces non-negative slack variables, $\xi_i$, which measure the degree of misclassification of the data $x_i$.
$$
y_i(\mathbf{w}\cdot\mathbf{x_i} + b) \ge 1 - \xi_i \quad 1 \le i \le N $$
Introducing a linear penalty function leads to
$$\arg\min_{\mathbf{w},\mathbf{\xi}, b } ({\frac{1}{2} \|\mathbf{w}\|^2 +C \sum_{i=1}^n \xi_i) }$$
This in its dual form is leads to a slightly modified equation $\qquad(2)$.
\begin{eqnarray} \max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j k({\bf x_i}, {\bf x_j})\ \mbox{s.t.} && 0\leq\alpha_i\leq C\ && \sum_{i=1}^{N} \alpha_i y_i=0 \ \end{eqnarray}
The result is that soft-margin SVM could choose decision boundary that has non-zero training error even if dataset is linearly separable but is less likely to overfit.
Here's an example using LibSVM on the above used data set. Highlighted points show support vectors. This should visually show the impact of C and how the amount of outliers on the wrong side of hyperplane is controlled using it.
Step12: You can see that lower value of C causes classifier to sacrifice linear separability in order to gain stability, in a sense that influence of any single datapoint is now bounded by C. For hard margin SVM, support vectors are the points which are "on the margin". In the picture above, C=1000 is pretty close to hard-margin SVM, and you can see the highlighted points are the ones that will touch the margin. In high dimensions this might lead to overfitting. For soft-margin SVM, with a lower value of C, it's easier to explain them in terms of dual (equation $(2)$) variables. Support vectors are datapoints from training set which are are included in the predictor, ie, the ones with non-zero $\alpha_i$ parameter. This includes margin errors and points on the margin of the hyperplane.
Binary classification using different kernels
Two-dimensional Gaussians are generated as data for this section.
$x_-\sim{\cal N_2}(0,1)-d$
$x_+\sim{\cal N_2}(0,1)+d$
and corresponding positive and negative labels. We create traindata and testdata with num of them being negatively and positively labelled in traindata,trainlab and testdata, testlab. For that we utilize Shogun's Gaussian Mixture Model class (GMM) from which we sample the data points and plot them.
Step13: Now lets plot the contour output on a $-5...+5$ grid for
The Support Vector Machines decision function $\mbox{sign}(f(x))$
The Support Vector Machines raw output $f(x)$
The Original Gaussian Mixture Model Distribution
Step14: And voila! The SVM decision rule reasonably distinguishes the red from the blue points. Despite being optimized for learning the discriminative function maximizing the margin, the SVM output quality wise remotely resembles the original distribution of the gaussian mixture model.
Let us visualise the output using different kernels.
Step15: Kernel Normalizers
Kernel normalizers post-process kernel values by carrying out normalization in feature space. Since kernel based SVMs use a non-linear mapping, in most cases any normalization in input space is lost in feature space. Kernel normalizers are a possible solution to this. Kernel Normalization is not strictly-speaking a form of preprocessing since it is not applied directly on the input vectors but can be seen as a kernel interpretation of the preprocessing. The KernelNormalizer class provides tools for kernel normalization. Some of the kernel normalizers in Shogun
Step16: Multiclass classification
Multiclass classification can be done using SVM by reducing the problem to binary classification. More on multiclass reductions in this notebook. CGMNPSVM class provides a built in one vs rest multiclass classification using GMNPlib. Let us see classification using it on four classes. CGMM class is used to sample the data.
Step17: Let us try the multiclass classification for different kernels. | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
import matplotlib.patches as patches
#To import all shogun classes
import shogun as sg
import numpy as np
#Generate some random data
X = 2 * np.random.randn(10,2)
traindata=np.r_[X + 3, X + 7].T
feats_train=sg.features(traindata)
trainlab=np.concatenate((np.ones(10),-np.ones(10)))
labels=sg.BinaryLabels(trainlab)
# Plot the training data
plt.figure(figsize=(6,6))
plt.gray()
_=plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=50)
plt.title("Training Data")
plt.xlabel('attribute1')
plt.ylabel('attribute2')
p1 = patches.Rectangle((0, 0), 1, 1, fc="k")
p2 = patches.Rectangle((0, 0), 1, 1, fc="w")
plt.legend((p1, p2), ["Class 1", "Class 2"], loc=2)
plt.gray()
Explanation: Classification with Support Vector Machines
by Soeren Sonnenburg | Saurabh Mahindre - <a href=\"https://github.com/Saurabh7\">github.com/Saurabh7</a> as a part of <a href=\"http://www.google-melange.com/gsoc/project/details/google/gsoc2014/saurabh7/5750085036015616\">Google Summer of Code 2014 project</a> mentored by - Heiko Strathmann - <a href=\"https://github.com/karlnapf\">github.com/karlnapf</a> - <a href=\"http://herrstrathmann.de/\">herrstrathmann.de</a>
This notebook illustrates how to train a <a href="http://en.wikipedia.org/wiki/Support_vector_machine">Support Vector Machine</a> (SVM) <a href="http://en.wikipedia.org/wiki/Statistical_classification">classifier</a> using Shogun. The <a href="http://www.shogun-toolbox.org/doc/en/3.0.0/classshogun_1_1CLibSVM.html">CLibSVM</a> class of Shogun is used to do binary classification. Multiclass classification is also demonstrated using CGMNPSVM.
Introduction
Linear Support Vector Machines
Prediction using Linear SVM
SVMs using kernels
Kernels in Shogun
Prediction using kernel based SVM
Probabilistic Outputs using SVM
Soft margins and slack variables
Binary classification using different kernels
Kernel Normalizers
Multiclass classification using SVM
Introduction
Support Vector Machines (SVM's) are a learning method used for binary classification. The basic idea is to find a hyperplane which separates the data into its two classes. However, since example data is often not linearly separable, SVMs operate in a kernel induced feature space, i.e., data is embedded into a higher dimensional space where it is linearly separable.
Linear Support Vector Machines
In a supervised learning problem, we are given a labeled set of input-output pairs $\mathcal{D}=(x_i,y_i)^N_{i=1}\subseteq \mathcal{X} \times \mathcal{Y}$ where $x\in\mathcal{X}$ and $y\in{-1,+1}$. SVM is a binary classifier that tries to separate objects of different classes by finding a (hyper-)plane such that the margin between the two classes is maximized. A hyperplane in $\mathcal{R}^D$ can be parameterized by a vector $\bf{w}$ and a constant $\text b$ expressed in the equation:$${\bf w}\cdot{\bf x} + \text{b} = 0$$
Given such a hyperplane ($\bf w$,b) that separates the data, the discriminating function is: $$f(x) = \text {sign} ({\bf w}\cdot{\bf x} + {\text b})$$
If the training data are linearly separable, we can select two hyperplanes in a way that they separate the data and there are no points between them, and then try to maximize their distance. The region bounded by them is called "the margin". These hyperplanes can be described by the equations
$$({\bf w}\cdot{\bf x} + {\text b}) = 1$$
$$({\bf w}\cdot{\bf x} + {\text b}) = -1$$
the distance between these two hyperplanes is $\frac{2}{\|\mathbf{w}\|}$, so we want to minimize $\|\mathbf{w}\|$.
$$
\arg\min_{(\mathbf{w},b)}\frac{1}{2}\|\mathbf{w}\|^2 \qquad\qquad(1)$$
This gives us a hyperplane that maximizes the geometric distance to the closest data points.
As we also have to prevent data points from falling into the margin, we add the following constraint: for each ${i}$ either
$$({\bf w}\cdot{x}_i + {\text b}) \geq 1$$ or
$$({\bf w}\cdot{x}_i + {\text b}) \leq -1$$
which is similar to
$${y_i}({\bf w}\cdot{x}_i + {\text b}) \geq 1 \forall i$$
Lagrange multipliers are used to modify equation $(1)$ and the corresponding dual of the problem can be shown to be:
\begin{eqnarray}
\max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j {\bf x_i} \cdot {\bf x_j}\
\mbox{s.t.} && \alpha_i\geq 0\
&& \sum_{i}^{N} \alpha_i y_i=0\
\end{eqnarray}
From the derivation of these equations, it was seen that the optimal hyperplane can be written as:
$$\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i. $$
here most $\alpha_i$ turn out to be zero, which means that the solution is a sparse linear combination of the training data.
Prediction using Linear SVM
Now let us see how one can train a linear Support Vector Machine with Shogun. Two dimensional data (having 2 attributes say: attribute1 and attribute2) is now sampled to demonstrate the classification.
End of explanation
#parameters to svm
#parameter C is described in a later section.
C=1
epsilon=1e-3
svm=sg.machine('LibLinear', C1=C, C2=C, liblinear_solver_type='L2R_L2LOSS_SVC', epsilon=epsilon)
#train
svm.put('labels', labels)
svm.train(feats_train)
w=svm.get('w')
b=svm.get('bias')
Explanation: Liblinear, a library for large-scale linear learning focusing on SVM, is used to do the classification. It supports different solver types.
End of explanation
#solve for w.x+b=0
x1=np.linspace(-1.0, 11.0, 100)
def solve (x1):
return -( ( (w[0])*x1 + b )/w[1] )
x2=list(map(solve, x1))
#plot
plt.figure(figsize=(6,6))
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=50)
plt.plot(x1,x2, linewidth=2)
plt.title("Separating hyperplane")
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
Explanation: We solve ${\bf w}\cdot{\bf x} + \text{b} = 0$ to visualise the separating hyperplane. The methods get_w() and get_bias() are used to get the necessary values.
End of explanation
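The geometric margin discussed in the introduction can also be read off the fitted parameters; a small sketch computing 2/||w|| from the w obtained above with get_w():
# Width of the margin between the two supporting hyperplanes
margin_width = 2.0 / np.linalg.norm(w)
print("margin width:", margin_width)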
size=100
x1_=np.linspace(-5, 15, size)
x2_=np.linspace(-5, 15, size)
x, y=np.meshgrid(x1_, x2_)
#Generate X-Y grid test data
grid=sg.features(np.array((np.ravel(x), np.ravel(y))))
#apply on test grid
predictions = svm.apply(grid)
#Distance from hyperplane
z=predictions.get_values().reshape((size, size))
#plot
plt.jet()
plt.figure(figsize=(16,6))
plt.subplot(121)
plt.title("Classification")
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.jet()
#Class predictions
z=predictions.get('labels').reshape((size, size))
#plot
plt.subplot(122)
plt.title("Separating hyperplane")
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
Explanation: The classifier is now applied on a X-Y grid of points to get predictions.
End of explanation
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(100))
#Polynomial kernel of degree 2
poly_kernel=sg.kernel('PolyKernel', degree=2, c=1.0)
poly_kernel.init(feats_train, feats_train)
linear_kernel=sg.kernel('LinearKernel')
linear_kernel.init(feats_train, feats_train)
kernels=[linear_kernel, poly_kernel, gaussian_kernel]
Explanation: SVMs using kernels
If the data set is not linearly separable, a non-linear mapping $\Phi:{\bf x} \rightarrow \Phi({\bf x}) \in \mathcal{F} $ is used. This maps the data into a higher dimensional space where it is linearly separable. Our equation requires only the inner dot products ${\bf x_i}\cdot{\bf x_j}$. The equation can be defined in terms of inner products $\Phi({\bf x_i}) \cdot \Phi({\bf x_j})$ instead. Since $\Phi({\bf x_i})$ occurs only in dot products with $ \Phi({\bf x_j})$ it is sufficient to know the formula (kernel function) : $$K({\bf x_i, x_j} ) = \Phi({\bf x_i}) \cdot \Phi({\bf x_j})$$ without dealing with the maping directly. The transformed optimisation problem is:
\begin{eqnarray} \max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j k({\bf x_i}, {\bf x_j})\ \mbox{s.t.} && \alpha_i\geq 0\ && \sum_{i=1}^{N} \alpha_i y_i=0 \qquad\qquad(2)\ \end{eqnarray}
Kernels in Shogun
Shogun provides many options for the above mentioned kernel functions. Kernel is the base class for kernels. Some commonly used kernels :
Gaussian kernel : Popular Gaussian kernel computed as $k({\bf x},{\bf x'})= exp(-\frac{||{\bf x}-{\bf x'}||^2}{\tau})$
Linear kernel : Computes $k({\bf x},{\bf x'})= {\bf x}\cdot {\bf x'}$
Polynomial kernel : Polynomial kernel computed as $k({\bf x},{\bf x'})= ({\bf x}\cdot {\bf x'}+c)^d$
Sigmoid Kernel : Computes $k({\bf x},{\bf x'})=\mbox{tanh}(\gamma {\bf x}\cdot{\bf x'}+c)$
Some of these kernels are initialised below.
End of explanation
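For reference, a single Gaussian kernel value can be checked against the formula above directly in NumPy (an illustrative sketch, not Shogun's implementation; tau plays the role of the width parameter):
def gaussian_kernel_value(x, x_prime, tau):
    # k(x, x') = exp(-||x - x'||^2 / tau)
    return np.exp(-np.sum((x - x_prime)**2) / tau)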
plt.jet()
def display_km(kernels, svm):
plt.figure(figsize=(20,6))
plt.suptitle('Kernel matrices for different kernels', fontsize=12)
for i, kernel in enumerate(kernels):
kernel.init(feats_train,feats_train)
plt.subplot(1, len(kernels), i+1)
plt.title(kernel.get_name())
km=kernel.get_kernel_matrix()
plt.imshow(km, interpolation="nearest")
plt.colorbar()
display_km(kernels, svm)
Explanation: Just for fun we compute the kernel matrix and display it. There are clusters visible that are smooth for the gaussian and polynomial kernel and block-wise for the linear one. The gaussian one also smoothly decays from some cluster centre while the polynomial one oscillates within the clusters.
End of explanation
C=1
epsilon=1e-3
svm=sg.machine('LibSVM', C1=C, C2=C, kernel=gaussian_kernel, labels=labels)
_=svm.train()
Explanation: Prediction using kernel based SVM
Now we train an SVM with a Gaussian Kernel. We use LibSVM but we could use any of the other SVM from Shogun. They all utilize the same kernel framework and so are drop-in replacements.
End of explanation
libsvm_obj = svm.get('objective')
primal_obj, dual_obj = sg.as_svm(svm).compute_svm_primal_objective(), sg.as_svm(svm).compute_svm_dual_objective()
print(libsvm_obj, primal_obj, dual_obj)
Explanation: We could now check a number of properties like what the value of the objective function returned by the particular SVM learning algorithm or the explicitly computed primal and dual objective function is
End of explanation
print("duality_gap", dual_obj-primal_obj)
Explanation: and based on the objectives we can compute the duality gap (have a look at reference [2]), a measure of convergence quality of the SVM training algorithm. In theory it is 0 at the optimum and in reality at least close to 0.
End of explanation
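A simple convergence check based on this gap could be added right here (a sketch; the relative tolerance is an arbitrary choice):
duality_gap = dual_obj - primal_obj
# Flag trainings whose gap is not small relative to the primal objective
assert abs(duality_gap) < 1e-2 * max(1.0, abs(primal_obj)), "SVM training may not have converged"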
out=svm.apply(grid)
z=out.get_values().reshape((size, size))
#plot
plt.jet()
plt.figure(figsize=(16,6))
plt.subplot(121)
plt.title("Classification")
c=plt.pcolor(x1_, x2_, z)
plt.contour(x1_ , x2_, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.jet()
z=out.get('labels').reshape((size, size))
plt.subplot(122)
plt.title("Decision boundary")
c=plt.pcolor(x1_, x2_, z)
plt.contour(x1_ , x2_, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
Explanation: Let's now apply the trained SVM on the X-Y grid data and plot the results.
End of explanation
n=10
x1t_=np.linspace(-5, 15, n)
x2t_=np.linspace(-5, 15, n)
xt, yt=np.meshgrid(x1t_, x2t_)
#Generate X-Y grid test data
test_grid=sg.features(np.array((np.ravel(xt), np.ravel(yt))))
labels_out=svm.apply(test_grid)
#Get values (Distance from hyperplane)
values=labels_out.get('current_values')
#Get probabilities
labels_out.scores_to_probabilities()
prob=labels_out.get('current_values')
#plot
plt.gray()
plt.figure(figsize=(10,6))
p1=plt.scatter(values, prob)
plt.title('Probabilistic outputs')
plt.xlabel('Distance from hyperplane')
plt.ylabel('Probability')
plt.legend([p1], ["Test samples"], loc=2)
Explanation: Probabilistic Outputs
Calibrated probabilities can be generated in addition to class predictions using the scores_to_probabilities() method of BinaryLabels, which implements the method described in [3]. This should only be used in conjunction with SVM. A parametric form of a sigmoid function $$\frac{1}{1+\exp(af(x) + b)}$$ is used to fit the outputs. Here $f(x)$ is the signed distance of a sample from the hyperplane, $a$ and $b$ are parameters to the sigmoid. This gives us the posterior probabilities $p(y=1|f(x))$.
Let's try this out on the above example. The familiar "S" shape of the sigmoid should be visible.
End of explanation
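For reference, the fitted sigmoid has the closed form shown above; a small sketch of that function (a and b are the parameters scores_to_probabilities() fits internally and are not exposed here):
def platt_sigmoid(f, a, b):
    # p(y=1 | f(x)) = 1 / (1 + exp(a*f + b))
    return 1.0 / (1.0 + np.exp(a * f + b))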
def plot_sv(C_values):
plt.figure(figsize=(20,6))
plt.suptitle('Soft and hard margins with varying C', fontsize=12)
for i in range(len(C_values)):
plt.subplot(1, len(C_values), i+1)
linear_kernel=sg.LinearKernel(feats_train, feats_train)
svm1 = sg.machine('LibSVM', C1=C_values[i], C2=C_values[i], kernel=linear_kernel, labels=labels)
svm1 = sg.as_svm(svm1)
svm1.train()
vec1=svm1.get_support_vectors()
X_=[]
Y_=[]
new_labels=[]
for j in vec1:
X_.append(traindata[0][j])
Y_.append(traindata[1][j])
new_labels.append(trainlab[j])
out1=svm1.apply(grid)
z1=out1.get_labels().reshape((size, size))
plt.jet()
c=plt.pcolor(x1_, x2_, z1)
plt.contour(x1_ , x2_, z1, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(X_, Y_, c=new_labels, s=150)
plt.scatter(traindata[0, :], traindata[1,:], c=labels, s=20)
plt.title('Support vectors for C=%.2f'%C_values[i])
plt.xlabel('attribute1')
plt.ylabel('attribute2')
C_values=[0.1, 1000]
plot_sv(C_values)
Explanation: Soft margins and slack variables
If there is no clear classification possible using a hyperplane, we need to classify the data as nicely as possible while incorporating the misclassified samples. To do this a concept of soft margin is used. The method introduces non-negative slack variables, $\xi_i$, which measure the degree of misclassification of the data $x_i$.
$$
y_i(\mathbf{w}\cdot\mathbf{x_i} + b) \ge 1 - \xi_i \quad 1 \le i \le N $$
Introducing a linear penalty function leads to
$$\arg\min_{\mathbf{w},\mathbf{\xi}, b } ({\frac{1}{2} \|\mathbf{w}\|^2 +C \sum_{i=1}^n \xi_i) }$$
This in its dual form leads to a slightly modified equation $\qquad(2)$.
\begin{eqnarray} \max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j k({\bf x_i}, {\bf x_j})\ \mbox{s.t.} && 0\leq\alpha_i\leq C\ && \sum_{i=1}^{N} \alpha_i y_i=0 \ \end{eqnarray}
The result is that soft-margin SVM could choose decision boundary that has non-zero training error even if dataset is linearly separable but is less likely to overfit.
Here's an example using LibSVM on the above used data set. Highlighted points show support vectors. This should visually show the impact of C and how the amount of outliers on the wrong side of hyperplane is controlled using it.
End of explanation
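The slack variables are easy to compute for any given hyperplane; a small NumPy sketch (it assumes the feature matrix is stored column-wise, as in traindata, and labels are in {-1, +1}):
def slack_variables(w, b, X, y):
    # xi_i = max(0, 1 - y_i * (w . x_i + b)), one value per sample
    return np.maximum(0.0, 1.0 - y * (X.T.dot(w) + b))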
num=50;
dist=1.0;
gmm=sg.GMM(2)
gmm.set_nth_mean(np.array([-dist,-dist]),0)
gmm.set_nth_mean(np.array([dist,dist]),1)
gmm.set_nth_cov(np.array([[1.0,0.0],[0.0,1.0]]),0)
gmm.set_nth_cov(np.array([[1.0,0.0],[0.0,1.0]]),1)
gmm.put('m_coefficients', np.array([1.0,0.0]))
xntr=np.array([gmm.sample() for i in range(num)]).T
gmm.set_coef(np.array([0.0,1.0]))
xptr=np.array([gmm.sample() for i in range(num)]).T
traindata=np.concatenate((xntr,xptr), axis=1)
trainlab=np.concatenate((-np.ones(num), np.ones(num)))
#shogun format features
feats_train=sg.features(traindata)
labels=sg.BinaryLabels(trainlab)
gaussian_kernel = sg.kernel("GaussianKernel", log_width=np.log(10))
#Polynomial kernel of degree 2
poly_kernel = sg.kernel('PolyKernel', degree=2, c=1.0)
poly_kernel.init(feats_train, feats_train)
linear_kernel = sg.kernel('LinearKernel')
linear_kernel.init(feats_train, feats_train)
kernels=[gaussian_kernel, poly_kernel, linear_kernel]
#train machine
C=1
svm=sg.machine('LibSVM', C1=C, C2=C, kernel=gaussian_kernel, labels=labels)
_=svm.train(feats_train)
Explanation: You can see that a lower value of C causes the classifier to sacrifice linear separability in order to gain stability, in the sense that the influence of any single datapoint is now bounded by C. For hard-margin SVM, support vectors are the points which are "on the margin". In the picture above, C=1000 is pretty close to hard-margin SVM, and you can see the highlighted points are the ones that will touch the margin. In high dimensions this might lead to overfitting. For soft-margin SVM, with a lower value of C, it's easier to explain them in terms of the dual (equation $(2)$) variables. Support vectors are datapoints from the training set which are included in the predictor, i.e., the ones with non-zero $\alpha_i$ parameter. This includes margin errors and points on the margin of the hyperplane.
Binary classification using different kernels
Two-dimensional Gaussians are generated as data for this section.
$x_-\sim{\cal N_2}(0,1)-d$
$x_+\sim{\cal N_2}(0,1)+d$
and corresponding positive and negative labels. We create traindata and testdata, each with num negatively and num positively labelled samples, stored in traindata, trainlab and testdata, testlab. For that we utilize Shogun's Gaussian Mixture Model class (GMM), from which we sample the data points and plot them.
End of explanation
size=100
x1=np.linspace(-5, 5, size)
x2=np.linspace(-5, 5, size)
x, y=np.meshgrid(x1, x2)
grid=sg.features(np.array((np.ravel(x), np.ravel(y))))
grid_out=svm.apply(grid)
z=grid_out.get('labels').reshape((size, size))
plt.jet()
plt.figure(figsize=(16,5))
z=grid_out.get_values().reshape((size, size))
plt.subplot(121)
plt.title('Classification')
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.subplot(122)
plt.title('Original distribution')
gmm.put('m_coefficients', np.array([1.0,0.0]))
gmm.set_features(grid)
grid_out=gmm.get_likelihood_for_all_examples()
zn=grid_out.reshape((size, size))
gmm.set_coef(np.array([0.0,1.0]))
grid_out=gmm.get_likelihood_for_all_examples()
zp=grid_out.reshape((size, size))
z=zp-zn
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
Explanation: Now lets plot the contour output on a $-5...+5$ grid for
The Support Vector Machines decision function $\mbox{sign}(f(x))$
The Support Vector Machines raw output $f(x)$
The Original Gaussian Mixture Model Distribution
End of explanation
def plot_outputs(kernels):
plt.figure(figsize=(20,5))
plt.suptitle('Binary Classification using different kernels', fontsize=12)
for i in range(len(kernels)):
plt.subplot(1,len(kernels),i+1)
plt.title(kernels[i].get_name())
svm.put('kernel', kernels[i])
svm.train()
grid_out=svm.apply(grid)
z=grid_out.get_values().reshape((size, size))
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=35)
plot_outputs(kernels)
Explanation: And voila! The SVM decision rule reasonably distinguishes the red from the blue points. Despite being optimized for learning the discriminative function maximizing the margin, the SVM output quality wise remotely resembles the original distribution of the gaussian mixture model.
Let us visualise the output using different kernels.
End of explanation
f = open(os.path.join(SHOGUN_DATA_DIR, 'uci/ionosphere/ionosphere.data'))
mat = []
labels = []
# read data from file
for line in f:
words = line.rstrip().split(',')
mat.append([float(i) for i in words[0:-1]])
if str(words[-1])=='g':
labels.append(1)
else:
labels.append(-1)
f.close()
mat_train=mat[:30]
mat_test=mat[30:110]
lab_train=sg.BinaryLabels(np.array(labels[:30]).reshape((30,)))
lab_test=sg.BinaryLabels(np.array(labels[30:110]).reshape((len(labels[30:110]),)))
feats_train = sg.features(np.array(mat_train).T)
feats_test = sg.features(np.array(mat_test).T)
#without normalization
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(0.1))
gaussian_kernel.init(feats_train, feats_train)
C=1
svm=sg.machine('LibSVM', C1=C, C2=C, kernel=gaussian_kernel, labels=lab_train)
_=svm.train()
output=svm.apply(feats_test)
Err=sg.ErrorRateMeasure()
error=Err.evaluate(output, lab_test)
print('Error:', error)
#set normalization
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(0.1))
# TODO: currently there is a bug that makes it impossible to use Gaussian kernels and kernel normalisers
# See github issue #3504
#gaussian_kernel.set_normalizer(sg.SqrtDiagKernelNormalizer())
gaussian_kernel.init(feats_train, feats_train)
svm.put('kernel', gaussian_kernel)
svm.train()
output=svm.apply(feats_test)
Err=sg.ErrorRateMeasure()
error=Err.evaluate(output, lab_test)
print('Error with normalization:', error)
Explanation: Kernel Normalizers
Kernel normalizers post-process kernel values by carrying out normalization in feature space. Since kernel based SVMs use a non-linear mapping, in most cases any normalization in input space is lost in feature space. Kernel normalizers are a possible solution to this. Kernel Normalization is not strictly-speaking a form of preprocessing since it is not applied directly on the input vectors but can be seen as a kernel interpretation of the preprocessing. The KernelNormalizer class provides tools for kernel normalization. Some of the kernel normalizers in Shogun:
SqrtDiagKernelNormalizer : This normalization in the feature space amounts to defining a new kernel $k'({\bf x},{\bf x'}) = \frac{k({\bf x},{\bf x'})}{\sqrt{k({\bf x},{\bf x})k({\bf x'},{\bf x'})}}$
AvgDiagKernelNormalizer : Scaling with a constant $k({\bf x},{\bf x'})= \frac{1}{c}\cdot k({\bf x},{\bf x'})$
ZeroMeanCenterKernelNormalizer : Centers the kernel in feature space and ensures each feature must have zero mean after centering.
The set_normalizer() method of Kernel is used to add a normalizer.
Let us try it out on the ionosphere dataset where we use a small training set of 30 samples to train our SVM. Gaussian kernel with and without normalization is used. See reference [1] for details.
End of explanation
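Because the normalizer call is commented out above due to the mentioned bug, here is a minimal NumPy sketch of what SqrtDiagKernelNormalizer does to a precomputed kernel matrix (an illustration only, not the Shogun implementation):
def sqrt_diag_normalize(K):
    # k'(x, x') = k(x, x') / sqrt(k(x, x) * k(x', x'))
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)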
num=30;
num_components=4
means=np.zeros((num_components, 2))
means[0]=[-1.5,1.5]
means[1]=[1.5,-1.5]
means[2]=[-1.5,-1.5]
means[3]=[1.5,1.5]
covs=np.array([[1.0,0.0],[0.0,1.0]])
gmm=sg.GMM(num_components)
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
[gmm.set_nth_cov(covs,i) for i in range(num_components)]
gmm.put('m_coefficients', np.array([1.0,0.0,0.0,0.0]))
xntr=np.array([gmm.sample() for i in range(num)]).T
xnte=np.array([gmm.sample() for i in range(5000)]).T
gmm.put('m_coefficients', np.array([0.0,1.0,0.0,0.0]))
xntr1=np.array([gmm.sample() for i in range(num)]).T
xnte1=np.array([gmm.sample() for i in range(5000)]).T
gmm.put('m_coefficients', np.array([0.0,0.0,1.0,0.0]))
xptr=np.array([gmm.sample() for i in range(num)]).T
xpte=np.array([gmm.sample() for i in range(5000)]).T
gmm.put('m_coefficients', np.array([0.0,0.0,0.0,1.0]))
xptr1=np.array([gmm.sample() for i in range(num)]).T
xpte1=np.array([gmm.sample() for i in range(5000)]).T
traindata=np.concatenate((xntr,xntr1,xptr,xptr1), axis=1)
testdata=np.concatenate((xnte,xnte1,xpte,xpte1), axis=1)
l0 = np.array([0.0 for i in range(num)])
l1 = np.array([1.0 for i in range(num)])
l2 = np.array([2.0 for i in range(num)])
l3 = np.array([3.0 for i in range(num)])
trainlab=np.concatenate((l0,l1,l2,l3))
testlab=np.concatenate((l0,l1,l2,l3))
plt.title('Toy data for multiclass classification')
plt.jet()
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=75)
feats_train=sg.features(traindata)
labels=sg.MulticlassLabels(trainlab)
Explanation: Multiclass classification
Multiclass classification can be done using SVM by reducing the problem to binary classification. More on multiclass reductions in this notebook. The CGMNPSVM class provides built-in one-vs-rest multiclass classification using GMNPlib. Let us see classification using it on four classes. The CGMM class is used to sample the data.
End of explanation
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(2))
poly_kernel=sg.kernel('PolyKernel', degree=4, c=1.0)
poly_kernel.init(feats_train, feats_train)
linear_kernel=sg.kernel('LinearKernel')
linear_kernel.init(feats_train, feats_train)
kernels=[gaussian_kernel, poly_kernel, linear_kernel]
svm=sg.GMNPSVM(1, gaussian_kernel, labels)
_=svm.train(feats_train)
size=100
x1=np.linspace(-6, 6, size)
x2=np.linspace(-6, 6, size)
x, y=np.meshgrid(x1, x2)
grid=sg.features(np.array((np.ravel(x), np.ravel(y))))
def plot_outputs(kernels):
plt.figure(figsize=(20,5))
plt.suptitle('Multiclass Classification using different kernels', fontsize=12)
for i in range(len(kernels)):
plt.subplot(1,len(kernels),i+1)
plt.title(kernels[i].get_name())
svm.set_kernel(kernels[i])
svm.train(feats_train)
grid_out=svm.apply(grid)
z=grid_out.get_labels().reshape((size, size))
plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=35)
plot_outputs(kernels)
Explanation: Let us try the multiclass classification for different kernels.
End of explanation |
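To put a number on the visual comparison, the classifier could also be scored on the held-out test set; a sketch that assumes Shogun's MulticlassAccuracy evaluation class and reuses the testdata/testlab generated above:
feats_test_mc = sg.features(testdata)
labels_test_mc = sg.MulticlassLabels(testlab)
# Score the Gaussian-kernel multiclass SVM on unseen samples
svm.set_kernel(gaussian_kernel)
svm.train(feats_train)
out = svm.apply(feats_test_mc)
print("test accuracy:", sg.MulticlassAccuracy().evaluate(out, labels_test_mc))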
537 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executed
Step1: Load software and filenames definitions
Step2: Data folder
Step3: Check that the folder exists
Step4: List of data files in data_dir
Step5: Data load
Initial loading of the data
Step6: Laser alternation selection
At this point we have only the timestamps and the detector numbers
Step7: We need to define some parameters
Step8: We should check if everithing is OK with an alternation histogram
Step9: If the plot looks good we can apply the parameters with
Step10: Measurements infos
All the measurement data is in the d variable. We can print it
Step11: Or check the measurements duration
Step12: Compute background
Compute the background using automatic threshold
Step13: Burst search and selection
Step14: Preliminary selection and plots
Step15: A-direct excitation fitting
To extract the A-direct excitation coefficient we need to fit the
S values for the A-only population.
The S value for the A-only population is fitted with different methods
Step16: Zero threshold on nd
Select bursts with
Step17: Selection 1
Bursts are weighted using $w = f(S)$, where the function $f(S)$ is a
Gaussian fitted to the $S$ histogram of the FRET population.
Step18: Selection 2
Bursts are here weighted using weights $w$
Step19: Selection 3
Bursts are here selected according to
Step20: Save data to file
Step21: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step22: This is just a trick to format the different variables | Python Code:
ph_sel_name = "all-ph"
data_id = "7d"
# ph_sel_name = "all-ph"
# data_id = "7d"
Explanation: Executed: Mon Mar 27 11:37:05 2017
Duration: 9 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
from fretbursts import *
init_notebook()
from IPython.display import display
Explanation: Load software and filenames definitions
End of explanation
data_dir = './data/singlespot/'
Explanation: Data folder:
End of explanation
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
Explanation: Check that the folder exists:
End of explanation
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
file_list
## Selection for POLIMI 2012-12-6 dataset
# file_list.pop(2)
# file_list = file_list[1:-2]
# display(file_list)
# labels = ['22d', '27d', '17d', '12d', '7d']
## Selection for P.E. 2012-12-6 dataset
# file_list.pop(1)
# file_list = file_list[:-1]
# display(file_list)
# labels = ['22d', '27d', '17d', '12d', '7d']
## Selection for POLIMI 2012-11-26 dataset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'AexAem': Ph_sel(Aex='Aem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
Explanation: List of data files in data_dir:
End of explanation
d = loader.photon_hdf5(filename=files_dict[data_id])
Explanation: Data load
Initial loading of the data:
End of explanation
d.ph_times_t, d.det_t
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
Explanation: We need to define some parameters: donor and acceptor channels, excitation period and donor and acceptor excitations:
End of explanation
plot_alternation_hist(d)
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
loader.alex_apply_period(d)
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
d
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
d.time_max
Explanation: Or check the measurements duration:
End of explanation
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
from mpl_toolkits.axes_grid1 import AxesGrid
import lmfit
print('lmfit version:', lmfit.__version__)
assert d.dir_ex == 0
assert d.leakage == 0
d.burst_search(m=10, F=6, ph_sel=ph_sel)
print(d.ph_sel, d.num_bursts)
ds_sa = d.select_bursts(select_bursts.naa, th1=30)
ds_sa.num_bursts
Explanation: Burst search and selection
End of explanation
mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30
ds_saw = d.select_bursts_mask_apply([mask])
ds_sas0 = ds_sa.select_bursts(select_bursts.S, S2=0.10)
ds_sas = ds_sa.select_bursts(select_bursts.S, S2=0.15)
ds_sas2 = ds_sa.select_bursts(select_bursts.S, S2=0.20)
ds_sas3 = ds_sa.select_bursts(select_bursts.S, S2=0.25)
ds_st = d.select_bursts(select_bursts.size, add_naa=True, th1=30)
ds_sas.num_bursts
dx = ds_sas0
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas2
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas3
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
plt.title('(nd + na) for A-only population using different S cutoff');
dx = ds_sa
alex_jointplot(dx);
dplot(ds_sa, hist_S)
Explanation: Preliminary selection and plots
End of explanation
dx = ds_sa
bin_width = 0.03
bandwidth = 0.03
bins = np.r_[-0.2 : 1 : bin_width]
x_kde = np.arange(bins.min(), bins.max(), 0.0002)
## Weights
weights = None
## Histogram fit
fitter_g = mfit.MultiFitter(dx.S)
fitter_g.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_g.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_hist_orig = fitter_g.hist_pdf
S_2peaks = fitter_g.params.loc[0, 'p1_center']
dir_ex_S2p = S_2peaks/(1 - S_2peaks)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p)
## KDE
fitter_g.calc_kde(bandwidth=bandwidth)
fitter_g.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak = fitter_g.kde_max_pos[0]
dir_ex_S_kde = S_peak/(1 - S_peak)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_g, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks*100))
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=True)
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak*100));
## 2-Asym-Gaussian
fitter_ag = mfit.MultiFitter(dx.S)
fitter_ag.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_ag.fit_histogram(model = mfit.factory_two_asym_gaussians(p1_center=0.1, p2_center=0.4))
#print(fitter_ag.fit_obj[0].model.fit_report())
S_2peaks_a = fitter_ag.params.loc[0, 'p1_center']
dir_ex_S2pa = S_2peaks_a/(1 - S_2peaks_a)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2pa)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_g, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks*100))
mfit.plot_mfit(fitter_ag, ax=ax[1])
ax[1].set_title('2-Asym-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_a*100));
Explanation: A-direct excitation fitting
To extract the A-direct excitation coefficient we need to fit the
S values for the A-only population.
The S value for the A-only population is fitted with different methods:
- Histogram fit with 2 Gaussians or with 2 asymmetric Gaussians
(an asymmetric Gaussian has right- and left-side of the peak
decreasing according to different sigmas).
- KDE maximum
In the following we apply these methods using different selection
or weighting schemes to reduce the amount of the FRET population and make
fitting of the A-only population easier.
Even selection
Here A-only and FRET population are evenly selected.
End of explanation
dx = ds_sa.select_bursts(select_bursts.nd, th1=-100, th2=0)
fitter = bext.bursts_fitter(dx, 'S')
fitter.fit_histogram(model = mfit.factory_gaussian(center=0.1))
S_1peaks_th = fitter.params.loc[0, 'center']
dir_ex_S1p = S_1peaks_th/(1 - S_1peaks_th)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S1p)
mfit.plot_mfit(fitter)
plt.xlim(-0.1, 0.6)
Explanation: Zero threshold on nd
Select bursts with:
$$n_d < 0$$.
End of explanation
dx = ds_sa
## Weights
weights = 1 - mfit.gaussian(dx.S[0], fitter_g.params.loc[0, 'p2_center'], fitter_g.params.loc[0, 'p2_sigma'])
weights[dx.S[0] >= fitter_g.params.loc[0, 'p2_center']] = 0
## Histogram fit
fitter_w1 = mfit.MultiFitter(dx.S)
fitter_w1.weights = [weights]
fitter_w1.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w1.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w1 = fitter_w1.params.loc[0, 'p1_center']
dir_ex_S2p_w1 = S_2peaks_w1/(1 - S_2peaks_w1)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w1)
## KDE
fitter_w1.calc_kde(bandwidth=bandwidth)
fitter_w1.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w1 = fitter_w1.kde_max_pos[0]
dir_ex_S_kde_w1 = S_peak_w1/(1 - S_peak_w1)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w1)
def plot_weights(x, weights, ax):
ax2 = ax.twinx()
x_sort = x.argsort()
ax2.plot(x[x_sort], weights[x_sort], color='k', lw=4, alpha=0.4)
ax2.set_ylabel('Weights');
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_w1, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
plot_weights(dx.S[0], weights, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w1*100))
mfit.plot_mfit(fitter_w1, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
plot_weights(dx.S[0], weights, ax=ax[1])
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w1*100));
Explanation: Selection 1
Bursts are weighted using $w = f(S)$, where the function $f(S)$ is a
Gaussian fitted to the $S$ histogram of the FRET population.
End of explanation
## Weights
sizes = dx.nd[0] + dx.na[0] #- dir_ex_S_kde_w3*dx.naa[0]
weights = dx.naa[0] - abs(sizes)
weights[weights < 0] = 0
## Histogram
fitter_w4 = mfit.MultiFitter(dx.S)
fitter_w4.weights = [weights]
fitter_w4.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w4.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w4 = fitter_w4.params.loc[0, 'p1_center']
dir_ex_S2p_w4 = S_2peaks_w4/(1 - S_2peaks_w4)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w4)
## KDE
fitter_w4.calc_kde(bandwidth=bandwidth)
fitter_w4.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w4 = fitter_w4.kde_max_pos[0]
dir_ex_S_kde_w4 = S_peak_w4/(1 - S_peak_w4)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w4)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_w4, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
#plot_weights(dx.S[0], weights, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w4*100))
mfit.plot_mfit(fitter_w4, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
#plot_weights(dx.S[0], weights, ax=ax[1])
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w4*100));
Explanation: Selection 2
Bursts are here weighted using weights $w$:
$$w = n_{aa} - |n_a + n_d|$$
End of explanation
mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30
ds_saw = d.select_bursts_mask_apply([mask])
print(ds_saw.num_bursts)
dx = ds_saw
## Weights
weights = None
## 2-Gaussians
fitter_w5 = mfit.MultiFitter(dx.S)
fitter_w5.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w5.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w5 = fitter_w5.params.loc[0, 'p1_center']
dir_ex_S2p_w5 = S_2peaks_w5/(1 - S_2peaks_w5)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w5)
## KDE
fitter_w5.calc_kde(bandwidth=bandwidth)
fitter_w5.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w5 = fitter_w5.kde_max_pos[0]
S_2peaks_w5_fiterr = fitter_w5.fit_res[0].params['p1_center'].stderr
dir_ex_S_kde_w5 = S_peak_w5/(1 - S_peak_w5)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w5)
## 2-Asym-Gaussians
fitter_w5a = mfit.MultiFitter(dx.S)
fitter_w5a.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w5a.fit_histogram(model = mfit.factory_two_asym_gaussians(p1_center=0.05, p2_center=0.3))
S_2peaks_w5a = fitter_w5a.params.loc[0, 'p1_center']
dir_ex_S2p_w5a = S_2peaks_w5a/(1 - S_2peaks_w5a)
#print(fitter_w5a.fit_obj[0].model.fit_report(min_correl=0.5))
print('Fitted direct excitation (na/naa) [2-Asym-Gauss]:', dir_ex_S2p_w5a)
fig, ax = plt.subplots(1, 3, figsize=(19, 4.5))
mfit.plot_mfit(fitter_w5, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w5*100))
mfit.plot_mfit(fitter_w5, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w5*100));
mfit.plot_mfit(fitter_w5a, ax=ax[2])
mfit.plot_mfit(fitter_g, ax=ax[2], plot_model=False, plot_kde=False)
ax[2].set_title('2-Asym-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w5a*100));
Explanation: Selection 3
Bursts are here selected according to:
$$n_{aa} - |n_a + n_d| > 30$$
End of explanation
sample = data_id
n_bursts_aa = ds_sas.num_bursts[0]
Explanation: Save data to file
End of explanation
variables = ('sample n_bursts_aa dir_ex_S1p dir_ex_S_kde dir_ex_S2p dir_ex_S2pa '
'dir_ex_S2p_w1 dir_ex_S_kde_w1 dir_ex_S_kde_w4 dir_ex_S_kde_w5 dir_ex_S2p_w5 dir_ex_S2p_w5a '
'S_2peaks_w5 S_2peaks_w5_fiterr\n')
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-dir_ex_aa-fit-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
Explanation: This is just a trick to format the different variables:
End of explanation |
538 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This workbook contains some examples for reading, analysing and plotting processed MT data. It covers most of the steps available in MTPy. For more details on specific input parameters and other functionality, we recommend looking at the mtpy documentation, which can be found at
Step1: The mt_obj contains all the data from the edi file, e.g. impedance, tipper, frequency as well as station information (lat/long). To look at any of these parameters you can type, for example
Step2: There are many other parameters you can look at in the mt_obj. Just type mt_obj.[TAB] to see what is available.
In the MT object are the Z and Tipper objects (mt_obj.Z; mt_obj.Tipper). These contain all information related to, respectively, the impedance tensor and the tipper.
Step3: As with the MT object, you can explore the object by typing mt_obj.Z.[TAB] to see the available attributes.
Plot an edi file
In this example we plot MT data from an edi file.
Step4: Make some change to the data and save to a new file
This example demonstrates how to resample the data onto new frequency values and write to a new edi file. In the example below, you can either choose every second frequency or resample onto five periods per decade.
To do this we need to make a new Z object, and save it to a file. | Python Code:
# import required modules
from mtpy.core.mt import MT
# Define the path to your edi file
edi_file = "C:/mtpywin/mtpy/examples/data/edi_files_2/Synth00.edi"
# Create an MT object
mt_obj = MT(edi_file)
Explanation: Introduction
This workbook contains some examples for reading, analysing and plotting processed MT data. It covers most of the steps available in MTPy. For more details on specific input parameters and other functionality, we recommend looking at the mtpy documentation, which can be found at: https://mtpy2.readthedocs.io/en/develop/.
This workbook is structured according to some of the key modules in MTPy: Core, Analysis, Imaging, and Modeling.
Getting Started
To start with, you will need to make sure MTPy is installed and is working correctly. Please see the installation guide (https://github.com/MTgeophysics/mtpy/wiki/MTPy-installation-guide-for-Windows-10-and-Ubuntu-18.04) for details.
Before you begin these examples, we suggest you make a temporary folder (e.g. C:/tmp) to save all example outputs.
Useful tricks and tips
This workbook exists as a Jupyter notebook and a pdf. If you are running the Jupyter notebook, you can run each of the cells, modifying the inputs to suit your requirements. Most of these examples have been written to be self contained.
In Jupyter, you can add the following line to the top of any cell and it will write the contents of that cell to a python script: %%writefile example.py
You can also select multiple cells and copy them to a new Jupyter notebook.
Many of the examples below make use of the matplotlib colour maps. Please see https://matplotlib.org/examples/color/colormaps_reference.html for colour map options.
Core
These first few examples cover some of the basic functions and tools that can be used to look at data contained in an edi file, plot it, and make changes (e.g. sample onto different frequencies).
Read an edi file into an MT object
End of explanation
# To see the latitude and longitude
print(mt_obj.lat, mt_obj.lon)
# To see the easting, northing, and elevation
print(mt_obj.east, mt_obj.north, mt_obj.elev)
Explanation: The mt_obj contains all the data from the edi file, e.g. impedance, tipper, frequency as well as station information (lat/long). To look at any of these parameters you can type, for example:
End of explanation
# for example, to see the frequency values represented in the impedance tensor:
print(mt_obj.Z.freq)
# or to see the impedance tensor (first 4 elements)
print(mt_obj.Z.z[:4])
# or the resistivity or phase (first 4 values)
print(mt_obj.Z.resistivity[:4])
print(mt_obj.Z.phase[:4])
Explanation: There are many other parameters you can look at in the mt_obj. Just type mt_obj.[TAB] to see what is available.
In the MT object are the Z and Tipper objects (mt_obj.Z; mt_obj.Tipper). These contain all information related to, respectively, the impedance tensor and the tipper.
End of explanation
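As a quick, version-agnostic way to see what the Z and Tipper objects expose (mirroring the mt_obj.[TAB] advice above), you can list their public attributes; the exact attribute names depend on the installed mtpy version:
# List the public attributes of the impedance and tipper objects
print([a for a in dir(mt_obj.Z) if not a.startswith('_')])
print([a for a in dir(mt_obj.Tipper) if not a.startswith('_')])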
# import required modules
from mtpy.core.mt import MT
import os
# Define the path to your edi file and save path
edi_file = "C:/mtpywin/mtpy/examples/data/edi_files_2/Synth00.edi"
savepath = r"C:/tmp"
# Create an MT object
mt_obj = MT(edi_file)
# To plot the edi file we read in in Part 1 & save to file:
pt_obj = mt_obj.plot_mt_response(plot_num=1, # 1 = yx and xy; 2 = all 4 components
# 3 = off diagonal + determinant
plot_tipper = 'yri',
plot_pt = 'y' # plot phase tensor 'y' or 'n'
)
#pt_obj.save_plot(os.path.join(savepath,"Synth00.png"), fig_dpi=400)
Explanation: As with the MT object, you can explore the object by typing mt_obj.Z.[TAB] to see the available attributes.
Plot an edi file
In this example we plot MT data from an edi file.
End of explanation
# import required modules
from mtpy.core.mt import MT
import os
# Define the path to your edi file and save path
edi_file = r"C:/mtpywin/mtpy/examples/data/edi_files_2/Synth00.edi"
savepath = r"C:/tmp"
# Create an MT object
mt_obj = MT(edi_file)
# First, define a frequency array:
# Every second frequency:
new_freq_list = mt_obj.Z.freq[::2]
# OR 5 periods per decade from 10^-4 to 10^3 seconds
from mtpy.utils.calculator import get_period_list
new_freq_list = 1./get_period_list(1e-4,1e3,5)
# Create new Z and Tipper objects containing interpolated data
new_Z_obj, new_Tipper_obj = mt_obj.interpolate(new_freq_list)
# Write a new edi file using the new data
mt_obj.write_mt_file(
save_dir=savepath,
fn_basename='Synth00_5ppd',
file_type='edi',
new_Z_obj=new_Z_obj, # provide a z object to update the data
new_Tipper_obj=new_Tipper_obj, # provide a tipper object
longitude_format='LONG', # write longitudes as 'LONG' not 'LON'
latlon_format='dd'# write as decimal degrees (any other input
# will write as degrees:minutes:seconds
)
Explanation: Make some change to the data and save to a new file
This example demonstrates how to resample the data onto new frequency values and write to a new edi file. In the example below, you can either choose every second frequency or resample onto five periods per decade.
To do this we need to make a new Z object, and save it to a file.
End of explanation |
539 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nucleic acids structure analysis
analysis of the nucleic acids backbone torsion angles.
The 'nucleic_acid_torsion' function can be used to compute the backbone torsion angles. For example
Step1: plot the backbone torsion angle
The 'plot_torsion_wheel' function provides a plot of the backbone torsion angles of a nucleic acid molecule. Usage
Step2: Analysis of the puckering of nucleic acids.
Step3: analysis of the protein backbone torsion angles.
Step4: Compute NOEs
The Nuclear Overhauser Effect is based on the distances between hydrogen atoms from NOESY NMR spectra. The 'get_NOE' function computes the distances between hydrogen atoms of 2 molecules and returns the hydrogen atom pairs which are within the NOE effect range (0-6.5 angstrom). NOE effects are classified as 'weak' (3.5,6.5), 'medium' (2.6,5.0) and 'strong' (1.8,3.6), which are also used as arguments for the 'get_NOE' function. One can also compute all of the NOE effects by providing the argument 'all'. For example
Step5: Analysis of a bunch of pdb files
The 'ls' class provides a simple way to get a bunch of pdb files, e.g., ls(dir).pdb will return all of the pdb files in the 'dir' directory. | Python Code:
from SBio import *
s3 = create_molecule('D:\\python\\structural bioinformatics_in_python\\examples\\S3.pdb').m1
torsions = nucleic_acid_torsion(s3, ('A','B'),(1,12))
print(torsions[1]) # residue.serial , α, β, γ, δ, ε, ξ, χ,
Explanation: Nucleic acids structure analysis
analysis of the nucleic acids backbone torsion angles.
The 'nucleic_acid_torsion' function can be used to compute the backbone torsion angles. For example:
End of explanation
%matplotlib inline
import matplotlib
from SBio import *
s3 = create_molecule('D:\\python\\structural bioinformatics_in_python\\examples\\S3.pdb')
torsions = nucleic_acid_torsion_plot(s3, ('A','B'),(1,12))
plot_torsion_wheel(torsions, 'torsion', abz=False, show = True)
Explanation: plot the backbone torsion angle
The 'plot_torsion_wheel' function provides a plot of the backbone torsion angles of a nucleic acid molecule. Usage:
plot_torsion_wheel(angles, title, filename='1.png', abz=True, show = False)
arguments:
title: the title to display
filename: save file name
abz: display the allowed ranges of 'A', 'B' and 'Z' DNA
show: directly display or save the figure
End of explanation
from SBio import *
S3 = create_molecule('D:\\python\\structural bioinformatics_in_python\\examples\\S3.pdb')
pucker = nucleic_acid_pucker(S3.m1,('A','B'),(1,12))
print(pucker[0])
Explanation: Analysis of the puckering of nucleic acids.
End of explanation
M1 = create_molecule('D:\\python\\structural bioinformatics_in_python\\examples\\1sez.pdb').m1
torsion = protein_tosion(M1, ('A','B'), (1,485))
plot_phi_psi(torsion, 'phi_psi', 'phi_psi.png', True)
Explanation: analysis of the protein backbone torsion angles.
End of explanation
m_g4 = create_molecule('D:\\python\\structural bioinformatics_in_python\\examples\\CMA.pdb').m1
m_lig = create_molecule('D:\\python\\structural bioinformatics_in_python\\examples\\daota-m2.pdb').m1
NOE = get_NOE(m_g4, m_lig,'strong')
for i in NOE[:5]:
print(i)
Explanation: Compute NOEs
The Nuclear Overhauser Effect is based on the distances between hydrogen atoms from NOESY NMR spectra. The 'get_NOE' function computes the distances between hydrogen atoms of 2 molecules and returns the hydrogen atom pairs which are within the NOE effect range (0-6.5 angstrom). NOE effects are classified as 'weak' (3.5,6.5), 'medium' (2.6,5.0) and 'strong' (1.8,3.6), which are also used as arguments for the 'get_NOE' function. One can also compute all of the NOE effects by providing the argument 'all'. For example:
End of explanation
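As mentioned above, passing 'all' returns every hydrogen pair within the full 0-6.5 angstrom range; a minimal sketch, assuming get_NOE accepts 'all' exactly as described:
# Assumed usage: compute all NOE pairs instead of only the 'strong' ones
NOE_all = get_NOE(m_g4, m_lig, 'all')
print(len(NOE_all))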
structures = "D:\\python\structural bioinformatics_in_python\\examples\\ensembles"
dihedral = []
for pdb in ls(structures).pdb: #get all of the pdb files in 'structures'
m = create_molecule(pdb).m1
dihedral.append(get_torsion(m.A3.C, m.A3.CA, m.A4.N, m.A4.CA))
for x, y in enumerate(dihedral):
print(x+1, y)
Explanation: Analysis of a bunch of pdb files
The 'ls' class provides a simple way to get a bunch of pdb files, e.g., ls(dir).pdb will return all of the pdb files in the 'dir' directory.
End of explanation |
540 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiple Stripe Analysis (MSA) for Single Degree of Freedom (SDOF) Oscillators
In this method, a single degree of freedom (SDOF) model of each structure is subjected to non-linear time history analysis using a suite of ground motion records scaled to multiple stripes of intensity measure. The displacements of the SDOF due to each ground motion record are used as input to determine the distribution of buildings in each damage state for each level of ground motion intensity. A regression algorithm is then applied to derive the fragility model.
The figure below illustrates the results of a Multiple Stripe Analysis, from which the fragility function is built.
<img src="../../../../figures/MSA_example.jpg" width="500" align="middle">
Note
Step1: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
If the User wants to specify the cyclic hysteretic behaviour of the SDOF system, please input the path of the file where the hysteretic parameters are contained, using the variable sdof_hysteresis. The parameters should be defined according to the format described in the RMTK manual. If instead default parameters want to be assumed, please set the sdof_hysteresis variable to "Default"
Step2: Load ground motion records
Regarding the ground motions to be used in the Multiple Stripe Analysis, the following inputs are required
Step3: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
Currently the user can provide spectral displacement, capacity curve dependent and interstorey drift damage model type.
If the damage model type is interstorey drift the user has to input interstorey drift values of the MDOF system. The user can then provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements of the SDOF system, otherwise a linear relationship is assumed.
Step4: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix
Step5: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above
Step6: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above
Step7: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above
Step8: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions
Step9: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above | Python Code:
import MSA_on_SDOF
from rmtk.vulnerability.common import utils
import numpy as np
import MSA_utils
%matplotlib inline
Explanation: Multiple Stripe Analysis (MSA) for Single Degree of Freedom (SDOF) Oscillators
In this method, a single degree of freedom (SDOF) model of each structure is subjected to non-linear time history analysis using a suite of ground motion records scaled to multiple stripes of intensity measure. The displacements of the SDOF due to each ground motion record are used as input to determine the distribution of buildings in each damage state for each level of ground motion intensity. A regression algorithm is then applied to derive the fragility model.
The figure below illustrates the results of a Multiple Stripe Analysis, from which the fragility function is built.
<img src="../../../../figures/MSA_example.jpg" width="500" align="middle">
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button in the toolbar above.
End of explanation
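To make the final regression step described above concrete, here is a minimal, self-contained sketch (not the RMTK implementation) of fitting a lognormal CDF to the fraction of records exceeding a damage state at each intensity stripe; the stripe values and exceedance fractions below are purely hypothetical:
# Illustrative sketch only: least-squares fit of a lognormal CDF fragility curve
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

def lognormal_cdf(im, median, beta):
    # probability of exceeding the damage state at intensity measure level im
    return norm.cdf(np.log(im / median) / beta)

iml = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])            # hypothetical IM stripes
frac_exceed = np.array([0.0, 0.1, 0.35, 0.6, 0.8, 0.9])   # hypothetical exceedance fractions
(median, beta), _ = curve_fit(lognormal_cdf, iml, frac_exceed, p0=(0.5, 0.4))
print(median, beta)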
capacity_curves_file = '../../../../../rmtk_data/capacity_curves_sdof_first_mode.csv'
sdof_hysteresis = "../../../../../rmtk_data/pinching_parameters.csv"
from read_pinching_parameters import read_parameters
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
capacity_curves = utils.check_SDOF_curves(capacity_curves)
utils.plot_capacity_curves(capacity_curves)
hysteresis = read_parameters(sdof_hysteresis)
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
If the User wants to specify the cyclic hysteretic behaviour of the SDOF system, please input the path of the file where the hysteretic parameters are contained, using the variable sdof_hysteresis. The parameters should be defined according to the format described in the RMTK manual. If instead default parameters want to be assumed, please set the sdof_hysteresis variable to "Default"
End of explanation
gmrs_folder = "../../../../../rmtk_data/MSA_records"
minT, maxT = 0.1, 2.0
no_bins = 10
no_rec_bin = 30
record_scaled_folder = "../../../../../rmtk_data/Scaling_factors"
gmrs = utils.read_gmrs(gmrs_folder)
#utils.plot_response_spectra(gmrs, minT, maxT)
Explanation: Load ground motion records
Regarding the ground motions to be used in the Multiple Stripe Analysis, the following inputs are required:
1. gmrs_folder: path to the folder containing the ground motion records to be used in the analysis. Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
2. record_scaled_folder. In this folder there should be a csv file for each Intensity Measure bin selected for the MSA, containing the names of the records that should be scaled to that IM bin, and the corresponding scaling factors. An example of this type of file is provided in the RMTK manual.
3. no_bins: number of Intensity Measure bins.
4. no_rec_bin: number of records per bin
If the user wants to plot acceleration, displacement and velocity response spectra, the function utils.plot_response_spectra(gmrs, minT, maxT) should be un-commented. The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
damage_model_file = "../../../../../rmtk_data/damage_model_Sd.csv"
damage_model = utils.read_damage_model(damage_model_file)
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
Currently the user can provide spectral displacement, capacity curve dependent and interstorey drift damage model type.
If the damage model type is interstorey drift the user has to input interstorey drift values of the MDOF system. The user can then provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements of the SDOF system, otherwise a linear relationship is assumed.
End of explanation
damping_ratio = 0.05
degradation = False
msa = {}; msa['n. bins']=no_bins; msa['records per bin']=no_rec_bin; msa['input folder']=record_scaled_folder
PDM, Sds, IML_info = MSA_on_SDOF.calculate_fragility(capacity_curves, hysteresis, msa, gmrs,
damage_model, damping_ratio, degradation)
Explanation: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix:
1. damping_ratio: This parameter defines the damping ratio for the structure.
2. degradation: This boolean parameter should be set to True or False to specify whether structural degradation should be considered in the analysis or not.
End of explanation
import MSA_post_processing
IMT = "Sa"
T = 0.47
#T = np.arange(0.4,1.91,0.01)
regression_method = "max likelihood"
fragility_model = MSA_utils.calculate_fragility_model(PDM,gmrs,IML_info,IMT,msa,damage_model,
T,damping_ratio, regression_method)
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sa","Sd" and "HI" (Housner Intensity).
2. period: This parameter defines the period for which a spectral intensity measure should be computed. If Housner Intensity is selected as intensity measure a range of periods should be defined instead (for example T=np.arange(0.3,3.61,0.01)).
3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
End of explanation
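For the "max likelihood" option named above, a common approach (sketched here under the assumption of binomial exceedance counts per stripe; this is not the RMTK code) maximises the binomial likelihood of the observed exceedances:
# Illustrative sketch only: maximum-likelihood fit of the lognormal fragility parameters
import numpy as np
from scipy.stats import norm, binom
from scipy.optimize import minimize

def neg_log_like(params, iml, n_exceed, n_records):
    median, beta = params
    p = np.clip(norm.cdf(np.log(iml / median) / beta), 1e-9, 1 - 1e-9)
    return -np.sum(binom.logpmf(n_exceed, n_records, p))

iml = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])   # hypothetical IM stripes
n_exceed = np.array([0, 3, 10, 18, 24, 27])      # hypothetical exceedance counts out of 30 records
res = minimize(neg_log_like, x0=[0.5, 0.4], args=(iml, n_exceed, 30),
               bounds=[(1e-3, None), (1e-3, None)])
print(res.x)  # fitted median and dispersion (beta)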
minIML, maxIML = 0.01, 4
utils.plot_fragility_MSA(fragility_model, minIML, maxIML)
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
taxonomy = "RC_1st"
output_type = "csv"
output_path = "../../../../../rmtk_data/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
cons_model_file = "../../../../../rmtk_data/cons_model.csv"
imls = np.linspace(minIML, maxIML, 20)
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
utils.plot_vulnerability_model(vulnerability_model)
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
End of explanation
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the vulnerability function.
2. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation |
541 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
scikit-learn-k-means
Credits
Step1: K-Means Clustering
Step2: K Means is an algorithm for unsupervised clustering
Step3: By eye, it is relatively easy to pick out the four clusters. If you were to perform an exhaustive search for the different segmentations of the data, however, the search space would be exponential in the number of points. Fortunately, there is a well-known Expectation Maximization (EM) procedure which scikit-learn implements, so that KMeans can be solved relatively quickly.
Step4: The algorithm identifies the four clusters of points in a manner very similar to what we would do by eye!
The K-Means Algorithm | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn;
from sklearn.linear_model import LinearRegression
from scipy import stats
import pylab as pl
seaborn.set()
Explanation: scikit-learn-k-means
Credits: Forked from PyCon 2015 Scikit-learn Tutorial by Jake VanderPlas
End of explanation
from sklearn import neighbors, datasets
iris = datasets.load_iris()
X, y = iris.data, iris.target
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
X_reduced = pca.transform(X)
print("Reduced dataset shape:", X_reduced.shape)
import pylab as pl
pl.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y,
cmap='RdYlBu')
print("Meaning of the 2 components:")
for component in pca.components_:
print(" + ".join("%.3f x %s" % (value, name)
for value, name in zip(component,
iris.feature_names)))
from sklearn.cluster import KMeans
k_means = KMeans(n_clusters=3, random_state=0) # Fixing the RNG in kmeans
k_means.fit(X)
y_pred = k_means.predict(X)
pl.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y_pred,
cmap='RdYlBu');
Explanation: K-Means Clustering
End of explanation
from sklearn.datasets.samples_generator import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], s=50);
Explanation: K Means is an algorithm for unsupervised clustering: that is, finding clusters in data based on the data attributes alone (not the labels).
K Means is a relatively easy-to-understand algorithm. It searches for cluster centers which are the mean of the points within them, such that every point is closest to the cluster center it is assigned to.
Let's look at how KMeans operates on the simple clusters we looked at previously. To emphasize that this is unsupervised, we'll not plot the colors of the clusters:
End of explanation
from sklearn.cluster import KMeans
est = KMeans(4) # 4 clusters
est.fit(X)
y_kmeans = est.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='rainbow');
Explanation: By eye, it is relatively easy to pick out the four clusters. If you were to perform an exhaustive search for the different segmentations of the data, however, the search space would be exponential in the number of points. Fortunately, there is a well-known Expectation Maximization (EM) procedure which scikit-learn implements, so that KMeans can be solved relatively quickly.
End of explanation
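To make the EM procedure mentioned above concrete, here is a minimal NumPy sketch of the two alternating steps. It assumes Euclidean distance and that no cluster ever becomes empty; scikit-learn's implementation adds smarter initialisation and several optimisations.
# Minimal sketch of the K-Means EM loop (not scikit-learn's implementation)
import numpy as np

def kmeans_em(X, k, n_iters=100, seed=0):
    rng = np.random.RandomState(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # 1. guess some cluster centers
    for _ in range(n_iters):
        # E-step: assign each point to the nearest cluster center
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # M-step: move each center to the mean of its assigned points
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

centers_sketch, labels_sketch = kmeans_em(X, 4)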
from fig_code import plot_kmeans_interactive
plot_kmeans_interactive();
Explanation: The algorithm identifies the four clusters of points in a manner very similar to what we would do by eye!
The K-Means Algorithm: Expectation Maximization
K-Means is an example of an algorithm which uses an Expectation-Maximization approach to arrive at the solution.
Expectation-Maximization is a two-step approach which works as follows:
Guess some cluster centers
Repeat until converged
A. Assign points to the nearest cluster center
B. Set the cluster centers to the mean
Let's quickly visualize this process:
End of explanation |
542 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mass-univariate twoway repeated measures ANOVA on single trial power
This script shows how to conduct a mass-univariate repeated measures
ANOVA. As the model to be fitted assumes two fully crossed factors,
we will study the interplay between perceptual modality
(auditory VS visual) and the location of stimulus presentation
(left VS right). Here we use single trials as replications
(subjects) while iterating over time slices plus frequency bands
to fit our mass-univariate model. For the sake of simplicity we
will confine this analysis to one single channel of which we know
that it exposes a strong induced response. We will then visualize
each effect by creating a corresponding mass-univariate effect
image. We conclude with accounting for multiple comparisons by
performing a permutation clustering test using the ANOVA as
clustering function. The final results will be compared to
multiple comparisons using False Discovery Rate correction.
Step1: Set parameters
Step2: We have to make sure all conditions have the same counts, as the ANOVA
expects a fully balanced data matrix and does not forgive imbalances that
generously (risk of type-I error).
Step3: Create TFR representations for all conditions
Step4: Setup repeated measures ANOVA
We will tell the ANOVA how to interpret the data matrix in terms of factors.
This is done via the factor levels argument which is a list of the number
of factor levels for each factor.
Step5: Now we'll assemble the data matrix and swap axes so the trial replications
are the first dimension and the conditions are the second dimension.
Step6: While the iteration scheme used above for assembling the data matrix
makes sure the first two dimensions are organized as expected (with A =
modality and B = location)
Step7: Account for multiple comparisons using FDR versus permutation clustering test
First we need to slightly modify the ANOVA function to be suitable for
the clustering procedure. We also want to set some defaults.
Let's first override effects to confine the analysis to the interaction
Step8: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions
Step9: Create new stats image with only significant clusters
Step10: Now using FDR | Python Code:
# Authors: Denis Engemann <[email protected]>
# Eric Larson <[email protected]>
# Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import f_threshold_mway_rm, f_mway_rm, fdr_correction
from mne.datasets import sample
print(__doc__)
Explanation: Mass-univariate twoway repeated measures ANOVA on single trial power
This script shows how to conduct a mass-univariate repeated measures
ANOVA. As the model to be fitted assumes two fully crossed factors,
we will study the interplay between perceptual modality
(auditory VS visual) and the location of stimulus presentation
(left VS right). Here we use single trials as replications
(subjects) while iterating over time slices plus frequency bands
to fit our mass-univariate model. For the sake of simplicity we
will confine this analysis to one single channel of which we know
that it exposes a strong induced response. We will then visualize
each effect by creating a corresponding mass-univariate effect
image. We conclude with accounting for multiple comparisons by
performing a permutation clustering test using the ANOVA as
clustering function. The results final will be compared to
multiple comparisons using False Discovery Rate correction.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
tmin, tmax = -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
include = []
raw.info['bads'] += ['MEG 2443'] # bads
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
ch_name = 'MEG 1332'
# Load conditions
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0), preload=True,
reject=reject)
epochs.pick_channels([ch_name]) # restrict example to one channel
Explanation: Set parameters
End of explanation
epochs.equalize_event_counts(event_id)
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet.
decim = 2
frequencies = np.arange(7, 30, 3) # define frequencies of interest
n_cycles = frequencies / frequencies[0]
zero_mean = False # don't correct morlet wavelet to be of mean zero
# To have a true wavelet zero_mean should be True but here for illustration
# purposes it helps to spot the evoked response.
Explanation: We have to make sure all conditions have the same counts, as the ANOVA
expects a fully balanced data matrix and does not forgive imbalances that
generously (risk of type-I error).
End of explanation
epochs_power = list()
for condition in [epochs[k] for k in event_id]:
this_tfr = tfr_morlet(condition, frequencies, n_cycles=n_cycles,
decim=decim, average=False, zero_mean=zero_mean,
return_itc=False)
this_tfr.apply_baseline(mode='ratio', baseline=(None, 0))
this_power = this_tfr.data[:, 0, :, :] # we only have one channel.
epochs_power.append(this_power)
Explanation: Create TFR representations for all conditions
End of explanation
n_conditions = len(epochs.event_id)
n_replications = epochs.events.shape[0] / n_conditions
factor_levels = [2, 2] # number of levels in each factor
effects = 'A*B' # this is the default signature for computing all effects
# Other possible options are 'A' or 'B' for the corresponding main effects
# or 'A:B' for the interaction effect only (this notation is borrowed from the
# R formula language)
n_frequencies = len(frequencies)
times = 1e3 * epochs.times[::decim]
n_times = len(times)
Explanation: Setup repeated measures ANOVA
We will tell the ANOVA how to interpret the data matrix in terms of factors.
This is done via the factor levels argument which is a list of the number
of factor levels for each factor.
End of explanation
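As a quick aside (synthetic numbers, only to illustrate the expected layout), f_mway_rm takes an array of shape (n_subjects, n_conditions, n_observations) and, with factor_levels=[2, 2], interprets the four conditions in the order A1B1, A1B2, A2B1, A2B2:
# Toy call just to show the input layout and the per-effect outputs
toy = np.random.RandomState(0).randn(20, 4, 1)  # 20 "subjects", 4 conditions, 1 observation
fvals_toy, pvals_toy = f_mway_rm(toy, factor_levels=[2, 2], effects='A*B')
print(len(fvals_toy), len(pvals_toy))  # one entry per effect: A, B and A:B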
data = np.swapaxes(np.asarray(epochs_power), 1, 0)
# reshape last two dimensions in one mass-univariate observation-vector
data = data.reshape(n_replications, n_conditions, n_frequencies * n_times)
# so we have replications * conditions * observations:
print(data.shape)
Explanation: Now we'll assemble the data matrix and swap axes so the trial replications
are the first dimension and the conditions are the second dimension.
End of explanation
fvals, pvals = f_mway_rm(data, factor_levels, effects=effects)
effect_labels = ['modality', 'location', 'modality by location']
# let's visualize our effects by computing f-images
for effect, sig, effect_label in zip(fvals, pvals, effect_labels):
plt.figure()
# show naive F-values in gray
plt.imshow(effect.reshape(8, 211), cmap=plt.cm.gray, extent=[times[0],
times[-1], frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
# create mask for significant Time-frequency locations
effect = np.ma.masked_array(effect, [sig > .05])
plt.imshow(effect.reshape(8, 211), cmap='RdBu_r', extent=[times[0],
times[-1], frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(r"Time-locked response for '%s' (%s)" % (effect_label, ch_name))
plt.show()
Explanation: While the iteration scheme used above for assembling the data matrix
makes sure the first two dimensions are organized as expected (with A =
modality and B = location):
.. table:: Sample data layout
===== ==== ==== ==== ====
trial A1B1 A1B2 A2B1 A2B2
===== ==== ==== ==== ====
1 1.34 2.53 0.97 1.74
... ... ... ... ...
56 2.45 7.90 3.09 4.76
===== ==== ==== ==== ====
Now we're ready to run our repeated measures ANOVA.
Note. As we treat trials as subjects, the test only accounts for
time locked responses despite the 'induced' approach.
For analysis of induced power at the group level, averaged TFRs
are required.
End of explanation
effects = 'A:B'
Explanation: Account for multiple comparisons using FDR versus permutation clustering test
First we need to slightly modify the ANOVA function to be suitable for
the clustering procedure. We also want to set some defaults.
Let's first override effects to confine the analysis to the interaction
End of explanation
def stat_fun(*args):
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=False)[0]
# The ANOVA returns a tuple f-values and p-values, we will pick the former.
pthresh = 0.00001 # set threshold rather high to save some time
f_thresh = f_threshold_mway_rm(n_replications, factor_levels, effects,
pthresh)
tail = 1 # f-test, so tail > 0
n_permutations = 256 # Save some time (the test won't be too sensitive ...)
T_obs, clusters, cluster_p_values, h0 = mne.stats.permutation_cluster_test(
epochs_power, stat_fun=stat_fun, threshold=f_thresh, tail=tail, n_jobs=1,
n_permutations=n_permutations, buffer_size=None)
Explanation: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions: subjects X conditions X observations (optional).
The following function catches the list input and swaps the first and
the second dimension and finally calls the ANOVA function.
End of explanation
good_clusters = np.where(cluster_p_values < .05)[0]
T_obs_plot = np.ma.masked_array(T_obs,
np.invert(clusters[np.squeeze(good_clusters)]))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title("Time-locked response for 'modality by location' (%s)\n"
" cluster-level corrected (p <= 0.05)" % ch_name)
plt.show()
Explanation: Create new stats image with only significant clusters:
End of explanation
mask, _ = fdr_correction(pvals[2])
T_obs_plot2 = np.ma.masked_array(T_obs, np.invert(mask))
plt.figure()
for f_image, cmap in zip([T_obs, T_obs_plot2], [plt.cm.gray, 'RdBu_r']):
plt.imshow(f_image, cmap=cmap, extent=[times[0], times[-1],
frequencies[0], frequencies[-1]], aspect='auto',
origin='lower')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title("Time-locked response for 'modality by location' (%s)\n"
" FDR corrected (p <= 0.05)" % ch_name)
plt.show()
Explanation: Now using FDR:
End of explanation |
543 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook was created by Sergey Tomin for Workshop
Step1: Change RF parameters for the comparison with ASTRA
Step2: Initializing SpaceCharge
Step3: Comparison with ASTRA
Beam tracking with ASTRA was performed by Igor Zagorodnov (DESY). | Python Code:
# the output of plotting commands is displayed inline within frontends,
# directly below the code cell that produced it
%matplotlib inline
from time import time
# this python library provides generic shallow (copy) and deep copy (deepcopy) operations
from copy import deepcopy
# import from Ocelot main modules and functions
from ocelot import *
# import from Ocelot graphical modules
from ocelot.gui.accelerator import *
# import injector lattice
from ocelot.test.workshop.injector_lattice import *
# load beam distribution
# this function convert Astra beam distribution to Ocelot format - ParticleArray. ParticleArray is designed for tracking.
# in order to work with converters we have to import specific module from ocelot.adaptors
from ocelot.adaptors.astra2ocelot import *
Explanation: This notebook was created by Sergey Tomin for Workshop: Designing future X-ray FELs. Source and license info is on GitHub. August 2016.
Tutorial N3. Space Charge.
Second order tracking of 200k particles with the space charge effect.
As an example, we will use the lattice file (converted to Ocelot format) of the European XFEL Injector.
The space charge forces are calculated by solving the Poisson equation in the bunch frame.
Then the Lorentz transformed electromagnetic field is applied as a kick in the laboratory frame.
For the solution of the Poisson equation we use an integral representation of the electrostatic potential by convolution of the free-space Green's function with the charge distribution. The convolution equation is solved with the help of the Fast Fourier Transform (FFT). The same algorithm for solution of the 3D Poisson equation is used, for example, in ASTRA.
This example will cover the following topics:
Initialization of the Space Charge objects and the locations where they are applied
second order tracking with the space charge effect.
Requirements
injector_lattice.py - input file, the European XFEL Injector lattice.
beam_6MeV.ast - input file, initial beam distribution in ASTRA format (was obtained from s2e simulation performed with ASTRA).
Import of modules
End of explanation
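The Green's-function/FFT idea described above can be illustrated with a small generic sketch. This is not Ocelot's solver; the grid shape, a single spacing h for all axes, and the crude regularisation of the r = 0 singularity are all simplifying assumptions.
# Illustrative sketch: free-space Poisson solution by FFT convolution with zero padding
import numpy as np

def poisson_fft(rho, h, eps0=8.854187817e-12):
    nx, ny, nz = rho.shape
    # Green's function G = 1/(4*pi*eps0*r) on a doubled, zero-padded grid
    coords = lambda n: h * np.concatenate([np.arange(n), np.arange(-n, 0)])
    X, Y, Z = np.meshgrid(coords(nx), coords(ny), coords(nz), indexing='ij')
    r = np.sqrt(X**2 + Y**2 + Z**2)
    r[0, 0, 0] = h  # crude regularisation of the singularity at r = 0
    G = 1.0 / (4 * np.pi * eps0 * r)
    rho_pad = np.zeros_like(G)
    rho_pad[:nx, :ny, :nz] = rho
    phi = np.real(np.fft.ifftn(np.fft.fftn(G) * np.fft.fftn(rho_pad)))
    return phi[:nx, :ny, :nz] * h**3  # volume element of the convolution integral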
phi1=18.7268
V1=18.50662e-3/np.cos(phi1*pi/180)
C_A1_1_1_I1.v = V1; C_A1_1_1_I1.phi = phi1
C_A1_1_2_I1.v = V1; C_A1_1_2_I1.phi = phi1
C_A1_1_3_I1.v = V1; C_A1_1_3_I1.phi = phi1
C_A1_1_4_I1.v = V1; C_A1_1_4_I1.phi = phi1
C_A1_1_5_I1.v = V1; C_A1_1_5_I1.phi = phi1
C_A1_1_6_I1.v = V1; C_A1_1_6_I1.phi = phi1
C_A1_1_7_I1.v = V1; C_A1_1_7_I1.phi = phi1
C_A1_1_8_I1.v = V1; C_A1_1_8_I1.phi = phi1
phi13=180
V13=-20.2E-3/8/np.cos(phi13*pi/180)
C3_AH1_1_1_I1.v=V13; C3_AH1_1_1_I1.phi=phi13
C3_AH1_1_2_I1.v=V13; C3_AH1_1_2_I1.phi=phi13
C3_AH1_1_3_I1.v=V13; C3_AH1_1_3_I1.phi=phi13
C3_AH1_1_4_I1.v=V13; C3_AH1_1_4_I1.phi=phi13
C3_AH1_1_5_I1.v=V13; C3_AH1_1_5_I1.phi=phi13
C3_AH1_1_6_I1.v=V13; C3_AH1_1_6_I1.phi=phi13
C3_AH1_1_7_I1.v=V13; C3_AH1_1_7_I1.phi=phi13
C3_AH1_1_8_I1.v=V13; C3_AH1_1_8_I1.phi=phi13
p_array_init = astraBeam2particleArray(filename='beam_6MeV.ast')
# initialization of tracking method
method = MethodTM()
# for second order tracking we have to choose SecondTM
method.global_method = SecondTM
# for first order tracking uncomment next line
# method.global_method = TransferMap
# we will start simulation from point 3.2 from the gun. For this purpose marker was created (start_sim=Marker())
# and placed in 3.2 m after gun
# Q_38_I1 is quadrupole between RF cavities 1.3 GHz and 3.9 GHz
# C3_AH1_1_8_I1 is the last section of the 3.9 GHz cavity
lat = MagneticLattice(cell, start=start_sim, stop=Q_38_I1, method=method)
Explanation: Change RF parameters for the comparison with ASTRA
End of explanation
sc1 = SpaceCharge()
sc1.nmesh_xyz = [63, 63, 63]
sc1.low_order_kick = False
sc1.step = 1
sc5 = SpaceCharge()
sc5.nmesh_xyz = [63, 63, 63]
sc5.step = 5
sc5.low_order_kick = False
navi = Navigator(lat)
# add physics processes from the first element to the last of the lattice
navi.add_physics_proc(sc1, lat.sequence[0], C_A1_1_2_I1)
navi.add_physics_proc(sc5, C_A1_1_2_I1, lat.sequence[-1])
# defining the unit step in [m]
navi.unit_step = 0.02
# deep copy of the initial beam distribution
p_array = deepcopy(p_array_init)
start = time()
tws_track, p_array = track(lat, p_array, navi)
print("time exec: ", time() - start, "sec")
# you can change top_plot argument, for example top_plot=["alpha_x", "alpha_y"]
plot_opt_func(lat, tws_track, top_plot=["E"], fig_name=0, legend=False)
plt.show()
Explanation: Initializing SpaceCharge
End of explanation
sa, bx_sc, by_sc, bx_wo_sc, by_wo_sc = np.loadtxt("astra_sim.txt", usecols=(0, 1, 2, 3, 4), unpack=True)
s = [tw.s for tw in tws_track]
bx = [tw.beta_x for tw in tws_track]
by = [tw.beta_y for tw in tws_track]
ax = plot_API(lat, legend=False)
ax.plot(s, bx, "r", label="Ocelot, bx")
ax.plot(sa-3.2, bx_sc, "b-",label="ASTRA, bx")
ax.plot(s, by, "r", label="Ocelot, by")
ax.plot(sa-3.2, by_sc, "b-",label="ASTRA, by")
ax.legend()
plt.show()
Explanation: Comparison with ASTRA
Beam tracking with ASTRA was performed by Igor Zagorodnov (DESY).
End of explanation |
544 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
< 3. Data processing | Contents | 6. Statistical analysis >
Step1: Geocoding
Geocoding consists of obtaining the geographic reference points of real-world objects. An interesting case is that of physical addresses.
It is possible to do geocoding by hand in public mapping tools such as Google or OpenStreetMap. It is also possible to use Python libraries such as geopandas for systematic geocoding. OpenStreetMap's nominatim service supports geocoding.
Another geopandas example | Python Code:
import geopandas
Explanation: < 3. Data processing | Contents | 6. Statistical analysis >
End of explanation
geopandas.tools.geocode('2900 boulevard Edouard Montpetit, Montreal', provider='nominatim', user_agent="mon-application")
Explanation: Geocoding
Geocoding consists of obtaining the geographic reference points of real-world objects. An interesting case is that of physical addresses.
It is possible to do geocoding by hand in public mapping tools such as Google or OpenStreetMap. It is also possible to use Python libraries such as geopandas for systematic geocoding. OpenStreetMap's nominatim service supports geocoding.
Another geopandas example: https://geopandas.org/geocoding.html
End of explanation |
545 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Passive
Plots a passive learning curve w.r.t. ATLAS objects. Trained, tested on RGZ, split on compact/resolved. Testing on RGZ instead of Norris because we believe it to be reasonably accurate and it's also a lot bigger; if we want a good idea of how this curve levels out we really want to use as much data as possible. Splitting on compact/resolved because we expect compact to level out a lot faster (possibly very fast indeed).
Step1: Compact
Step2: Resolved | Python Code:
import astropy.io.ascii as asc, numpy, h5py, sklearn.linear_model, crowdastro.crowd.util, pickle, scipy.spatial
import matplotlib.pyplot as plt
%matplotlib inline
with open('/Users/alger/data/Crowdastro/sets_atlas.pkl', 'rb') as f:
atlas_sets = pickle.load(f)
atlas_sets_compact = atlas_sets['RGZ & compact']
atlas_sets_resolved = atlas_sets['RGZ & resolved']
with open('/Users/alger/data/Crowdastro/sets_swire.pkl', 'rb') as f:
swire_sets = pickle.load(f)
swire_sets_compact = swire_sets['RGZ & compact']
swire_sets_resolved = swire_sets['RGZ & resolved']
with h5py.File('/Users/alger/data/Crowdastro/swire.h5') as f:
swire_features = f['features'].value
with h5py.File('/Users/alger/data/Crowdastro/crowdastro-swire.h5') as f:
swire_names = [i.decode('ascii') for i in f['/swire/cdfs/string'].value]
swire_coords = f['/swire/cdfs/numeric'][:, :2]
swire_labels = {i['swire']: i['rgz_label'] for i in asc.read('/Users/alger/data/SWIRE/all_labels.csv')}
table = asc.read('/Users/alger/data/Crowdastro/one-table-to-rule-them-all.tbl')
swire_tree = scipy.spatial.KDTree(swire_coords)
Explanation: Passive
Plots a passive learning curve w.r.t. ATLAS objects. Trained, tested on RGZ, split on compact/resolved. Testing on RGZ instead of Norris because we believe it to be reasonably accurate and it's also a lot bigger; if we want a good idea of how this curve levels out we really want to use as much data as possible. Splitting on compact/resolved because we expect compact to level out a lot faster (possibly very fast indeed).
End of explanation
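The metric used throughout is crowdastro.crowd.util.balanced_accuracy; assuming the usual definition (mean recall over the positive and negative classes), an equivalent sketch looks like this:
# Sketch of balanced accuracy under the usual definition (mean per-class recall)
import numpy as np

def balanced_accuracy_sketch(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tpr = (y_pred & y_true).sum() / max(y_true.sum(), 1)        # recall on positives
    tnr = (~y_pred & ~y_true).sum() / max((~y_true).sum(), 1)   # recall on negatives
    return (tpr + tnr) / 2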
def test_on_atlas_sets(atlas_sets, swire_sets):
subset_sizes = numpy.logspace(numpy.log2(5),
numpy.log2(len(atlas_sets[0][0])),
base=2, num=10)
n_atlas = []
n_swire = []
bas = []
for (train, test), (_, test_swire) in zip(atlas_sets, swire_sets):
key_to_row = {}
for row in table:
key_to_row[row['Key']] = row
for subset_size in subset_sizes:
print(subset_size, end=' ')
# Subsample train.
subset_size = int(subset_size)
train_subset = list(train)
numpy.random.shuffle(train_subset)
train_subset = train_subset[:subset_size]
# Get coords.
ras = [key_to_row[k]['Component RA (Franzen)'] for k in train_subset]
decs = [key_to_row[k]['Component DEC (Franzen)'] for k in train_subset]
coords = list(zip(ras, decs))
# Find nearby SWIREs.
nearby = sorted({int(i) for i in numpy.concatenate(swire_tree.query_ball_point(coords, 1 / 60))})
# Train on the features.
features = swire_features[nearby]
labels = [swire_labels[swire_names[n]] == 'True' for n in nearby]
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced', C=1e10)
lr.fit(features, labels)
# Compute accuracy.
test_labels = [swire_labels[swire_names[n]] == 'True' for n in test_swire]
test_features = swire_features[test_swire]
acc = crowdastro.crowd.util.balanced_accuracy(test_labels, lr.predict(test_features))
n_atlas.append(int(subset_size))
n_swire.append(len(nearby))
bas.append(acc)
print()
return n_atlas, n_swire, bas
n_atlas, n_swire, bas = test_on_atlas_sets(atlas_sets_compact, swire_sets_compact)
plt.scatter(n_atlas, bas, alpha=0.7)
plt.title('Passive Learning Curve — Compact')
plt.xlabel('Number of radio objects')
plt.ylabel('Balanced accuracy')
plt.xscale('log')
Explanation: Compact
End of explanation
n_atlas_resolved, n_swire_resolved, bas_resolved = test_on_atlas_sets(atlas_sets_resolved, swire_sets_resolved)
plt.scatter(n_atlas_resolved, bas_resolved, alpha=0.7)
plt.title('Passive Learning Curve — Resolved')
plt.xlabel('Number of radio objects')
plt.ylabel('Balanced accuracy')
plt.xscale('log')
plt.scatter(n_atlas_resolved, numpy.array(bas_resolved) * 100, alpha=0.7, color='red', label='Resolved')
plt.scatter(n_atlas, numpy.array(bas) * 100, alpha=0.7, color='green', label='Compact')
plt.title('Accuracy against number of objects in training set')
plt.xlabel('Number of radio objects')
plt.ylabel('Balanced accuracy (%)')
plt.xscale('log')
plt.legend()
n_atlas_to_acc_compact = {n: [] for n in n_atlas}
for n, ba in zip(n_atlas, bas):
n_atlas_to_acc_compact[n].append(ba)
xs_compact = []
ys_compact = []
yerr_compact = []
for n in sorted(set(n_atlas)):
xs_compact.append(n)
ys_compact.append(numpy.mean(n_atlas_to_acc_compact[n]))
yerr_compact.append(numpy.std(n_atlas_to_acc_compact[n]))
xs_compact = numpy.array(xs_compact)
ys_compact = numpy.array(ys_compact)
yerr_compact = numpy.array(yerr_compact)
ylow_compact = ys_compact - yerr_compact
yhigh_compact = ys_compact + yerr_compact
n_atlas_to_acc_resolved = {n: [] for n in n_atlas_resolved}
for n, ba in zip(n_atlas_resolved, bas_resolved):
n_atlas_to_acc_resolved[n].append(ba)
xs_resolved = []
ys_resolved = []
yerr_resolved = []
for n in sorted(set(n_atlas_resolved)):
xs_resolved.append(n)
ys_resolved.append(numpy.mean(n_atlas_to_acc_resolved[n]))
yerr_resolved.append(numpy.std(n_atlas_to_acc_resolved[n]))
xs_resolved = numpy.array(xs_resolved)
ys_resolved = numpy.array(ys_resolved)
yerr_resolved = numpy.array(yerr_resolved)
ylow_resolved = ys_resolved - yerr_resolved
yhigh_resolved = ys_resolved + yerr_resolved
plt.plot(xs_compact, ys_compact, alpha=1, color='green', label='compact', marker='x')
plt.fill_between(xs_compact, ylow_compact, yhigh_compact, alpha=.2, color='green')
plt.plot(xs_resolved, ys_resolved, alpha=1, color='blue', label='resolved', marker='x')
plt.fill_between(xs_resolved, ylow_resolved, yhigh_resolved, alpha=.2, color='blue')
plt.title('Accuracy against number of objects in training set')
plt.xlabel('Number of radio objects')
plt.ylabel('Balanced accuracy (%)')
plt.xscale('log')
plt.legend()
plt.savefig('/Users/alger/repos/crowdastro-projects/ATLAS-CDFS/passive.pdf')
Explanation: Resolved
End of explanation |
546 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-3', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: TEST-INSTITUTE-3
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution ITD (i.e. there is no explicit ITD) but assume a distribution and compute fluxes accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
547 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Minimization
When using a Maximum Likelihood analysis we want to find the maximum of the likelihood $L(\vec{\theta})$ given one or more datasets (i.e., plugin instances) and one model containing one or more sources with free parameters $\vec{\theta}$. Most of the available algorithms for function optimization find the minimum, not the maximum, of a function. Also, since the likelihood function is usually the product of many probabilities, bounded to be $0 < p < 1$, $L(\vec{\theta})$ tend to be very small. Hence, it is much more tractable numerically to deal with the logarithm of the likelihood. Therefore, instead of finding the maximum of the likelihood $L$, we find the minimum of the $-\log{L(\vec{\theta})}$ function. Of course, the values of $\vec{\theta}$ minimizing $-\log{L}$ are the same that maximize $L$, i.e.
Step1: Let's get a JointLikelihood object like the one we would have in a normal 3ML analysis. We use a custom function, prepared for this tutorial, which gives a JointLikelihood object having a very simple model with one free parameter ($\mu$), and with a likelihood having a very simple shape
Step2: Now let's set up the Minuit minimizer and minimize the -log(L), starting from $\mu = 1$
Step3: Now let's do the same, but starting from $\mu=80$
Step4: and from $\mu=20$
Step5: It is clear that, depending on the starting point, Minuit takes different steps trying to reach the minimum. In this last case, at one point Minuit overshoots the minimum, jumping all the way from $\sim 30$ to $\sim 80$, then realizes the mistake and goes back.
In the case of a simple, convex likelihood like this one, Minuit easily finds the minimum independently of the starting point.
Global minimization
Now let us consider the case of a more complicated $-\log{L}$ function
Step6: This likelihood function has 3 minima
Step7: Minuit has found the local minimum, not the global one. Now we start from 80
Step8: Now we found the global minimum. This is a simple example to show that the solution found by a local minimizer can depend on the starting point, and might not be the global minimum. In practice, one can rarely be guaranteed that the likelihood function has only one minimum. This is especially true in many dimensions and in cases of data with poor statistics.
To alleviate this problem 3ML offers some "global minimizers". While it is impossible to guarantee that a global minimum will be reached, these minimizers are much more robust against this kind of problem, at the expense of a considerably longer runtime.
In 3ML each global minimizer must be associated with a local minimizer. The latter is used as a final step to improve the solution found by the global minimizer and to compute the error matrix.
Grid minimizer
The idea behind this is very simple
Step9: The GRID minimizer has found the global minimum.
Of course the GRID minimizer can be used in multiple dimensions (simply define a grid for the other parameters as well). It is a simple brute force solution that works well in practice, especially when the likelihood function computation is not too time-consuming. When there are many parameters, you should choose carefully the parameters to use in the grid. For example, when looking for a spectral line in a spectrum, it makes sense to use the location of the line as parameter in the grid, but not its normalization.
PAGMO minimizer
The Pagmo minimizer is an open-source optimization suite provided by the European Space Agency
Step10: Multinest minimizer
MultiNest is a Bayesian inference tool which calculates the evidence and explores the parameter space which may contain multiple posterior modes and pronounced (curving) degeneracies in moderately high dimensions. It is not strictly a minimizer. However, given its capacity to explore multiple modes of the likelihood function (i.e., multiple local minima), it can be used as a global minimizer.
The Multinest minimizer in 3ML forms a posterior probability using the likelihood multiplied by uninformative priors. The priors are automatically chosen (uniform if the allowed parameter range is less than 2 orders of magnitude or negative values are allowed, log-uniform otherwise). Then, Multinest is run in multimodal mode (multimodal=True). At the end of the run, among all the values of the $-\log{L}$ traversed by Multinest, the smallest one is chosen as the starting point for the local minimizer.
from threeML import *
import matplotlib.pyplot as plt
%matplotlib inline
from threeML.minimizer.tutorial_material import *
Explanation: Minimization
When using a Maximum Likelihood analysis we want to find the maximum of the likelihood $L(\vec{\theta})$ given one or more datasets (i.e., plugin instances) and one model containing one or more sources with free parameters $\vec{\theta}$. Most of the available algorithms for function optimization find the minimum, not the maximum, of a function. Also, since the likelihood function is usually the product of many probabilities, bounded to be $0 < p < 1$, $L(\vec{\theta})$ tend to be very small. Hence, it is much more tractable numerically to deal with the logarithm of the likelihood. Therefore, instead of finding the maximum of the likelihood $L$, we find the minimum of the $-\log{L(\vec{\theta})}$ function. Of course, the values of $\vec{\theta}$ minimizing $-\log{L}$ are the same that maximize $L$, i.e.:
argmax$_{\vec{\theta}}~\left( L(\vec{\theta}) \right)$ = argmin$_{\vec{\theta}}~\left(-\log{L(\vec{\theta})}\right)$.
Various minimizers are available in 3ML. We can divide them into two groups: local minimizers and global minimizers.
Local minimizers
Most of the existing optimization algorithms are local minimizers (MINUIT, Levenberg–Marquardt, Newton...).
A local minimizer starts from the current values for the free parameters $\vec{\theta}$ and tries to reach the closest minimum of a function $f(\vec{\theta})$ (in 3ML this is usually the $-\log{L}$).
Many minimizers are based on the idea of gradient descent, i.e., they compute the local gradient of $f(\vec{\theta})$ and follow the function along the direction of steepest descent until the minimum. There are however also gradient-free algorithms, like for example COBYLA. While going into the details of how each algorithm works is beyond the scope of this tutorial, we illustrate here an example by using the Minuit algorithm.
Let's start by importing what we need in the following:
End of explanation
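Before moving to the 3ML machinery, here is a minimal toy sketch (plain Python, not part of the 3ML API) of the gradient-descent idea described above: starting from some x0, repeatedly step against a numerical estimate of the gradient until the steps become negligible.
# Toy illustration of gradient descent on a 1-D function (not 3ML code)
def toy_gradient_descent(f, x0, learning_rate=0.1, tol=1e-6, max_iter=10000):
    x = float(x0)
    eps = 1e-5
    for _ in range(max_iter):
        # Numerical estimate of df/dx by central differences
        grad = (f(x + eps) - f(x - eps)) / (2 * eps)
        step = learning_rate * grad
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the minimum of (x - 40)**2 is at x = 40
print(toy_gradient_descent(lambda x: (x - 40.0) ** 2, x0=1.0))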
# This returns a JointLikelihood object with a simple likelihood function,
# and the corresponding Model instance. These objects are what you will have
# in a typical 3ML analysis. The Model contains one point source, named "test",
# with a spectrum called "simple"
jl, model = get_joint_likelihood_object_simple_likelihood()
# Let's look at the likelihood function, which in this illustrative example
# has a very simple shape
_ = plot_likelihood_function(jl)
Explanation: Let's get a JointLikelihood object like the one we would have in a normal 3ML analysis. We use a custom function, prepared for this tutorial, which gives a JointLikelihood object having a very simple model with one free parameter ($\mu$), and with a likelihood having a very simple shape:
End of explanation
model.test.spectrum.main.shape.mu = 1.0
# The minuit minimizer is the default, so no setup is necessary
# quiet = True means that no result will be printed
res = jl.fit(quiet=True)
# This plots the path that Minuit has traveled looking for the minimum
# Arrows connect the different points, starting from 1.0 and going
# to 40, the minimum
fig = plot_minimizer_path(jl)
Explanation: Now let's set up the Minuit minimizer and minimize the -log(L), starting from $\mu = 1$:
End of explanation
model.test.spectrum.main.shape.mu = 80.0
res = jl.fit(quiet=True)
fig = plot_minimizer_path(jl)
Explanation: Now let's do the same, but starting from $\mu=80$:
End of explanation
model.test.spectrum.main.shape.mu = 20.0
res = jl.fit(quiet=True)
fig = plot_minimizer_path(jl)
Explanation: and from $\mu=20$:
End of explanation
jl, model = get_joint_likelihood_object_complex_likelihood()
_ = plot_likelihood_function(jl)
Explanation: It is clear that, depending on the starting point, minuit makes different steps trying to reach the minimum. In this last case, at one point Minuit overshoots the minimum jumping all the way from $\sim 30$ to $\sim 80$, then realizes the mistake and goes back.
In the case of a simple, convex likelihood like this one, Minuit finds easily the minimum independently of the starting point.
Global minimization
Now let us consider the case of a more complicated $-\log{L}$ function:
End of explanation
model.test.spectrum.main.shape.mu = 1.0
res = jl.fit(quiet=True)
fig = plot_minimizer_path(jl)
Explanation: This likelihood function has 3 minima: 2 are local and one (at $\mu = 60$) is the global minimum. Let's see how Minuit performs in this case. First we start from 1.0:
End of explanation
model.test.spectrum.main.shape.mu = 70
res = jl.fit(quiet=True)
fig = plot_minimizer_path(jl)
Explanation: Minuit has found the local minimum, not the global one. Now we start from 80:
End of explanation
# Create an instance of the GRID minimizer
grid_minimizer = GlobalMinimization("grid")
# Create an instance of a local minimizer, which will be used by GRID
local_minimizer = LocalMinimization("minuit")
# Define a grid for mu as 10 steps between 1 and 80
my_grid = {model.test.spectrum.main.shape.mu: np.linspace(1, 80, 10)}
# Setup the global minimization
# NOTE: the "callbacks" option is useless in a normal 3ML analysis, it is
# here only to keep track of the evolution for the plot
grid_minimizer.setup(second_minimization=local_minimizer, grid = my_grid,
callbacks=[get_callback(jl)])
# Set the minimizer for the JointLikelihood object
jl.set_minimizer(grid_minimizer)
jl.fit()
fig = plot_minimizer_path(jl)
Explanation: Now we found the global minimum. This is a simple example to show that the solution found by a local minimizer can depend on the starting point, and might not be the global minimum. In practice, one can rarely be guaranteed that the likelihood function has only one minimum. This is especially true in many dimensions and in cases of data with poor statistics.
To alleviate this problem 3ML offers some "global minimizers". While it is impossible to guarantee that a global minimum will be reached, these minimizers are much more robust against this kind of problem, at the expense of a considerably longer runtime.
In 3ML each global minimizer must be associated with a local minimizer. The latter is used as a final step to improve the solution found by the global minimizer and to compute the error matrix.
Grid minimizer
The idea behind this is very simple: the user defines a grid of values for the parameters, which are used as starting points for minimization performed by a local minimizer. At the end, the solution with the smallest value for $-\log{L}$ will be used as the final solution.
For example, let's define a grid of 10 values for $\mu$. This means that 3ML will perform 10 local minimizations starting each time from a different point in the grid:
End of explanation
# Reset the parameter to a value different from the best fit found
# by previous algorithms
jl, model = get_joint_likelihood_object_complex_likelihood()
model.test.spectrum.main.shape.mu = 2.5
# Create an instance of the PAGMO minimizer
pagmo_minimizer = GlobalMinimization("pagmo")
# Select one of the many algorithms provided by pagmo
# (see https://esa.github.io/pagmo2/docs/algorithm_list.html
# for a list).
# In this case we use the Artificial Bee Colony algorithm
# (see here for a description: https://link.springer.com/article/10.1007/s10898-007-9149-x)
import pygmo
my_algorithm = pygmo.algorithm(pygmo.bee_colony(gen=20))
# Create an instance of a local minimizer
local_minimizer = LocalMinimization("minuit")
# Setup the global minimization
pagmo_minimizer.setup(second_minimization = local_minimizer, algorithm=my_algorithm,
islands=10, population_size=10, evolution_cycles=1)
# Set the minimizer for the JointLikelihood object
jl.set_minimizer(pagmo_minimizer)
jl.fit()
# NOTE: given the inner working of pygmo, it is not possible
# to plot the evolution
Explanation: The GRID minimizer has found the global minimum.
Of course the GRID minimizer can be used in multiple dimensions (simply define a grid for the other parameters as well). It is a simple brute force solution that works well in practice, especially when the likelihood function computation is not too time-consuming. When there are many parameters, you should choose carefully the parameters to use in the grid. For example, when looking for a spectral line in a spectrum, it makes sense to use the location of the line as parameter in the grid, but not its normalization.
PAGMO minimizer
The Pagmo minimizer is an open-source optimization suite provided by the European Space Agency:
https://esa.github.io/pagmo2/
It contains a lot of algorithms for optimization of different kinds:
https://esa.github.io/pagmo2/docs/algorithm_list.html
and it is very powerful. In order to be able to use it you need to install the python package pygmo (make sure to have version >= 2, as the old version 1.x has a different API and won't work with 3ML).
In Pagmo/pygmo, candidate solutions to the minimization are called "individuals". A population of individuals over which an algorithm acts to improve the solutions is called "island". An ensamble of islands that can share solutions along a defined topology and thus learn on their reciprocal progress is called "archipelago". The evolution of the populations can be executed more than once ("evolution cycles").
After the pygmo section of the optimization has been completed, the secondary minimizer will be used to further improve on the solution (if possible) and to compute the covariance matrix.
End of explanation
# Reset the parameter to a value different from the best fit found
# by previous algorithms
jl, model = get_joint_likelihood_object_complex_likelihood()
model.test.spectrum.main.shape.mu = 5.0
# Create an instance of the PAGMO minimizer
multinest_minimizer = GlobalMinimization("multinest")
# Create an instance of a local minimizer
local_minimizer = LocalMinimization("minuit")
# Setup the global minimization
multinest_minimizer.setup(second_minimization = local_minimizer, live_points=100)
# Set the minimizer for the JointLikelihood object
jl.set_minimizer(multinest_minimizer)
jl.fit()
# Plots the point traversed by Multinest
fig = plot_minimizer_path(jl, points=True)
Explanation: Multinest minimizer
MultiNest is a Bayesian inference tool which calculates the evidence and explores the parameter space which may contain multiple posterior modes and pronounced (curving) degeneracies in moderately high dimensions. It is not strictly a minimizer. However, given its capacity to explore multiple modes of the likelihood function (i.e., multiple local minima), it can be used as a global minimizer.
The Multinest minimizer in 3ML forms a posterior probability using the likelihood multiplied by uninformative priors. The priors are automatically chosen (uniform if the allowed parameter range is less than 2 orders of magnitude or negative values are allowed, log-uniform otherwise). Then, Multinest is run in multimodal mode (multimodal=True). At the end of the run, among all the values of the $-\log{L}$ traversed by Multinest, the smallest one is chosen as the starting point for the local minimizer.
End of explanation |
548 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Largest product in a grid
Problem 11
In the 20 × 20 grid below, four numbers along a diagonal line have been marked in red.
08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08
49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00
81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65
52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91
22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80
24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50
32 98 81 28 64 23 67 10 <font color='red'>26</font> 38 40 67 59 54 70 66 18 38 64 70
67 26 20 68 02 62 12 20 95 <font color='red'>63</font> 94 39 63 08 40 91 66 49 94 21
24 55 58 05 66 73 99 26 97 17 <font color='red'>78</font> 78 96 83 14 88 34 89 63 72
21 36 23 09 75 00 76 44 20 45 35 <font color='red'>14</font> 00 61 33 97 34 31 33 95
78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92
16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57
86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58
19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40
04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66
88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69
04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36
20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16
20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54
01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48
The product of these numbers is $26 × 63 × 78 × 14 = 1788696$.
What is the greatest product of four adjacent numbers in the same direction (up, down, left, right, or diagonally) in the 20 × 20 grid?
Step1: Highly divisible triangular number
Problem 12
The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be $1+2+3+4+5+6+7=28$. The first ten terms would be
Step2: Large sum
Problem 13
Work out the first ten digits of the sum of the following one-hundred 50-digit numbers.
Step3: Longest Collatz sequence
Problem 14
The following iterative sequence is defined for the set of positive integers
Step4: Lattice paths
Problem 15
Starting in the top left corner of a 2×2 grid, and only being able to move to the right and down, there are exactly 6 routes to the bottom right corner.
How many such routes are there through a 20×20 grid?
Step5: Power digit sum
Problem 16
$2^{15}=32768$ and the sum of its digits is 3+2+7+6+8=26.
What is the sum of the digits of the number $2^{1000}$?
Step6: Number letter counts
Problem 17
If the numbers 1 to 5 are written out in words
Step7: Maximum path sum I
Problem 18
By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.
3
7 4
2 4 6
8 5 9 3
That is, $3+7+4+9=23$.
Find the maximum total from top to bottom of the triangle below
Step8: Counting Sundays
Problem 19
You are given the following information, but you may prefer to do some research for yourself.
1 Jan 1900 was a Monday.
Thirty days has September,
April, June and November.
All the rest have thirty-one,
Saving February alone,
Which has twenty-eight, rain or shine.
And on leap years, twenty-nine.
A leap year occurs on any year evenly divisible by 4, but not on a century unless it is divisible by 400.
How many Sundays fell on the first of the month during the twentieth century (1 Jan 1901 to 31 Dec 2000)?
Step9: Factorial digit sum
Problem 20
n! means n × (n − 1) × ... × 3 × 2 × 1
For example, 10! = 10 × 9 × ... × 3 × 2 × 1 = 3628800,
and the sum of the digits in the number 10! is 3 + 6 + 2 + 8 + 8 + 0 + 0 = 27.
Find the sum of the digits in the number 100! | Python Code:
from euler import Seq, timer
import numpy as np
def p011():
table = np.loadtxt(open("data/p011.txt","rb"),delimiter=" ", dtype=np.int)
rows, columns = np.shape(table)
def collect(i,j,di,dj):
step = 4
acc = 1
while True:
if step==0:
return acc
elif (i<0) | (i>=rows) | (j<0) | (j>=columns):
return 0
else:
acc *= table[i,j]
step -= 1
i += di
j += dj
def goRight(i,j): return collect(i,j,0,1)
def goDown(i,j): return collect(i,j,1,0)
def goDiag1(i,j): return collect(i,j,1,1)
def goDiag2(i,j): return collect(i,j,1,-1)
return (
[[goRight(i,j), goDown(i,j), goDiag1(i,j), goDiag2(i,j)]
        for i in range(rows)       # start at row 0 so products beginning in the first row are not skipped
        for j in range(columns)]   # likewise for the first column
>> Seq.flatten
>> Seq.max)
timer(p011)
Explanation: Largest product in a grid
Problem 11
In the 20 × 20 grid below, four numbers along a diagonal line have been marked in red.
08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08
49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00
81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65
52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91
22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80
24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50
32 98 81 28 64 23 67 10 <font color='red'>26</font> 38 40 67 59 54 70 66 18 38 64 70
67 26 20 68 02 62 12 20 95 <font color='red'>63</font> 94 39 63 08 40 91 66 49 94 21
24 55 58 05 66 73 99 26 97 17 <font color='red'>78</font> 78 96 83 14 88 34 89 63 72
21 36 23 09 75 00 76 44 20 45 35 <font color='red'>14</font> 00 61 33 97 34 31 33 95
78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92
16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57
86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58
19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40
04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66
88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69
04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36
20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16
20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54
01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48
The product of these numbers is $26 × 63 × 78 × 14 = 1788696$.
What is the greatest product of four adjacent numbers in the same direction (up, down, left, right, or diagonally) in the 20 × 20 grid?
End of explanation
from euler import Seq, DivisorSigma, timer
def p012():
return (
Seq.unfold(lambda (n,m): (n+m, (n+1, m+n)), (1,0))
>> Seq.find(lambda n: DivisorSigma(n) > 500))
timer(p012)
Explanation: Highly divisible triangular number
Problem 12
The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be $1+2+3+4+5+6+7=28$. The first ten terms would be:
$1,3,6,10,15,21,28,36,45,55,...$
Let us list the factors of the first seven triangle numbers:
$1: 1$
$3: 1,3$
$6: 1,2,3,6$
$10: 1,2,5,10$
$15: 1,3,5,15$
$21: 1,3,7,21$
$28: 1,2,4,7,14,28$
We can see that 28 is the first triangle number to have over five divisors.
What is the value of the first triangle number to have over five hundred divisors?
End of explanation
from euler import Seq, timer
def p013():
return int(
str(open('data/p013.txt').read().splitlines()
>> Seq.map(long)
>> Seq.sum)[0:10])
timer(p013)
Explanation: Large sum
Problem 13
Work out the first ten digits of the sum of the following one-hundred 50-digit numbers.
End of explanation
from euler import memoize, snd, timer
@memoize
def collatz(n):
if n == 1:
x = 1
elif n%2 == 0:
x = 1 + collatz(int(n/2))
else:
x = 1 + collatz(int(3*n+1))
return x
def p014():
    # the problem asks for the starting number (index 0), not the chain length
    return max([(i, collatz(i)) for i in range(1,1000000)], key=snd)[0]
timer(p014)
Explanation: Longest Collatz sequence
Problem 14
The following iterative sequence is defined for the set of positive integers:
n → n/2 (n is even)
n → 3n + 1 (n is odd)
Using the rule above and starting with 13, we generate the following sequence:
13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1
It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1.
Which starting number, under one million, produces the longest chain?
NOTE: Once the chain starts the terms are allowed to go above one million.
End of explanation
from euler import timer
from math import factorial
def p015():
return factorial(40) / factorial(20) / factorial(20)
timer(p015)
Explanation: Lattice paths
Problem 15
Starting in the top left corner of a 2×2 grid, and only being able to move to the right and down, there are exactly 6 routes to the bottom right corner.
How many such routes are there through a 20×20 grid?
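A one-line justification (added here, not in the original statement): every route is a sequence of exactly 20 right-moves and 20 down-moves, so the number of routes is the central binomial coefficient
$$\binom{40}{20} = \frac{40!}{20!\,20!},$$
which is exactly what the factorial expression in the code computes.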
End of explanation
from euler import timer
def p016():
return sum(int(n) for n in str(2L ** 1000))
timer(p016)
Explanation: Power digit sum
Problem 16
$2^{15}=32768$ and the sum of its digits is 3+2+7+6+8=26.
What is the sum of the digits of the number $2^{1000}$?
End of explanation
from euler import timer
read_basics = {0: 0, 1: 3, 2: 3, 3: 5,
4: 4, 5: 4, 6: 3, 7: 5,
8: 5, 9: 4, 10: 3, 11: 6,
12: 6, 13: 8, 14: 8, 15: 7,
16: 7, 17: 9, 18: 8, 19: 8,
20: 6, 30: 6, 40: 5, 50: 5,
60: 5, 70: 7, 80: 6, 90: 6}
def read_length(x):
if x==1000:
return 3+8
elif x<=20:
return read_basics[x]
elif x<100:
ten = x/10 * 10
last = x%10
return read_basics[ten] + read_basics[last]
else:
hund = x/100
if x%100==0:
return read_basics[hund] + 7
else:
return read_basics[hund] + 7 + 3 + read_length(x%100)
def p017():
return sum(read_length(i) for i in range(1,1001))
timer(p017)
Explanation: Number letter counts
Problem 17
If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are $3+3+5+4+4=19$ letters used in total.
If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?
NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
End of explanation
from euler import Seq, timer
def p018():
return (
open('data/p018.txt').read().splitlines()
>> Seq.map(lambda s: s.split(' ') >> Seq.map(int))
>> Seq.rev
>> Seq.reduce(lambda a,b: a
>> Seq.window(2)
>> Seq.map(max)
>> Seq.zip(b)
>> Seq.map(sum))
>> Seq.head)
timer(p018)
Explanation: Maximum path sum I
Problem 18
By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.
3
7 4
2 4 6
8 5 9 3
That is, $3+7+4+9=23$.
Find the maximum total from top to bottom of the triangle below:
75
95 64
17 47 82
18 35 87 10
20 04 82 47 65
19 01 23 75 03 34
88 02 77 73 07 63 67
99 65 04 28 06 16 70 92
41 41 26 56 83 40 80 70 33
41 48 72 33 47 32 37 16 94 29
53 71 44 65 25 43 91 52 97 51 14
70 11 33 28 77 73 17 78 39 68 17 57
91 71 52 38 17 14 91 43 58 50 27 29 48
63 66 04 68 89 53 67 30 73 16 69 87 40 31
04 62 98 27 23 09 70 98 73 93 38 53 60 04 23
NOTE: As there are only 16384 routes, it is possible to solve this problem by trying every route. However, Problem 67, is the same challenge with a triangle containing one-hundred rows; it cannot be solved by brute force, and requires a clever method! ;o)
End of explanation
from datetime import date, timedelta
from euler import Seq, timer
def p019():
beg_dt = date(1901,1,1)
end_dt = date(2000,12,31)
return (
Seq.init((end_dt - beg_dt).days + 1, lambda x: beg_dt + timedelta(x))
        >> Seq.filter(lambda x: (x.day == 1) & (x.weekday() == 6))  # date.weekday() returns 6 for Sunday
>> Seq.length)
timer(p019)
Explanation: Counting Sundays
Problem 19
You are given the following information, but you may prefer to do some research for yourself.
1 Jan 1900 was a Monday.
Thirty days has September,
April, June and November.
All the rest have thirty-one,
Saving February alone,
Which has twenty-eight, rain or shine.
And on leap years, twenty-nine.
A leap year occurs on any year evenly divisible by 4, but not on a century unless it is divisible by 400.
How many Sundays fell on the first of the month during the twentieth century (1 Jan 1901 to 31 Dec 2000)?
End of explanation
from math import factorial
from euler import Seq, timer
def p020():
return str(factorial(100)) >> Seq.map(int) >> Seq.sum
timer(p020)
Explanation: Factorial digit sum
Problem 20
n! means n × (n − 1) × ... × 3 × 2 × 1
For example, 10! = 10 × 9 × ... × 3 × 2 × 1 = 3628800,
and the sum of the digits in the number 10! is 3 + 6 + 2 + 8 + 8 + 0 + 0 = 27.
Find the sum of the digits in the number 100!
End of explanation |
549 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SEER Data Analysis
Phase 3
Step1: To begin exploring the data we took a sample of the SEER data, defined the features and dependent variable, printed the top few lines to ensure a successful data ingest, and ran descriptive statistics
Step2: Next we checked our data types and determined the frequency of each class
Step3: We used a histogram to see the distribution of survival time in months
Step4: Next we tried a few visualizations to get a better understanding of the data
Step5: The plot below shows a lot of overlap between the 3 classes, which suggests that classification models may not perform well. However, the plot also shows a clearer separation along the birth year and age at diagnosis features.
Step6: Next we moved to creating survival charts using a larger sample size. We created a class with a "plot_survival" function. For the graph we picked variables that the scientific literature finds significant-- Stage, ER status, PR status, age, and radiation treatment. The second plot compares the frequency of survival for censored and non-censored patients. | Python Code:
%matplotlib inline
import os
import time
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from MasterSeer import MasterSeer
from sklearn.feature_selection import SelectPercentile, f_classif, SelectFromModel
from sklearn.linear_model import LinearRegression
from lifelines.plotting import plot_lifetimes
from lifelines import KaplanMeierFitter
from numpy.random import uniform, exponential
from pandas.tools.plotting import scatter_matrix, radviz, parallel_coordinates
Explanation: SEER Data Analysis
Phase 3: Data Exploration
End of explanation
FEATURES = [
"Birth Year",
"Age at Diagnosis",
"Race",
"Origin",
"laterality",
"Radiation",
"Histrec",
"ER Status",
"PR Status",
"Behanal",
"Stage",
"Numprimes",
"Survival Time",
"Bucket"
]
LABEL_MAP = {
0: "< 60 Months",
1: "60 < months > 120",
2: "> 120 months",
}
# Read the data into a DataFrame
df = pd.read_csv("clean1.csv", sep=',' , header=0, names=FEATURES)
# Convert class labels into text
for k,v in LABEL_MAP.items():
df.ix[df.Bucket == k, 'Bucket'] = v
print(df.head(n=5))
df.describe()
Explanation: To begin exploring the data we took a sample of the SEER data, defined the features and dependent variable, printed the top few lines to ensure a successful data ingest, and ran descriptive statistics
End of explanation
print (df.groupby('Bucket')['Bucket'].count())
Explanation: Next we checked our data types and determined the frequency of each class
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(df['Survival Time'], bins = 10, range = (df['Survival Time'].min(),df['Survival Time'].max()))
plt.title('Survival Time Distribution')
plt.xlabel('Survival Time')
plt.ylabel('Months')
plt.show()
Explanation: We used a histogram to see the distribution of survival time in months
End of explanation
scatter_matrix(df, alpha=0.2, figsize=(12, 12), diagonal='kde')
plt.show()
plt.figure(figsize=(12,12))
parallel_coordinates(df, 'Bucket')
plt.show()
Explanation: Next we tried a few visualizations to get a better understanding of the data
End of explanation
plt.figure(figsize=(12,12))
radviz(df, 'Bucket')
plt.show()
Explanation: The plot below shows a lot of overlap between the 3 classes, which suggests that classification models may not perform well. However, the plot also shows a clearer separation along the birth year and age at diagnosis features.
End of explanation
class ExploreSeer(MasterSeer):
def __init__(self, path=r'./data/', testMode=False, verbose=True, sample_size=5000):
# user supplied parameters
self.testMode = testMode # import one file, 500 records and return
self.verbose = verbose # prints status messages
self.sample_size = sample_size # number of rows to pull for testing
if type(path) != str:
raise TypeError('path must be a string')
if path[-1] != '/':
path += '/' # if path does not end with a backslash, add one
self.path = path
# open connection to the database
super().__init__(path, False, verbose=verbose)
self.db_conn, self.db_cur = super().init_database(False)
def __del__(self):
super().__del__()
def plot_survival(self):
df = super().load_data(col = ['YR_BRTH','AGE_DX','LATERAL','RADIATN','HISTREC','ERSTATUS',
'PRSTATUS','BEHANAL','HST_STGA','NUMPRIMS', 'SRV_TIME_MON',
'SRV_TIME_MON_PA', 'DTH_CLASS', 'O_DTH_CLASS', 'STAT_REC'],
cond = 'SRV_TIME_MON < 1000 AND HST_STGA < 8 AND DTH_CLASS < 9 AND ERSTATUS < 4 AND PRSTATUS < 4',
sample_size = 100000)
kmf = KaplanMeierFitter()
try:
df.RADIATN = df.RADIATN.replace(7, 0)
df = df[df.RADIATN < 7]
except Exception as err:
pass
# 0-negative, 1-borderline,, 2-positive
df = df[df.ERSTATUS != 4]
df = df[df.ERSTATUS != 9]
df.ERSTATUS = df.ERSTATUS.replace(2, 0)
df.ERSTATUS = df.ERSTATUS.replace(1, 2)
df.ERSTATUS = df.ERSTATUS.replace(3, 1)
# 0-negative, 1-borderline,, 2-positive
df = df[df.PRSTATUS != 4]
df = df[df.PRSTATUS != 9]
df.PRSTATUS = df.PRSTATUS.replace(2, 0)
df.PRSTATUS = df.PRSTATUS.replace(1, 2)
df.PRSTATUS = df.PRSTATUS.replace(3, 1)
rad = df.RADIATN > 0
er = df.ERSTATUS > 0
pr = df.PRSTATUS > 0
st0 = df.HST_STGA == 0
st1 = df.HST_STGA == 1
st2 = df.HST_STGA == 2
st4 = df.HST_STGA == 4
age = df.AGE_DX < 50
df['SRV_TIME_YR'] = df['SRV_TIME_MON'] / 12
T = df['SRV_TIME_YR']
#C = (np.logical_or(df.DTH_CLASS == 1, df.O_DTH_CLASS == 1))
C = df.STAT_REC == 4
f, ax = plt.subplots(5, sharex=True, sharey=True)
ax[0].set_title("Lifespans of cancer patients");
# radiation
kmf.fit(T[rad], event_observed=C[rad], label="Radiation")
kmf.plot(ax=ax[0]) #, ci_force_lines=True)
kmf.fit(T[~rad], event_observed=C[~rad], label="No Radiation")
kmf.plot(ax=ax[0]) #, ci_force_lines=True)
# ER Status
kmf.fit(T[er], event_observed=C[er], label="ER Positive")
kmf.plot(ax=ax[1]) #, ci_force_lines=True)
kmf.fit(T[~er], event_observed=C[~er], label="ER Negative")
kmf.plot(ax=ax[1]) #, ci_force_lines=True)
# PR Status
kmf.fit(T[pr], event_observed=C[pr], label="PR Positive")
kmf.plot(ax=ax[2]) #, ci_force_lines=True)
kmf.fit(T[~pr], event_observed=C[~pr], label="PR Negative")
kmf.plot(ax=ax[2]) #, ci_force_lines=True)
# stage
kmf.fit(T[st0], event_observed=C[st0], label="Stage 0")
kmf.plot(ax=ax[3]) #, ci_force_lines=True)
kmf.fit(T[st1], event_observed=C[st1], label="Stage 1")
kmf.plot(ax=ax[3]) #, ci_force_lines=True)
kmf.fit(T[st2], event_observed=C[st2], label="Stage 2")
kmf.plot(ax=ax[3]) #, ci_force_lines=True)
kmf.fit(T[st4], event_observed=C[st4], label="Stage 4")
kmf.plot(ax=ax[3]) #, ci_force_lines=True)
# age
kmf.fit(T[age], event_observed=C[age], label="Age < 50")
kmf.plot(ax=ax[4]) #, ci_force_lines=True)
kmf.fit(T[~age], event_observed=C[~age], label="Age >= 50")
kmf.plot(ax=ax[4]) #, ci_force_lines=True)
ax[0].legend(loc=3,prop={'size':10})
ax[1].legend(loc=3,prop={'size':10})
ax[2].legend(loc=3,prop={'size':10})
ax[3].legend(loc=3,prop={'size':10})
ax[4].legend(loc=3,prop={'size':10})
ax[len(ax)-1].set_xlabel('Survival in years')
f.text(0.04, 0.5, 'Survival %', va='center', rotation='vertical')
plt.tight_layout()
plt.ylim(0,1);
plt.show()
f, ax = plt.subplots(2, sharex=True, sharey=True)
df.hist('SRV_TIME_YR', by=df.STAT_REC != 4, ax=(ax[0], ax[1]))
ax[0].set_title('Histogram of Non Censored Patients')
ax[0].set_ylabel('Number of Patients')
ax[1].set_ylabel('Number of Patients')
ax[1].set_title('Histogram of Censored Patients')
ax[1].set_xlabel('Survival in Years')
plt.show()
return
# second plot of survival
fig, ax = plt.subplots(figsize=(8, 6))
cen = df[df.STAT_REC != 4].SRV_TIME_MON
nc = df[df.STAT_REC == 4].SRV_TIME_MON
cen = cen.sort_values()
nc = nc.sort_values()
ax.hlines([x for x in range(len(nc))] , 0, nc , color = 'b', label='Uncensored');
ax.hlines([x for x in range(len(nc), len(nc)+len(cen))], 0, cen, color = 'r', label='Censored');
ax.set_xlim(left=0);
ax.set_xlabel('Months');
ax.set_ylim(-0.25, len(df) + 0.25);
ax.legend(loc='best');
plt.show()
return
seer = ExploreSeer(sample_size=10000)
seer.plot_survival()
Explanation: Next we moved to creating survival charts using a larger sample size. We created a class with a "plot_survival" function. For the graph we picked variables that the scientific literature finds significant-- Stage, ER status, PR status, age, and radiation treatment. The second plot compares the frequency of survival for censored and non-censored patients.
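A natural follow-up (a sketch added here, not part of the original analysis) is to test whether two of the plotted survival curves differ significantly; lifelines provides a log-rank test for this. The snippet assumes the T, C and rad variables defined inside plot_survival:
from lifelines.statistics import logrank_test
res = logrank_test(T[rad], T[~rad], event_observed_A=C[rad], event_observed_B=C[~rad])
print(res.p_value)  # small p-value suggests the radiation / no-radiation curves differ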
End of explanation |
550 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Evoked data structure
Step1: Creating Evoked objects from Epochs
Step2: Basic visualization of Evoked objects
We can visualize the average evoked response for left-auditory stimuli using
the
Step3: Like the plot() methods for
Step4: To select based on time in seconds, the
Step5: Similarities among the core data structures
Step6: Notice that
Step7: If you want to load only some of the conditions present in a .fif file,
Step8: Above, when we created an
Step9: This can be remedied by either passing a baseline parameter to
Step10: Notice that
Step11: This approach will weight each epoch equally and create a single
Step12: However, this may not always be the case; if for statistical reasons it is
important to average the same number of epochs from different conditions,
you can use
Step13: Keeping track of nave is important for inverse imaging, because it is
used to scale the noise covariance estimate (which in turn affects the
magnitude of estimated source activity). See minimum_norm_estimates
for more information (especially the whitening_and_scaling section).
For this reason, combining | Python Code:
import os
import mne
Explanation: The Evoked data structure: evoked/averaged data
This tutorial covers the basics of creating and working with :term:evoked
data. It introduces the :class:~mne.Evoked data structure in detail,
including how to load, query, subselect, export, and plot data from an
:class:~mne.Evoked object. For info on creating an :class:~mne.Evoked
object from (possibly simulated) data in a :class:NumPy array
<numpy.ndarray>, see tut_creating_data_structures.
:depth: 2
As usual we'll start by importing the modules we need:
End of explanation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
events = mne.find_events(raw, stim_channel='STI 014')
# we'll skip the "face" and "buttonpress" conditions, to save memory:
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4}
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,
preload=True)
evoked = epochs['auditory/left'].average()
del raw # reduce memory usage
Explanation: Creating Evoked objects from Epochs
:class:~mne.Evoked objects typically store an EEG or MEG signal that has
been averaged over multiple :term:epochs, which is a common technique for
estimating stimulus-evoked activity. The data in an :class:~mne.Evoked
object are stored in an :class:array <numpy.ndarray> of shape
(n_channels, n_times) (in contrast to an :class:~mne.Epochs object,
which stores data of shape (n_epochs, n_channels, n_times)). Thus to
create an :class:~mne.Evoked object, we'll start by epoching some raw data,
and then averaging together all the epochs from one condition:
End of explanation
evoked.plot()
Explanation: Basic visualization of Evoked objects
We can visualize the average evoked response for left-auditory stimuli using
the :meth:~mne.Evoked.plot method, which yields a butterfly plot of each
channel type:
End of explanation
print(evoked.data[:2, :3]) # first 2 channels, first 3 timepoints
Explanation: Like the plot() methods for :meth:Raw <mne.io.Raw.plot> and
:meth:Epochs <mne.Epochs.plot> objects,
:meth:evoked.plot() <mne.Evoked.plot> has many parameters for customizing
the plot output, such as color-coding channel traces by scalp location, or
plotting the :term:global field power <GFP> alongside the channel traces.
See tut-visualize-evoked for more information about visualizing
:class:~mne.Evoked objects.
Subselecting Evoked data
.. sidebar:: Evokeds are not memory-mapped
:class:~mne.Evoked objects use a :attr:~mne.Evoked.data attribute
rather than a :meth:~mne.Epochs.get_data method; this reflects the fact
that the data in :class:~mne.Evoked objects are always loaded into
memory, never memory-mapped_ from their location on disk (because they
are typically much smaller than :class:~mne.io.Raw or
:class:~mne.Epochs objects).
Unlike :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects do not support selection by square-bracket
indexing. Instead, data can be subselected by indexing the
:attr:~mne.Evoked.data attribute:
End of explanation
evoked_eeg = evoked.copy().pick_types(meg=False, eeg=True)
print(evoked_eeg.ch_names)
new_order = ['EEG 002', 'MEG 2521', 'EEG 003']
evoked_subset = evoked.copy().reorder_channels(new_order)
print(evoked_subset.ch_names)
Explanation: To select based on time in seconds, the :meth:~mne.Evoked.time_as_index
method can be useful, although beware that depending on the sampling
frequency, the number of samples in a span of given duration may not always
be the same (see the time-as-index section of the
tutorial about Raw data <tut-raw-class> for details).
Selecting, dropping, and reordering channels
By default, when creating :class:~mne.Evoked data from an
:class:~mne.Epochs object, only the "data" channels will be retained:
eog, ecg, stim, and misc channel types will be dropped. You
can control which channel types are retained via the picks parameter of
:meth:epochs.average() <mne.Epochs.average>, by passing 'all' to
retain all channels, or by passing a list of integers, channel names, or
channel types. See the documentation of :meth:~mne.Epochs.average for
details.
If you've already created the :class:~mne.Evoked object, you can use the
:meth:~mne.Evoked.pick, :meth:~mne.Evoked.pick_channels,
:meth:~mne.Evoked.pick_types, and :meth:~mne.Evoked.drop_channels methods
to modify which channels are included in an :class:~mne.Evoked object.
You can also use :meth:~mne.Evoked.reorder_channels for this purpose; any
channel names not provided to :meth:~mne.Evoked.reorder_channels will be
dropped. Note that channel selection methods modify the object in-place, so
in interactive/exploratory sessions you may want to create a
:meth:~mne.Evoked.copy first.
End of explanation
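A brief sketch of the other selection helpers mentioned above (the channel name and time window are arbitrary examples):
# crop a copy to the post-stimulus window, then drop one channel
evoked_post = evoked.copy().crop(tmin=0., tmax=0.5)
evoked_post.drop_channels(['EEG 001'])
# index of the sample closest to 100 ms
print(evoked.time_as_index(0.1))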
sample_data_evk_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-ave.fif')
evokeds_list = mne.read_evokeds(sample_data_evk_file, verbose=False)
print(evokeds_list)
print(type(evokeds_list))
Explanation: Similarities among the core data structures
:class:~mne.Evoked objects have many similarities with :class:~mne.io.Raw
and :class:~mne.Epochs objects, including:
They can be loaded from and saved to disk in .fif format, and their
data can be exported to a :class:NumPy array <numpy.ndarray> (but through
the :attr:~mne.Evoked.data attribute, not through a get_data()
method). :class:Pandas DataFrame <pandas.DataFrame> export is also
available through the :meth:~mne.Evoked.to_data_frame method.
You can change the name or type of a channel using
:meth:evoked.rename_channels() <mne.Evoked.rename_channels> or
:meth:evoked.set_channel_types() <mne.Evoked.set_channel_types>.
Both methods take :class:dictionaries <dict> where the keys are existing
channel names, and the values are the new name (or type) for that channel.
Existing channels that are not in the dictionary will be unchanged.
:term:SSP projector <projector> manipulation is possible through
:meth:~mne.Evoked.add_proj, :meth:~mne.Evoked.del_proj, and
:meth:~mne.Evoked.plot_projs_topomap methods, and the
:attr:~mne.Evoked.proj attribute. See tut-artifact-ssp for more
information on SSP.
Like :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects have :meth:~mne.Evoked.copy,
:meth:~mne.Evoked.crop, :meth:~mne.Evoked.time_as_index,
:meth:~mne.Evoked.filter, and :meth:~mne.Evoked.resample methods.
Like :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects have evoked.times,
:attr:evoked.ch_names <mne.Evoked.ch_names>, and :class:info <mne.Info>
attributes.
Loading and saving Evoked data
Single :class:~mne.Evoked objects can be saved to disk with the
:meth:evoked.save() <mne.Evoked.save> method. One difference between
:class:~mne.Evoked objects and the other data structures is that multiple
:class:~mne.Evoked objects can be saved into a single .fif file, using
:func:mne.write_evokeds. The example data <sample-dataset>
includes just such a .fif file: the data have already been epoched and
averaged, and the file contains separate :class:~mne.Evoked objects for
each experimental condition:
End of explanation
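For completeness, a minimal sketch of the renaming and saving methods listed above (the new channel name and file names are arbitrary examples):
evoked_tmp = evoked.copy()
evoked_tmp.rename_channels({'EEG 001': 'EEG_01'})
evoked_tmp.save('left_auditory-ave.fif')                    # one Evoked object per file
mne.write_evokeds('all_conditions-ave.fif', evokeds_list)   # several Evoked objects in one file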
for evok in evokeds_list:
print(evok.comment)
Explanation: Notice that :func:mne.read_evokeds returned a :class:list of
:class:~mne.Evoked objects, and each one has an evoked.comment
attribute describing the experimental condition that was averaged to
generate the estimate:
End of explanation
right_vis = mne.read_evokeds(sample_data_evk_file, condition='Right visual')
print(right_vis)
print(type(right_vis))
Explanation: If you want to load only some of the conditions present in a .fif file,
:func:~mne.read_evokeds has a condition parameter, which takes either a
string (matched against the comment attribute of the evoked objects on disk),
or an integer selecting the :class:~mne.Evoked object based on the order
it's stored in the file. Passing lists of integers or strings is also
possible. If only one object is selected, the :class:~mne.Evoked object
will be returned directly (rather than a length-one list containing it):
End of explanation
evokeds_list[0].plot(picks='eeg')
Explanation: Above, when we created an :class:~mne.Evoked object by averaging epochs,
baseline correction was applied by default when we extracted epochs from the
:class:~mne.io.Raw object (the default baseline period is (None, 0),
which ensured zero mean for times before the stimulus event). In contrast, if
we plot the first :class:~mne.Evoked object in the list that was loaded
from disk, we'll see that the data have not been baseline-corrected:
End of explanation
evokeds_list[0].apply_baseline((None, 0))
evokeds_list[0].plot(picks='eeg')
Explanation: This can be remedied by either passing a baseline parameter to
:func:mne.read_evokeds, or by applying baseline correction after loading,
as shown here:
End of explanation
left_right_aud = epochs['auditory'].average()
print(left_right_aud)
Explanation: Notice that :meth:~mne.Evoked.apply_baseline operated in-place. Similarly,
:class:~mne.Evoked objects may have been saved to disk with or without
:term:projectors <projector> applied; you can pass proj=True to the
:func:~mne.read_evokeds function, or use the :meth:~mne.Evoked.apply_proj
method after loading.
Combining Evoked objects
One way to pool data across multiple conditions when estimating evoked
responses is to do so prior to averaging (recall that MNE-Python can select
based on partial matching of /-separated epoch labels; see
tut-section-subselect-epochs for more info):
End of explanation
left_aud = epochs['auditory/left'].average()
right_aud = epochs['auditory/right'].average()
print([evok.nave for evok in (left_aud, right_aud)])
Explanation: This approach will weight each epoch equally and create a single
:class:~mne.Evoked object. Notice that the printed representation includes
(average, N=145), indicating that the :class:~mne.Evoked object was
created by averaging across 145 epochs. In this case, the event types were
fairly close in number:
End of explanation
left_right_aud = mne.combine_evoked([left_aud, right_aud], weights='nave')
assert left_right_aud.nave == left_aud.nave + right_aud.nave
Explanation: However, this may not always be the case; if for statistical reasons it is
important to average the same number of epochs from different conditions,
you can use :meth:~mne.Epochs.equalize_event_counts prior to averaging.
Another approach to pooling across conditions is to create separate
:class:~mne.Evoked objects for each condition, and combine them afterward.
This can be accomplished by the function :func:mne.combine_evoked, which
computes a weighted sum of the :class:~mne.Evoked objects given to it. The
weights can be manually specified as a list or array of float values, or can
be specified using the keyword 'equal' (weight each :class:~mne.Evoked
object by $\frac{1}{N}$, where $N$ is the number of
:class:~mne.Evoked objects given) or the keyword 'nave' (weight each
:class:~mne.Evoked object by the number of epochs that were averaged
together to create it):
End of explanation
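One more common use of the same function (a sketch; the sign convention is my assumption): passing explicit weights yields a difference wave, for example left minus right auditory:
aud_diff = mne.combine_evoked([left_aud, right_aud], weights=[1, -1])
aud_diff.plot_joint()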
for ix, trial in enumerate(epochs[:3].iter_evoked()):
channel, latency, value = trial.get_peak(ch_type='eeg',
return_amplitude=True)
latency = int(round(latency * 1e3)) # convert to milliseconds
value = int(round(value * 1e6)) # convert to µV
print('Trial {}: peak of {} µV at {} ms in channel {}'
.format(ix, value, latency, channel))
Explanation: Keeping track of nave is important for inverse imaging, because it is
used to scale the noise covariance estimate (which in turn affects the
magnitude of estimated source activity). See minimum_norm_estimates
for more information (especially the whitening_and_scaling section).
For this reason, combining :class:~mne.Evoked objects with either
weights='equal' or by providing custom numeric weights should usually
not be done if you intend to perform inverse imaging on the resulting
:class:~mne.Evoked object.
Other uses of Evoked objects
Although the most common use of :class:~mne.Evoked objects is to store
averages of epoched data, there are a couple other uses worth noting here.
First, the method :meth:epochs.standard_error() <mne.Epochs.standard_error>
will create an :class:~mne.Evoked object (just like
:meth:epochs.average() <mne.Epochs.average> does), but the data in the
:class:~mne.Evoked object will be the standard error across epochs instead
of the average. To indicate this difference, :class:~mne.Evoked objects
have a :attr:~mne.Evoked.kind attribute that takes values 'average' or
'standard error' as appropriate.
Another use of :class:~mne.Evoked objects is to represent a single trial
or epoch of data, usually when looping through epochs. This can be easily
accomplished with the :meth:epochs.iter_evoked() <mne.Epochs.iter_evoked>
method, and can be useful for applications where you want to do something
that is only possible for :class:~mne.Evoked objects. For example, here
we use the :meth:~mne.Evoked.get_peak method (which isn't available for
:class:~mne.Epochs objects) to get the peak response in each trial:
End of explanation |
551 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ABU量化系统使用文档
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align
Step1: 受限于沙盒中数据限制,本节示例的相关性分析只限制在abupy内置沙盒数据中,首先将内置沙盒中美股,A股,港股, 比特币,莱特币,期货市场中的symbol都列出来
Step2: 如上所示abupy内置沙盒中的symbol共75个,下面组装这75个symbol形成一个三维panel数据,摘取所有数据的p_change(涨跌幅)列形成二维dataframe对象,如下所示:
Step3: 1. 相关相似度的度量
机器学习与比特币示例章节通过比特币的数据特征组成了学习数据,通过训练数据来决策是否有大波动,对于机器学习来说获取更多特征,更多数据始终是优化的最重要且最有效的手段,如果你需要从其它市场找到和比特币有相关性的品种,变换形成新的特征,最简单的方法可以使用corr_xy接口:
Step4: 上面通过默认参数获取601766与比特币的相关度,数值为-0.04,可以通过关键子参数similar_type来切换相关的计算方法,如下使用E_CORE_TYPE_SPERM:
Step5: 下面使用E_CORE_TYPE_SIGN,sign的相关度度量,只关注符号,不关心具体数据,即比如今天比特币涨,那只要601766今天也涨就正相关,不会理会具体涨多少,实际上这种度量方式更适合交易类型产品:
Step6: 下面使用时间加权相关E_CORE_TYPE_ROLLING,进行计算,即时间上越早的相关值所占权重越小,时间上越晚的相关值权重越大:
Step7: 上面的corr_xy的接口参数只能是两个品种的相关度度量,如果想要从上面net_cg_df的75种品种中找到与比特币最相关的品种很麻烦,使用corr_matrix接口可以一次输出计算:
Step8: corr_matrix接口同样支持关键字参数similar_type,如下使用E_CORE_TYPE_SIGN:
Step9: 上面输出的都是正相关的top10,发现除了ltc(莱特币)外,其它的相关度也都在0附近,都没有超过0.1,即都基本无关,实际上不管是正相关还是负相关都成为可以有用的素材,下面看一下时间加权相关的top10负相关的品种:
Step10: 如上所示负相关的数值有几个已经上到0.1以上了,但是由于沙盒数据中的交易品种所限,并没有找到适合构成特征(强正相关,或者强负相关)的品种。
备注:之后的章节在使用非沙盒数据的前提下,会编写在完整的各个全市场中寻找与比特币最强正相关,最强负相关的示例,以及通过它们构成特征,实现策略的优化
2. 距离的度量与相似度
与相似度度量相反的是两个矢量之间距离的度量,如下示例在abupy中的距离度量接口使用:
Step11: 上面的接口度量了比特币和莱特币之间的欧式距离(L2范数),下面度量曼哈顿距离(L1范数):
Step12: 下面度量余弦距离:
Step13: 上面接口cosine_distances_xy度量了比特币和莱特币之间的余弦距离,0.23为距离的度量值,距离和相似度之间可以通过关键字参数to_similar=True将余弦距离直接转换为余弦相似度,如下所示:
Step14: 和相似度接口类似,xy接口只能两个直接度量,通过matrix接口可实现矩阵的度量,如下所示:
Step15: 可以看到与比特币距离最短的是莱特币,同时可以通to_similar=True将距离度量值转换为相似度,如下所示:
Step16: 上面度量了欧式距离(L2范数),下面度量曼哈顿距离(L1范数)的matrix接口:
Step17: 上面度量结果的中位数,值为0.37,很高,因为L1范数和L2范数针对相似度的度量只是相对的,只在数据范围内,数据之间进行数据比较统计的意义不大,如上ltc的值和WHO差不多大,余弦距离与它们不同,如下示例,可以看到ltc与usTSLA数值差别很大:
Step18: 备注:与上述接口的使用类似,通过ABuScalerUtil.scaler_xy针对两组矢量进行标准化,通过ABuScalerUtil.scaler_matrix针对矩阵进行标准化。
3. 相似相关接口的应用
下面示例在abupy中相关相似上层封装的接口的使用,如下通过将市场设置为通过E_MARKET_TARGET_CN,获取A股全市场数据与600036进行对比,可视化最相关的top10个:
备注:由于使用的是沙盒数据,所以市场中本身数据就只有几个,如果在非沙盒环境下,将从市场中几千个symbol中度量相关性:
Step19: 上面的接口使用的是find_similar_with_cnt,参数252为天数,即度量最近一年的相关性,下面切换市场为港股市场,接口使用find_similar_with_se参数通过start,end进行设置时间,关键字参数corr_type代表度量相关算法,使用查看hk02333与港股市场的相关性:
Step20: 下面的市场继续在港股市场,但参数symbol使用比特币,即度量的结果为比特币与港股市场的相关性:
Step21: 下面的市场继续在港股市场,但接口使用find_similar_with_folds,参数参数n_folds代表年数,symbol使用比特币,corr_type使用时间加权相关度量,如下所示:
Step22: 上面的接口通过corr_type来切好相关度度量算法,如果想使用多种度量算法,同时度量,可以使用calc_similar接口,使用如下所示:
Step23: 上面使用E_CORE_TYPE_PEARS和E_CORE_TYPE_SPERM共同做为度量算法,下面使用+-号相关和时间加权相关,如下所示:
Step24: 观察上面的输出值都很高,通过corr_xy度量两支股票的相关度只有0.458,如下所示:
Step25: 实际上ABuTLSimilar.calc_similar()度量的是两支股票相对整个市场的相关性评级,它不关心某一个股票具体相关性的数值的大小,calc_similar(a, b) 的工作流程如下:
计算a与市场中所有股票的相关性
将所有相关性进行rank排序
查询股票b在rank序列中的位置,此位置值即为结果
即ABuTLSimilar.calc_similar返回值由0至1,这样的好处是通过计算600036与601766在所有股票中的相似度水平,会更全局客观的体现相关性。
下面calc_similar的show=True进行可视化,它可视化了两支股票的价格距离,如下所示:
Step26: 4. 自然相关性
上面说的相关性都是通过走势数值计算度量出的相关性,在股票市场中存在另一种自然相关性,即行业分类的相关。
如下示例abupy中行业分类的相关api,首先从市场中找到与600036处在一个行业中的所有股票,返回dataframe对象,返回中有公司的一些基本信息和财务数据:
Step27: 自然相关性来自人为的对行业进行分类,可以通过industries_factorize获取某个市场中所有行业分类信息,如下获取港股市场中所有行业:
Step28: 如果你对上面行业分类中的序号6
Step29: 如果你在回测中想使用比如上面两个分类中的所有股票进行回测,可以使用如下接口获取行业中所有股票symbol序列:
Step30: 如果已经有了想要查询的具体行业目标,比如想要从A股市场中模糊查询医学相关的股票,可以使用如下接口:
Step31: 如果从A股市场中模糊查询医学相关所有股票进行回测,可以使用如下接口获取行业中所有股票symbol序列: | Python Code:
# Base library imports
from __future__ import print_function
from __future__ import division
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import os
import sys
# insert 0 so that only the github version is used, avoiding version mismatches with a pip-installed abupy
sys.path.insert(0, os.path.abspath('../'))
import abupy
# use sandbox data so the data environment matches the book
abupy.env.enable_example_env_ipython()
from abupy import abu, ml, nd, tl, ECoreCorrType, ABuSymbolPd, ABuScalerUtil, AbuFuturesCn, ABuCorrcoef
from abupy import find_similar_with_cnt, find_similar_with_se, find_similar_with_folds
from abupy import EMarketTargetType, ABuStatsUtil, ABuSimilar, ABuIndustries
Explanation: ABU Quantitative Trading System Documentation
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align:middle;padding:10px 20px;"><font size="6" color="black"><b>Section 14: Applying quantitative correlation analysis</b></font>
</center>
Author: Abu
Copyright Abu Quant. Reproduction without permission is prohibited.
abu quant system github address (stars welcome)
The ipython notebook for this section
Correlation analysis between trading targets is a very important tool in quantitative trading. This section demonstrates the correlation-analysis module in abupy:
First import the modules used in this section:
End of explanation
us_choice_symbols = ['usTSLA', 'usNOAH', 'usSFUN', 'usBIDU', 'usAAPL', 'usGOOG', 'usWUBA', 'usVIPS', 'us.IXIC']
cn_choice_symbols = ['002230', '300104', '300059', '601766', '600085', '600036', '600809', '000002', '002594', '002739', 'sh000001']
hk_choice_symbols = ['hk03333', 'hk00700', 'hk02333', 'hk01359', 'hk00656', 'hk03888', 'hk02318', 'hkHSI']
tc_choice_symbols = ['btc', 'ltc']
# futures-market symbols are read directly from AbuFuturesCn().symbol
ft_choice_symbols = AbuFuturesCn().symbol.tolist()
all_choice_symbols = us_choice_symbols + cn_choice_symbols + hk_choice_symbols + tc_choice_symbols +ft_choice_symbols
len(all_choice_symbols)
Explanation: Since we are limited to the sandbox data, the correlation analysis in this section is restricted to the data built into the abupy sandbox. First, list the symbols from the US, A-share, Hong Kong, Bitcoin, Litecoin and futures markets included in the sandbox:
End of explanation
panel = ABuSymbolPd.make_kl_df(all_choice_symbols, start='2015-07-27', end='2016-07-26',
show_progress=True)
# swap the panel axes so a single column of every financial time series can be fetched conveniently
panel = panel.swapaxes('items', 'minor')
net_cg_df = panel['p_change'].fillna(value=0)
net_cg_df.head()
Explanation: As shown above, the abupy sandbox contains 75 symbols. Next, assemble these 75 symbols into a three-dimensional panel and extract the p_change (percentage change) column of every series to form a two-dimensional dataframe, as shown below:
End of explanation
ABuCorrcoef.corr_xy(net_cg_df.btc, net_cg_df['601766'], similar_type=ECoreCorrType.E_CORE_TYPE_PEARS)
Explanation: 1. Measuring correlation similarity
The machine-learning-with-Bitcoin chapter built its training data from Bitcoin data features and used it to decide whether a large move was coming. For machine learning, getting more features and more data is always the most important and most effective way to improve. If you want to find instruments in other markets that are correlated with Bitcoin and turn them into new features, the simplest way is the corr_xy interface:
End of explanation
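As a quick cross-check of the default (Pearson) measure, the same number can be computed with plain numpy; this is a sketch and should agree with corr_xy up to rounding:
print(np.corrcoef(net_cg_df.btc.values, net_cg_df['601766'].values)[0, 1])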
ABuCorrcoef.corr_xy(net_cg_df.btc, net_cg_df['601766'], similar_type=ECoreCorrType.E_CORE_TYPE_SPERM)
Explanation: Above, the correlation between 601766 and Bitcoin obtained with the default parameters is -0.04. The calculation method can be switched with the keyword argument similar_type; here E_CORE_TYPE_SPERM is used:
End of explanation
ABuCorrcoef.corr_xy(net_cg_df.btc, net_cg_df['601766'], similar_type=ECoreCorrType.E_CORE_TYPE_SIGN)
Explanation: Next use E_CORE_TYPE_SIGN, a sign-based correlation that only looks at the direction of the move, not its size: if Bitcoin rises today, 601766 only needs to rise today as well to count as positively correlated, no matter by how much. In practice this measure is better suited to traded instruments:
End of explanation
ABuCorrcoef.corr_xy(net_cg_df.btc, net_cg_df['601766'], similar_type=ECoreCorrType.E_CORE_TYPE_ROLLING)
Explanation: Next compute the time-weighted correlation E_CORE_TYPE_ROLLING, in which earlier correlation values get a smaller weight and more recent values a larger weight:
End of explanation
corr_df = ABuCorrcoef.corr_matrix(net_cg_df, )
corr_df.btc.sort_values()[::-1][:10]
Explanation: The corr_xy interface only measures the correlation between two instruments. Finding the instrument most correlated with Bitcoin among the 75 instruments in net_cg_df that way would be tedious; the corr_matrix interface computes everything in one pass:
End of explanation
corr_df = ABuCorrcoef.corr_matrix(net_cg_df, similar_type=ECoreCorrType.E_CORE_TYPE_SIGN)
corr_df.btc.sort_values()[::-1][:10]
Explanation: The corr_matrix interface also supports the keyword argument similar_type; here E_CORE_TYPE_SIGN is used:
End of explanation
corr_df = ABuCorrcoef.corr_matrix(net_cg_df, ECoreCorrType.E_CORE_TYPE_ROLLING)
corr_df.btc.sort_values()[:10]
Explanation: The outputs above are the top 10 positive correlations. Apart from ltc (Litecoin) they are all close to 0 and none exceeds 0.1, i.e. essentially uncorrelated. Both positive and negative correlations can be useful material, so next look at the top 10 negatively correlated instruments under the time-weighted correlation:
End of explanation
ABuStatsUtil.euclidean_distance_xy(net_cg_df.btc, net_cg_df.ltc)
Explanation: As shown above, several of the negative correlations now exceed 0.1 in magnitude, but because of the limited set of instruments in the sandbox data, no instrument suitable for building features (strongly positively or negatively correlated) was found.
Note: later chapters, using non-sandbox data, will search the complete markets for the instruments most strongly positively and negatively correlated with Bitcoin, build features from them, and use them to improve the strategy.
2. Distance measures and similarity
The opposite of a similarity measure is a distance measure between two vectors. The following demonstrates the distance interfaces in abupy:
End of explanation
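For reference, a minimal numpy sketch of what these distance calls compute on the two return series (assuming the abupy interfaces follow the usual definitions):
diff = net_cg_df.btc.values - net_cg_df.ltc.values
print(np.linalg.norm(diff))   # Euclidean distance (L2 norm)
print(np.abs(diff).sum())     # Manhattan distance (L1 norm)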
ABuStatsUtil.manhattan_distances_xy(net_cg_df.btc, net_cg_df.ltc)
Explanation: The interface above measures the Euclidean distance (L2 norm) between Bitcoin and Litecoin; next measure the Manhattan distance (L1 norm):
End of explanation
ABuStatsUtil.cosine_distances_xy(net_cg_df.btc, net_cg_df.ltc)
Explanation: Next measure the cosine distance:
End of explanation
ABuStatsUtil.cosine_distances_xy(net_cg_df.btc, net_cg_df.ltc, to_similar=True)
Explanation: The cosine_distances_xy interface above measures the cosine distance between Bitcoin and Litecoin; 0.23 is the distance value. The keyword argument to_similar=True converts the cosine distance directly into a cosine similarity, as shown below:
End of explanation
euclidean_df = ABuStatsUtil.euclidean_distance_matrix(net_cg_df)
euclidean_df.btc.sort_values()[:10]
Explanation: As with the similarity interfaces, the xy interfaces only compare two series directly; the matrix interfaces measure the whole matrix, as shown below:
End of explanation
manhattan_df = ABuStatsUtil.euclidean_distance_matrix(net_cg_df, to_similar=True)
manhattan_df.btc.sort_values()[::-1][:10]
Explanation: As you can see, the instrument closest to Bitcoin is Litecoin. to_similar=True likewise converts the distance values into similarities, as shown below:
End of explanation
manhattan_df = ABuStatsUtil.manhattan_distance_matrix(net_cg_df, to_similar=True)
manhattan_df.btc.median()
Explanation: The above measured the Euclidean distance (L2 norm); next, the matrix interface for the Manhattan distance (L1 norm):
End of explanation
cosine_df = ABuStatsUtil.cosine_distance_matrix(net_cg_df, to_similar=True)
cosine_df.btc.sort_values()[::-1][:10]
Explanation: The median of the results above is 0.37, which is high. L1- and L2-norm based similarities are only relative: they are only meaningful within the data range, and comparing them statistically across instruments is of limited value — above, the value for ltc is about as large as the one for WHO. Cosine distance behaves differently, as the following example shows: the values for ltc and usTSLA differ a lot:
End of explanation
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
similar_dict = find_similar_with_cnt('600036', 252)
Explanation: Note: similarly to the interfaces above, ABuScalerUtil.scaler_xy standardizes two vectors and ABuScalerUtil.scaler_matrix standardizes a matrix.
3. Applying the similarity/correlation interfaces
The following demonstrates abupy's higher-level correlation/similarity interfaces. With the market set to E_MARKET_TARGET_CN, the whole A-share market is compared against 600036 and the top 10 most correlated symbols are visualized:
Note: because sandbox data is used, the market itself only contains a handful of symbols; in a non-sandbox environment the correlation would be measured against several thousand symbols:
End of explanation
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_HK
_ = find_similar_with_se('hk02333', start='2016-07-26', end='2017-07-26', corr_type=ECoreCorrType.E_CORE_TYPE_PEARS)
Explanation: The interface used above is find_similar_with_cnt, whose parameter 252 is a number of trading days, i.e. roughly the last year. Next switch to the Hong Kong market and use find_similar_with_se, which sets the period via start and end; the keyword argument corr_type selects the correlation algorithm. Check the correlation of hk02333 against the Hong Kong market:
End of explanation
_ = find_similar_with_cnt('btc', 252, corr_type=ECoreCorrType.E_CORE_TYPE_PEARS)
Explanation: The market stays Hong Kong, but the symbol parameter is now Bitcoin, so the result is the correlation of Bitcoin with the Hong Kong market:
End of explanation
_ = find_similar_with_folds('btc', n_folds=1, corr_type=ECoreCorrType.E_CORE_TYPE_ROLLING)
Explanation: Still in the Hong Kong market, but now using find_similar_with_folds, whose n_folds parameter is the number of years; the symbol is Bitcoin and corr_type uses the time-weighted correlation, as shown below:
End of explanation
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
rank_score, _ = tl.similar.calc_similar('600036', '601766', corr_jobs=(ECoreCorrType.E_CORE_TYPE_PEARS,
ECoreCorrType.E_CORE_TYPE_SPERM), show=False)
rank_score
Explanation: The interfaces above switch the correlation algorithm via corr_type. To measure with several algorithms at once, use the calc_similar interface, as shown below:
End of explanation
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
rank_score, _ = tl.similar.calc_similar('600036', '601766', corr_jobs=(ECoreCorrType.E_CORE_TYPE_ROLLING, ECoreCorrType.E_CORE_TYPE_ROLLING), show=False)
rank_score
Explanation: The above used E_CORE_TYPE_PEARS and E_CORE_TYPE_SPERM together as the measures; next use the +/- sign correlation and the time-weighted correlation, as shown below:
End of explanation
ABuCorrcoef.corr_xy(net_cg_df['600036'], net_cg_df['601766'])
Explanation: Note that the outputs above are all very high, whereas the correlation between the two stocks measured with corr_xy is only 0.458, as shown below:
End of explanation
rank_score, _ = tl.similar.calc_similar('600036', '601766', show=True)
Explanation: In fact ABuTLSimilar.calc_similar() measures the correlation ranking of two stocks relative to the whole market; it does not care about the absolute size of any particular correlation value. The workflow of calc_similar(a, b) is:
Compute the correlation of a with every stock in the market
Rank all of these correlations
Look up the position of stock b in that ranking; this position is the result
The value returned by ABuTLSimilar.calc_similar therefore lies between 0 and 1. The benefit is that by computing where 601766 stands among all stocks in terms of similarity to 600036, the correlation is reflected in a more global and objective way.
Next run calc_similar with show=True, which visualizes the price distance between the two stocks, as shown below:
End of explanation
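A small pandas sketch of the ranking idea described above (an illustration of the workflow, not abupy's actual implementation):
corr_to_600036 = ABuCorrcoef.corr_matrix(net_cg_df)['600036']
# percentile position of 601766 in the ranking, between 0 and 1
print(corr_to_600036.rank(pct=True)['601766'])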
pd.options.display.max_rows = 100
pd.options.display.max_columns = 100
# find all stocks in the same industry as 600036; returns a dataframe
industries, _ = ABuIndustries.industries_df('600036')
# show the last 5 stocks
industries.tail()
Explanation: 4. Natural correlation
The correlations discussed so far are all computed from price movements. In the stock market there is another, natural, kind of correlation: correlation by industry classification.
The following demonstrates abupy's industry-classification APIs. First find all stocks in the same industry as 600036; a dataframe with some basic company information and financial data is returned:
End of explanation
ABuIndustries.industries_factorize(market=EMarketTargetType.E_MARKET_TARGET_HK)
Explanation: Natural correlation comes from the man-made classification of industries. industries_factorize returns all industry categories of a market; here for the Hong Kong market:
End of explanation
# category 6: casinos & gambling, category 9: China real estate; show only the last 5 stocks
ABuIndustries.query_factorize_industry_df([6, 9], market=EMarketTargetType.E_MARKET_TARGET_HK).tail()
Explanation: If you are interested in, say, category 6 (casinos & gambling) and category 9 (China real estate) from the industry list above, you can query all stocks in those two industries as follows:
End of explanation
ABuIndustries.query_factorize_industry_symbol([6, 9], market=EMarketTargetType.E_MARKET_TARGET_HK)[:5]
Explanation: If you want to backtest with all the stocks in the two categories above, the following interface returns the symbol sequence of all stocks in those industries:
End of explanation
# fuzzy-search the A-share market for medicine-related stocks and show the first 5
ABuIndustries.query_match_industries_df('医学', market=EMarketTargetType.E_MARKET_TARGET_CN).head()
Explanation: If you already have a concrete industry in mind, for example a fuzzy search of the A-share market for medicine-related stocks, use the following interface:
End of explanation
ABuIndustries.query_match_industries_symbol('医学', market=EMarketTargetType.E_MARKET_TARGET_CN)[:5]
Explanation: To backtest with all the medicine-related stocks found by the fuzzy search of the A-share market, the following interface returns the symbol sequence of all stocks in those industries:
End of explanation |
552 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.e - TD noté, 21 février 2017
Solution du TD noté, celui-ci présente un algorithme pour calculer les coefficients d'une régression quantile et par extension d'une médiane dans un espace à plusieurs dimensions.
Step1: Précision
Step2: Q2
La médiane d'un ensemble de points $\left{X_1, ..., X_n\right}$ est une valeur $X_M$ telle que
Step3: Q3
Lorsque le nombre de points est pair, la médiane peut être n'importe quelle valeur dans un intervalle. Modifier votre fonction de façon à ce que la fonction précédente retourne le milieu de la fonction.
Step4: Q4
Pour un ensemble de points $E=\left{X_1, ..., X_n\right}$, on considère la fonction suivante
Step5: Q6
Ecrire une fonction qui transforme un vecteur en une matrice diagonale.
Step6: Q7
On considère maintenant que chaque observation est pondérée par un poids $w_i$. On veut maintenant trouver le vecteur $\beta$ qui minimise
Step7: Q8
Ecrire une fonction qui calcule les quantités suivantes (fonctions maximum, reciprocal).
$$z_i = \frac{1}{\max\left( \delta, \left|y_i - X_i \beta\right|\right)}$$
Step8: Q9
On souhaite coder l'algorithme suivant
Step9: Q10
Step10: La régression linéaire égale la moyenne, l'algorithme s'approche de la médiane.
Quelques explications et démonstrations
Cet énoncé est inspiré de Iteratively reweighted least squares. Cet algorithme permet notamment d'étendre la notion de médiane à des espaces vectoriels de plusieurs dimensions. On peut détermine un point $X_M$ qui minimise la quantité
Step11: Par défaut, numpy considère un vecteur de taille (3,) comme un vecteur ligne (3,1). Donc l'expression suivante va marcher
Step12: Ou | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.e - Graded exercise (TD noté), 21 February 2017
Solution of the graded exercise. It presents an algorithm to compute the coefficients of a quantile regression and, by extension, of a median in a multi-dimensional space.
End of explanation
import random
def ensemble_aleatoire(n):
res = [random.randint(0, 100) for i in range(n)]
res[0] = 1000
return res
ens = ensemble_aleatoire(10)
ens
Explanation: Notation: throughout the exercise, the transpose of a matrix is written $X' = X^{T}$. Most of the time $X$ and $Y$ denote column vectors, $\beta$ denotes a row vector and $W$ a diagonal matrix.
Exercise 1
Q1
Using the random module, generate a set of random points.
End of explanation
def mediane(ensemble):
tri = list(sorted(ensemble))
return tri[len(tri)//2]
mediane(ens)
Explanation: Q2
The median of a set of points $\left\{X_1, ..., X_n\right\}$ is a value $X_M$ such that:
$$\sum_i \mathbb{1}_{X_i < X_M} = \sum_i \mathbb{1}_{X_i > X_M}$$
In other words, there are as many values below $X_M$ as above it. It is obtained by sorting the elements in increasing order and taking the middle one.
End of explanation
def mediane(ensemble):
tri = list(sorted(ensemble))
if len(tri) % 2 == 0:
m = len(tri)//2
return (tri[m] + tri[m-1]) / 2
else:
return tri[len(tri)//2]
mediane(ens)
Explanation: Q3
When the number of points is even, the median can be any value in an interval. Modify your function so that it returns the middle of that interval.
End of explanation
from numpy.linalg import inv
def regression_lineaire(X, Y):
t = X.T
return inv(t @ X) @ t @ Y
import numpy
X = numpy.array(ens).reshape((len(ens), 1))
regression_lineaire(X, X+1) # a quick check that the returned value is not absurd
Explanation: Q4
For a set of points $E=\left\{X_1, ..., X_n\right\}$, consider the following function:
$$f(x) = \sum_{i=1}^n \left| x - X_i\right|$$
Assume the median $X_M$ of the set $E$ does not belong to $E$: $X_M \notin E$. What is $f'(X_M)$?
We accept the fact that the median is the only point with this property.
$$f'(X_M) = - \sum_{i=1}^n \mathbb{1}_{X_i < X_M} + \sum_{i=1}^n \mathbb{1}_{X_i > X_M}$$
By definition of the median, $f'(X_M)=0$. By sorting the elements, one shows that $f'(x) = 0 \Longleftrightarrow x=X_M$.
Q5
Assume we have a set of observations $\left(X_i, Y_i\right)$ with $X_i, Y_i \in \mathbb{R}$.
Linear regression assumes a linear relation $Y_i = a X_i + b + \epsilon_i$
which minimizes the variance of the noise. Define:
$$E(a, b) = \sum_i \left(Y_i - (a X_i + b)\right)^2$$
We look for $a, b$ such that:
$$a^*, b^* = \arg \min E(a, b) = \arg \min \sum_i \left(Y_i - (a X_i + b)\right)^2$$
The function is differentiable and we find:
$$\frac{\partial E(a,b)}{\partial a} = - 2 \sum_i X_i ( Y_i - (a X_i + b)) \text{ and } \frac{\partial E(a,b)}{\partial b} = - 2 \sum_i ( Y_i - (a X_i + b))$$
It then suffices to set the derivatives to zero and solve a linear system. Define:
$$\begin{array}{l} \mathbb{E} X = \frac{1}{n}\sum_{i=1}^n X_i \text{ and } \mathbb{E} Y = \frac{1}{n}\sum_{i=1}^n Y_i \\ \mathbb{E}{X^2} = \frac{1}{n}\sum_{i=1}^n X_i^2 \text{ and } \mathbb{E} {XY} = \frac{1}{n}\sum_{i=1}^n X_i Y_i \end{array}$$
Finally:
$$a^* = \frac{ \mathbb{E} {XY} - \mathbb{E} X \, \mathbb{E} Y}{\mathbb{E}{X^2} - (\mathbb{E} X)^2} \text{ and } b^* = \mathbb{E} Y - a^* \, \mathbb{E} X$$
With several dimensions for $X$, the optimization problem becomes: find the coefficients $\beta^*$ that minimize:
$$E(\beta)=\sum_{i=1}^n \left(y_i - X_i \beta\right)^2 = \left \Vert Y - X\beta \right \Vert ^2$$
The solution is $\beta^* = (X'X)^{-1}X'Y$.
Write a function that computes this optimal vector.
End of explanation
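A quick numerical sanity check of the closed form (a sketch added here, not part of the original exercise): numpy's least-squares solver should return the same coefficients.
beta_closed = regression_lineaire(X, X + 1)
beta_lstsq, *_ = numpy.linalg.lstsq(X, X + 1, rcond=None)
print(beta_closed, beta_lstsq)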
def matrice_diagonale(W):
return numpy.diag(W)
matrice_diagonale([1, 2, 3])
Explanation: Q6
Write a function that transforms a vector into a diagonal matrix.
End of explanation
def regression_lineaire_ponderee(X, Y, W):
if len(W.shape) == 1 or W.shape[0] != W.shape[1]:
        # W is a vector, turn it into a diagonal matrix
W = matrice_diagonale(W.ravel())
wx = W @ X
xt = X.T
return inv(xt @ wx) @ xt @ W @ Y
X = numpy.array(sorted(ens)).reshape((len(ens), 1))
Y = X.copy()
Y[0] = max(X)
W = numpy.ones(len(ens))
W[0] = 0
regression_lineaire_ponderee(X, Y, W), regression_lineaire(X, Y)
Explanation: Q7
Now assume every observation is weighted by a weight $w_i$. We look for the vector $\beta$ that minimizes:
$$E(\beta)=\sum_{i=1}^n w_i \left( y_i - X_i \beta \right)^2 = \left \Vert W^{\frac{1}{2}}(Y - X\beta)\right \Vert^2$$
where $W=diag(w_1, ..., w_n)$ is the diagonal weight matrix. The solution is:
$$\beta_* = (X'WX)^{-1}X'WY$$
Write a function that computes the solution of the weighted regression. The ravel function is useful.
End of explanation
def calcule_z(X, beta, Y, W, delta=0.0001):
epsilon = numpy.abs(Y - X @ beta)
return numpy.reciprocal(numpy.maximum(epsilon, numpy.ones(epsilon.shape) * delta))
calcule_z(X * 1.0, numpy.array([[1.01]]), Y, W)
Explanation: Q8
Write a function that computes the following quantities (functions maximum, reciprocal).
$$z_i = \frac{1}{\max\left( \delta, \left|y_i - X_i \beta\right|\right)}$$
End of explanation
def algorithm(X, Y, delta=0.0001):
W = numpy.ones(X.shape[0])
for i in range(0, 10):
beta = regression_lineaire_ponderee(X, Y, W)
W = calcule_z(X, beta, Y, W, delta=delta)
E = numpy.abs(Y - X @ beta).sum()
print(i, E, beta)
return beta
X = numpy.random.rand(10, 1)
Y = X*2 + numpy.random.rand()
Y[0] = Y[0] + 100
algorithm(X, Y)
regression_lineaire(X, Y)
Explanation: Q9
We want to implement the following algorithm:
$w_i^{(1)} = 1$
$\beta_{(t)} = (X'W^{(t)}X)^{-1}X'W^{(t)}Y$
$w_i^{(t+1)} = \frac{1}{\max\left( \delta, \left|y_i - X_i \beta^{(t)}\right|\right)}$
$t = t+1$
Go back to step 2.
End of explanation
ens = ensemble_aleatoire(10)
Y = numpy.empty((len(ens), 1))
Y[:,0] = ens
X = numpy.ones((len(ens), 1))
mediane(ens)
Y.mean(axis=0)
regression_lineaire(X, Y)
algorithm(X,Y)
mediane(ens)
list(sorted(ens))
Explanation: Q10
End of explanation
import numpy
y = numpy.array([1, 2, 3])
M = numpy.array([[3, 4], [6, 7], [3, 3]])
M.shape, y.shape
try:
M @ y
except Exception as e:
print(e)
Explanation: La régression linéaire égale la moyenne, l'algorithme s'approche de la médiane.
Quelques explications et démonstrations
Cet énoncé est inspiré de Iteratively reweighted least squares. Cet algorithme permet notamment d'étendre la notion de médiane à des espaces vectoriels de plusieurs dimensions. On peut détermine un point $X_M$ qui minimise la quantité :
$$\sum_{i=1}^n \left| X_i - X_M \right |$$
Nous reprenons l'algorithme décrit ci-dessus :
$w_i^{(1)} = 1$
$\beta_{(t)} = (X'W^{(t)}X)^{-1}X'W^{(t)}Y$
$w_i^{(t+1)} = \frac{1}{\max\left( \delta, \left|y_i - X_i \beta^{(t)}\right|\right)}$
$t = t+1$
Retour à l'étape 2.
L'erreur quadratique pondéré est :
$$E_2(\beta, W) = \sum_{i=1}^n w_i \left\Vert Y_i - X_i \beta \right\Vert^2$$
Si $w_i = \frac{1}{\left|y_i - X_i \beta\right|}$, on remarque que :
$$E_2(\beta, W) = \sum_{i=1}^n \frac{\left\Vert Y_i - X_i \beta \right\Vert^2}{\left|y_i - X_i \beta\right|} = \sum_{i=1}^n \left|y_i - X_i \beta\right| = E_1(\beta)$$
We recover the absolute-value error optimized by quantile regression. Since step 2 consists in finding the coefficients $\beta$ that minimize $E_2(\beta, W^{(t)})$, it follows by construction that:
$$E_1(\beta^{(t+1)}) = E_2(\beta^{(t+1)}, W^{(t)}) \leqslant E_2(\beta^{(t)}, W^{(t)}) = E_1(\beta^{(t)})$$
The sequence $t \rightarrow E_1(\beta^{(t)})$ is decreasing and bounded below by 0, so it converges towards a minimum. Moreover, the function $\beta \rightarrow E_1(\beta)$ is convex, so it admits a single minimum value (though not necessarily a single point reaching it). The algorithm therefore converges towards the median. The parameter $\delta$ is there to avoid divisions by zero and the rounding errors made by the computer.
A few comments on the code
The symbol @ was introduced in Python 3.5 and is equivalent to the function numpy.dot. Matrix dimensions often cause problems.
End of explanation
y @ M
Explanation: By default, numpy treats a vector of shape (3,) as a row vector in this product, so the following expression works:
End of explanation
M.T @ y
Explanation: Or:
End of explanation |
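As a hedged aside (not part of the original notebook), the ambiguity disappears if the vector is reshaped into an explicit column vector; the name y_col below is purely illustrative.
y_col = y.reshape(-1, 1)   # shape (3, 1): an explicit column vector
(M.T @ y_col).shape        # (2, 1), the same product as M.T @ y but with unambiguous dimensions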
553 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unconstrained global optimization with Scipy
TODO
Step1: Define the objective function
Step2: The "basin-hopping" algorithm
Basin-hopping is a stochastic algorithm which attempts to find the global minimum of a function.
Official documentation
Step3: Performances analysis
Step4: Benchmark
Step5: The "Differential Evolution" (DE) algorithm
Differential Evolution is a stochastic algorithm which attempts to find the global minimum of a function.
Official documentation
Step6: Performances analysis
Step7: Benchmark | Python Code:
# Init matplotlib
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (8, 8)
# Setup PyAI
import sys
sys.path.insert(0, '/Users/jdecock/git/pub/jdhp/pyai')
import numpy as np
import time
import warnings
from scipy import optimize
# Plot functions
from pyai.optimize.utils import plot_contour_2d_solution_space
from pyai.optimize.utils import plot_2d_solution_space
from pyai.optimize.utils import array_list_to_array
from pyai.optimize.utils import plot_fx_wt_iteration_number
from pyai.optimize.utils import plot_err_wt_iteration_number
from pyai.optimize.utils import plot_err_wt_execution_time
from pyai.optimize.utils import plot_err_wt_num_feval
Explanation: Unconstrained global optimization with Scipy
TODO:
* Plots:
0. error w.t. ... => add an option to plot the current solution or the best current solution
4. error w.t. number of function evaluations + error w.t. total number of function evaluations (i.e. including the number of gradient and hessian evaluations)
6. (benchmark session! distinguish the derivative-free from the non-derivative-free case) average version of 3., 4., 5. over several runs with random initial state (+ error bar or box plot)
7. (benchmark session) err w.t. algorithm parameters (plot the iteration or evaluation number or execution time to reach on average an error lower than N% with e.g. N=99%)
Import required modules
End of explanation
## Objective function: Rosenbrock function (Scipy's implementation)
#func = scipy.optimize.rosen
# Set the objective function
#from pyai.optimize.functions import sphere as func
from pyai.optimize.functions import sphere2d as func
#from pyai.optimize.functions import additive_gaussian_noise as noise
from pyai.optimize.functions import multiplicative_gaussian_noise as noise
#from pyai.optimize.functions import additive_poisson_noise as noise
func.noise = noise # Comment this line to use a deterministic objective function
xmin = func.bounds[0]
xmax = func.bounds[1]
print(func)
print(xmin)
print(xmax)
print(func.ndim)
print(func.arg_min)
print(func(func.arg_min))
Explanation: Define the objective function
End of explanation
from scipy import optimize
x0 = np.random.uniform(-10., 10., size=2)
res = optimize.basinhopping(optimize.rosen,
x0, # The initial point
niter=100) # The number of basin hopping iterations
print("x* =", res.x)
print("f(x*) =", res.fun)
print("Cause of the termination:", ";".join(res.message))
print("Number of evaluations of the objective functions:", res.nfev)
print("Number of evaluations of the jacobian:", res.njev)
print("Number of iterations performed by the optimizer:", res.nit)
print(res)
Explanation: The "basin-hopping" algorithm
Basin-hopping is a stochastic algorithm which attempts to find the global minimum of a function.
Official documentation:
* https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.basinhopping.html#scipy.optimize.basinhopping
* More information about the algorithm: http://www-wales.ch.cam.ac.uk/
Basic usage
End of explanation
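A hedged sketch (not part of the original notebook): basinhopping also accepts a minimizer_kwargs dictionary that is forwarded to the local minimizer; choosing the derivative-free Nelder-Mead method below is only an illustration.
res_nm = optimize.basinhopping(optimize.rosen,
                               x0,                 # reuse the initial point defined above
                               niter=100,
                               minimizer_kwargs={"method": "Nelder-Mead"})
print("x* =", res_nm.x)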
%%time
it_x_list = []
it_fx_list = []
it_time_list = []
it_num_eval_list = []
def callback(x, f, accept):
it_x_list.append(x)
it_fx_list.append(f)
it_time_list.append(time.time() - init_time)
if hasattr(func, 'num_eval'):
it_num_eval_list.append(func.num_eval)
print(len(it_x_list), x, f, accept, it_num_eval_list[-1])
x_init = np.random.random(func.ndim) # draw samples in [0.0, 1.0)
min_bounds = func.bounds[0]
max_bounds = func.bounds[1]
x_init *= (max_bounds - min_bounds)
x_init += min_bounds
func.do_eval_logs = True
func.reset_eval_counters()
func.reset_eval_logs()
init_time = time.time()
with warnings.catch_warnings():
warnings.simplefilter("ignore")
res = optimize.basinhopping(func,
x_init, # The initial point
niter=100, # The number of basin hopping iterations
callback=callback,
disp=False) # Print status messages
func.do_eval_logs = False
eval_x_array = np.array(func.eval_logs_dict['x']).T
eval_error_array = np.array(func.eval_logs_dict['fx']) - func(func.arg_min)
it_x_array = np.array(it_x_list).T
it_error_array = np.array(it_fx_list) - func(func.arg_min)
it_time_array = np.array(it_time_list)
it_num_eval_array = np.array(it_num_eval_list)
print("x* =", res.x)
print("f(x*) =", res.fun)
print("Cause of the termination:", ";".join(res.message))
print("Number of evaluations of the objective functions:", res.nfev)
print("Number of evaluations of the jacobian:", res.njev)
print("Number of iterations performed by the optimizer:", res.nit)
plot_contour_2d_solution_space(func,
xmin=xmin,
xmax=xmax,
xstar=res.x,
xvisited=it_x_array,
title="Basin-Hopping");
plot_contour_2d_solution_space(func,
xmin=xmin,
xmax=xmax,
xstar=res.x,
xvisited=eval_x_array,
title="Basin-Hopping");
print(eval_x_array.shape)
print(eval_error_array.shape)
print(it_x_array.shape)
print(it_error_array.shape)
print(it_time_array.shape)
print(it_num_eval_array.shape)
fig, ax = plt.subplots(nrows=1, ncols=3, squeeze=True, figsize=(15, 5))
ax = ax.ravel()
plot_err_wt_iteration_number(it_error_array, ax=ax[0], x_log=True, y_log=True)
plot_err_wt_execution_time(it_error_array, it_time_array, ax=ax[1], x_log=True, y_log=True)
plot_err_wt_num_feval(it_error_array, it_num_eval_array, ax=ax[2], x_log=True, y_log=True)
plt.tight_layout(); # Fix plot margins errors
plot_err_wt_num_feval(eval_error_array, x_log=True, y_log=True)
Explanation: Performances analysis
End of explanation
%%time
eval_error_array_list = []
NUM_RUNS = 100
for run_index in range(NUM_RUNS):
x_init = np.random.random(func.ndim) # draw samples in [0.0, 1.0)
min_bounds = func.bounds[0]
max_bounds = func.bounds[1]
x_init *= (max_bounds - min_bounds)
x_init += min_bounds
func.do_eval_logs = True
func.reset_eval_counters()
func.reset_eval_logs()
with warnings.catch_warnings():
warnings.simplefilter("ignore")
res = optimize.basinhopping(func,
x_init, # The initial point
niter=100, # The number of basin hopping iterations
disp=False) # Print status messages
func.do_eval_logs = False
eval_error_array = np.array(func.eval_logs_dict['fx']) - func(func.arg_min)
print("x* =", res.x)
print("f(x*) =", res.fun)
#print("Cause of the termination:", ";".join(res.message))
#print("Number of evaluations of the objective functions:", res.nfev)
#print("Number of evaluations of the jacobian:", res.njev)
#print("Number of iterations performed by the optimizer:", res.nit)
eval_error_array_list.append(eval_error_array);
plot_err_wt_num_feval(array_list_to_array(eval_error_array_list), x_log=True, y_log=True, plot_option="mean")
Explanation: Benchmark
End of explanation
from scipy import optimize
bounds = [[-10, 10], [-10, 10]]
res = optimize.differential_evolution(optimize.rosen,
bounds, # The initial point
maxiter=100, # The number of DE iterations
polish=True)
print("x* =", res.x)
print("f(x*) =", res.fun)
print("Cause of the termination:", res.message)
print("Number of evaluations of the objective functions:", res.nfev)
print("Number of iterations performed by the optimizer:", res.nit)
print(res)
Explanation: The "Differential Evolution" (DE) algorithm
Differential Evolution is a stochastic algorithm which attempts to find the global minimum of a function.
Official documentation:
* https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution
More information:
* Practical advice
* Wikipedia article
Basic usage
End of explanation
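A hedged sketch (not in the original notebook) of the main knobs exposed by scipy's differential evolution; the values below are chosen purely for illustration.
res_tuned = optimize.differential_evolution(optimize.rosen,
                                            bounds,              # reuse the bounds defined above
                                            popsize=20,          # population multiplier (total size ~ popsize * ndim)
                                            mutation=(0.5, 1.0), # dithered mutation constant
                                            recombination=0.7,   # crossover probability
                                            seed=42,             # make the stochastic search reproducible
                                            maxiter=100)
print("x* =", res_tuned.x, "f(x*) =", res_tuned.fun)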
%%time
bounds = func.bounds.T.tolist()
it_x_list = []
it_fx_list = []
it_time_list = []
it_num_eval_list = []
def callback(xk, convergence):
it_x_list.append(xk)
it_fx_list.append(func(xk))
it_time_list.append(time.time() - init_time)
if hasattr(func, 'num_eval'):
it_num_eval_list.append(func.num_eval)
print(len(it_x_list), xk, it_fx_list[-1], convergence, it_num_eval_list[-1])
func.do_eval_logs = True
func.reset_eval_counters()
func.reset_eval_logs()
init_time = time.time()
with warnings.catch_warnings():
warnings.simplefilter("ignore")
res = optimize.differential_evolution(func,
bounds, # The initial point
maxiter=100, # The number of DE iterations
callback=callback,
polish=False,
disp=False) # Print status messages
func.do_eval_logs = False
eval_x_array = np.array(func.eval_logs_dict['x']).T
eval_error_array = np.array(func.eval_logs_dict['fx']) - func(func.arg_min)
it_x_array = np.array(it_x_list).T
it_error_array = np.array(it_fx_list) - func(func.arg_min)
it_time_array = np.array(it_time_list)
it_num_eval_array = np.array(it_num_eval_list)
print("x* =", res.x)
print("f(x*) =", res.fun)
print("Cause of the termination:", res.message)
print("Number of evaluations of the objective functions:", res.nfev)
print("Number of iterations performed by the optimizer:", res.nit)
plot_contour_2d_solution_space(func,
xmin=xmin,
xmax=xmax,
xstar=res.x,
xvisited=it_x_array,
title="Differential Evolution");
plot_contour_2d_solution_space(func,
xmin=xmin,
xmax=xmax,
xstar=res.x,
xvisited=eval_x_array,
title="Differential Evolution");
fig, ax = plt.subplots(nrows=1, ncols=3, squeeze=True, figsize=(15, 5))
ax = ax.ravel()
plot_err_wt_iteration_number(it_error_array, ax=ax[0], x_log=True, y_log=True)
plot_err_wt_execution_time(it_error_array, it_time_array, ax=ax[1], x_log=True, y_log=True)
plot_err_wt_num_feval(it_error_array, it_num_eval_array, ax=ax[2], x_log=True, y_log=True)
plt.tight_layout(); # Fix plot margins errors
plot_err_wt_num_feval(eval_error_array, x_log=True, y_log=True);
Explanation: Performances analysis
End of explanation
%%time
eval_error_array_list = []
NUM_RUNS = 100
for run_index in range(NUM_RUNS):
bounds = func.bounds.T.tolist()
func.do_eval_logs = True
func.reset_eval_counters()
func.reset_eval_logs()
with warnings.catch_warnings():
warnings.simplefilter("ignore")
res = optimize.differential_evolution(func,
bounds, # The initial point
maxiter=100, # The number of DE iterations
polish=False,
disp=False) # Print status messages
func.do_eval_logs = False
eval_error_array = np.array(func.eval_logs_dict['fx']) - func(func.arg_min)
print("x* =", res.x)
print("f(x*) =", res.fun)
#print("Cause of the termination:", ";".join(res.message))
#print("Number of evaluations of the objective functions:", res.nfev)
#print("Number of evaluations of the jacobian:", res.njev)
#print("Number of iterations performed by the optimizer:", res.nit)
eval_error_array_list.append(eval_error_array);
plot_err_wt_num_feval(array_list_to_array(eval_error_array_list), x_log=True, y_log=True, plot_option="mean")
Explanation: Benchmark
End of explanation |
554 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
벡터 공간
벡터의 기하학적 의미
길이가 $K$인 벡터(vector) $a$는 $K$차원의 공간에서 원점과 벡터 $a$의 값으로 표시되는 점을 연결한 화살표(arrow)로 간주할 수 있다.
$$ a = \begin{bmatrix}1 \ 2 \end{bmatrix} $$
Step1: 벡터의 길이
벡터 $a$ 의 길이를 놈(norm) $\| a \|$ 이라고 하며 다음과 같이 계산할 수 있다.
$$ \| a \| = \sqrt{a^T a } = \sqrt{a_1^2 + \cdots + a_K^2} $$
numpy의 linalg 서브 패키지의 norm 명령으로 벡터의 길이를 계산할 수 있다.
Step2: 단위 벡터
길이가 1인 벡터를 단위 벡터(unit vector)라고 한다. 예를 들어 다음과 같은 벡터들은 모두 단위 벡터이다.
$$
a = \begin{bmatrix}1 \ 0\end{bmatrix} ,\;\;
b = \begin{bmatrix}0 \ 1\end{bmatrix} ,\;\;
c = \begin{bmatrix} \dfrac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{2}} \end{bmatrix}
$$
Step3: 벡터의 합
벡터와 벡터의 합은 벡터가 된다.
$$
a = \begin{bmatrix}1 \ 2\end{bmatrix} ,\;\;
b = \begin{bmatrix}2 \ 1\end{bmatrix} \;\;\; \rightarrow \;\;\;
c = a + b = \begin{bmatrix}3 \ 3\end{bmatrix} \;\;
$$
Step4: 벡터의 집합 중에서 집합의 원소인 두 벡터의 선형 조합(스칼라 곱의 합)이 그 집합의 원소이면 벡터 공간이라고 한다.
$$ a, b \in \mathbf{R} \;\; \text{ and } \;\; \alpha_1a + \alpha_2b \in \mathbf{R} $$
벡터의 분해
어떤 두 벡터 $a$, $b$의 합이 다른 벡터 $c$가 될 때 $c$가 두 벡터 성분(vector component) $a$, $b$으로 분해(decomposition)된다고 말할 수 있다.
벡터의 직교
두 벡터 $a$와 $b$가 이루는 각이 90도이면 서로 직교(orthogonal)라고 하며 $ a \perp b $로 표시한다.
서로 직교인 두 벡터의 벡터 내적(inner product, dot product)는 0이된다.
$$ a^T b = b^T a = 0 \;\;\;\; \leftrightarrow \;\;\;\; a \perp b $$
예를 들어 다음 두 벡터는 서로 직교한다.
$$
a = \begin{bmatrix}1 \ 1\end{bmatrix} ,\;\;
b = \begin{bmatrix}-1 \ 1\end{bmatrix} \;\;\;\; \rightarrow \;\;\;\;
a^T b = \begin{bmatrix}1 & 1\end{bmatrix} \begin{bmatrix}-1 \ 1\end{bmatrix} = -1 + 1 = 0
$$
Step5: 투영
벡터 $a$를 다른 벡터 $b$에 직교하는 성분 $a_1$ 와 나머지 성분 $a_2 = a - a_1$로 분해할 수 있다. 이 때 $a_2$는 $b$와 평행하며 이 길이를 벡터 $a$의 벡터 $b$에 대한 투영(projection)이라고 한다.
벡터의 투영은 다음과 같이 내적을 사용하여 구할 수 있다.
$$ a = a_1 + a_2 $$
$$ a_1 \perp b \;\; \text{ and } \;\; a_2 = a - a_1 $$
이면
$$ \| a_2 \| = a^T\dfrac{b}{\|b\|} = \dfrac{a^Tb}{\|b\|} = \dfrac{b^Ta}{\|b\|} $$
이다.
또한 두 벡터 사이의 각도 $\theta$는 다음과 같이 구한다.
$$ \cos\theta = \dfrac{\| a_2 \|}{\| a \|} = \dfrac{a^Tb}{\|a\|\|b\|}$$
Step6: 벡터의 선형 종속과 선형 독립
벡터들의 선형 조합이 0이 되는 모두 0이 아닌 스칼라값들이 존재하면 그 벡터들은 선형 종속(linearly dependent)이라고 한다.
$$
a = \begin{bmatrix}1 \ 2\end{bmatrix} ,\;\;
b = \begin{bmatrix}3 \ 3\end{bmatrix} \;\;
c = \begin{bmatrix}10 \ 14\end{bmatrix} \;\;
$$
$$
2a + b - \frac{1}{2}c = 0
$$
Step7: 벡터들의 선형 조합이 0이 되는 모두 0이 아닌 스칼라값들이 존재하지 않으면 그 벡터들은 선형 독립(linearly independent)이라고 한다.
$$ \alpha_1 a_1 + \cdots + \alpha_K a_K = 0 \;\;\;\; \leftrightarrow \;\;\;\; \alpha_1 = \cdots = \alpha_K = 0 $$
기저 벡터
벡터 공간에 속하는 벡터의 집합이 선형 독립이고 다른 모든 벡터 공간의 벡터들이 그 벡터 집합의 선형 조합으로 나타나면 그 벡터 집합을 벡터 공간의 기저 벡터(basis vector)라고 한다.
예를 들어 다음과 같은 두 벡터는 2차원 벡터 공간의 기저 벡터이다.
$$
a = \begin{bmatrix}1 \ 0\end{bmatrix} ,\;\;
b = \begin{bmatrix}0 \ 1\end{bmatrix} \;\;
$$
또는
$$
a = \begin{bmatrix}1 \ 1\end{bmatrix} ,\;\;
b = \begin{bmatrix}2 \ 3\end{bmatrix} \;\;
$$
다음과 같은 두 벡터는 2차원 벡터 공간의 기저 벡터가 될 수 없다.
$$
a = \begin{bmatrix}1 \ 2\end{bmatrix} ,\;\;
b = \begin{bmatrix}2 \ 4\end{bmatrix} \;\;
$$
열 공간
행렬은 열 벡터의 집합으로 볼 수 있다. 이 때 열 벡터들의 조합으로 생성되는 벡터 공간을 열 공간(column space)이라고 한다.
$$
A = \begin{bmatrix} 1 & 5 & 6 \ 2 & 6 & 8 \ 7 & 1 & 8 \end{bmatrix}
\;\;\;\; \rightarrow \;\;\;\;
\alpha_1 \begin{bmatrix} 1 \ 2 \ 7 \end{bmatrix} +
\alpha_2 \begin{bmatrix} 5 \ 6 \ 1 \end{bmatrix} +
\alpha_3 \begin{bmatrix} 6 \ 8 \ 8 \end{bmatrix}
\; \in \; \text{column space}
$$
열 랭크
행렬의 열 벡터 중 서로 독립인 열 벡터의 최대 갯수를 열 랭크(column rank) 혹은 랭크(rank)라고 한다.
예를 들어 다음 행렬의 랭크는 2이다.
$$
A = \begin{bmatrix} 1 & 5 & 6 \ 2 & 6 & 8 \ 3 & 11 & 14 \end{bmatrix}
$$
numpy의 linalg 서브 패키지의 matrix_rank 명령으로 랭크를 계산할 수 있다.
Step8: 좌표
벡터의 성분, 즉 좌표(coordinate)는 표준 기저 벡터들에 대한 해당 벡터의 투영(projection)으로 볼 수 있다.
Step9: 좌표 변환
새로운 기저 벡터를에 대해 벡터 투영을 계산하는 것을 좌표 변환(coordinate transform)이라고 한다.
좌표 변환은 새로운 기저 벡터로 이루어진 변환 행렬(transform matrix) $A$ 와의 내적으로 계산한다.
$$ Aa' = a $$
$$ a' = A^{-1}a $$
예를 들어, 기존의 기저 벡터가
$$
e_1 = \begin{bmatrix}1 \ 0\end{bmatrix} ,\;\;
e_2 = \begin{bmatrix}0 \ 1\end{bmatrix} \;\;
$$
이면 벡터 $a$는 사실
$$
a = \begin{bmatrix}2 \ 2\end{bmatrix} = 2 \begin{bmatrix}1 \ 0\end{bmatrix} + 2 \begin{bmatrix}0 \ 1 \end{bmatrix} = 2 e_1 + 2 e_2
$$
새로운 기저 벡터가
$$
g_1 = \begin{bmatrix} \dfrac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{2}} \end{bmatrix} ,\;\;
g_2 = \begin{bmatrix} -\dfrac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{2}} \end{bmatrix} ,\;\;
$$
이면 벡터 $a$의 좌표는 다음과 같이 바뀐다.
$$
a = \begin{bmatrix}2 \ 2\end{bmatrix} \;\;\;\; \rightarrow \;\;\;\;
a' = A^{-1}a =
\begin{bmatrix}
e'_1 & e'_2
\end{bmatrix}
a
=
\begin{bmatrix}
\dfrac{1}{\sqrt{2}} & -\dfrac{1}{\sqrt{2}} \
\dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}}
\end{bmatrix}^{-1}
\begin{bmatrix}2 \ 2\end{bmatrix}
=
\begin{bmatrix}
\dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}} \
-\dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}}
\end{bmatrix}
\begin{bmatrix}2 \ 2\end{bmatrix}
= \begin{bmatrix}2\sqrt{2}\0\end{bmatrix}
$$ | Python Code:
a = [1, 2]
plt.annotate('', xy=a, xytext=(0,0), arrowprops=dict(facecolor='black'))
plt.plot(0, 0, 'ro', ms=10)
plt.plot(a[0], a[1], 'ro', ms=10)
plt.text(0.35, 1.15, "$a$", fontdict={"size": 18})
plt.xticks(np.arange(-2, 4))
plt.yticks(np.arange(-1, 4))
plt.xlim(-2.4, 3.4)
plt.ylim(-1.2, 3.2)
plt.show()
Explanation: 벡터 공간
벡터의 기하학적 의미
길이가 $K$인 벡터(vector) $a$는 $K$차원의 공간에서 원점과 벡터 $a$의 값으로 표시되는 점을 연결한 화살표(arrow)로 간주할 수 있다.
$$ a = \begin{bmatrix}1 \ 2 \end{bmatrix} $$
End of explanation
a = np.array([1, 1])
np.linalg.norm(a)
Explanation: 벡터의 길이
벡터 $a$ 의 길이를 놈(norm) $\| a \|$ 이라고 하며 다음과 같이 계산할 수 있다.
$$ \| a \| = \sqrt{a^T a } = \sqrt{a_1^2 + \cdots + a_K^2} $$
numpy의 linalg 서브 패키지의 norm 명령으로 벡터의 길이를 계산할 수 있다.
End of explanation
a = np.array([1, 0])
b = np.array([0, 1])
c = np.array([1/np.sqrt(2), 1/np.sqrt(2)])
np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
Explanation: 단위 벡터
길이가 1인 벡터를 단위 벡터(unit vector)라고 한다. 예를 들어 다음과 같은 벡터들은 모두 단위 벡터이다.
$$
a = \begin{bmatrix}1 \ 0\end{bmatrix} ,\;\;
b = \begin{bmatrix}0 \ 1\end{bmatrix} ,\;\;
c = \begin{bmatrix} \dfrac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{2}} \end{bmatrix}
$$
End of explanation
a = np.array([1, 2])
b = np.array([2, 1])
c = a + b
plt.annotate('', xy=a, xytext=(0,0), arrowprops=dict(facecolor='gray'))
plt.annotate('', xy=b, xytext=(0,0), arrowprops=dict(facecolor='gray'))
plt.annotate('', xy=c, xytext=(0,0), arrowprops=dict(facecolor='black'))
plt.plot(0, 0, 'ro', ms=10)
plt.plot(a[0], a[1], 'ro', ms=10)
plt.plot(b[0], b[1], 'ro', ms=10)
plt.plot(c[0], c[1], 'ro', ms=10)
plt.plot([a[0], c[0]], [a[1], c[1]], 'k--')
plt.plot([b[0], c[0]], [b[1], c[1]], 'k--')
plt.text(0.35, 1.15, "$a$", fontdict={"size": 18})
plt.text(1.15, 0.25, "$b$", fontdict={"size": 18})
plt.text(1.25, 1.45, "$c$", fontdict={"size": 18})
plt.xticks(np.arange(-2, 4))
plt.yticks(np.arange(-1, 4))
plt.xlim(-1.4, 4.4)
plt.ylim(-0.6, 3.8)
plt.show()
Explanation: 벡터의 합
벡터와 벡터의 합은 벡터가 된다.
$$
a = \begin{bmatrix}1 \ 2\end{bmatrix} ,\;\;
b = \begin{bmatrix}2 \ 1\end{bmatrix} \;\;\; \rightarrow \;\;\;
c = a + b = \begin{bmatrix}3 \ 3\end{bmatrix} \;\;
$$
End of explanation
a = np.array([1, 1])
b = np.array([-1, 1])
np.dot(a, b)
Explanation: 벡터의 집합 중에서 집합의 원소인 두 벡터의 선형 조합(스칼라 곱의 합)이 그 집합의 원소이면 벡터 공간이라고 한다.
$$ a, b \in \mathbf{R} \;\; \text{ and } \;\; \alpha_1a + \alpha_2b \in \mathbf{R} $$
벡터의 분해
어떤 두 벡터 $a$, $b$의 합이 다른 벡터 $c$가 될 때 $c$가 두 벡터 성분(vector component) $a$, $b$으로 분해(decomposition)된다고 말할 수 있다.
벡터의 직교
두 벡터 $a$와 $b$가 이루는 각이 90도이면 서로 직교(orthogonal)라고 하며 $ a \perp b $로 표시한다.
서로 직교인 두 벡터의 벡터 내적(inner product, dot product)는 0이된다.
$$ a^T b = b^T a = 0 \;\;\;\; \leftrightarrow \;\;\;\; a \perp b $$
예를 들어 다음 두 벡터는 서로 직교한다.
$$
a = \begin{bmatrix}1 \ 1\end{bmatrix} ,\;\;
b = \begin{bmatrix}-1 \ 1\end{bmatrix} \;\;\;\; \rightarrow \;\;\;\;
a^T b = \begin{bmatrix}1 & 1\end{bmatrix} \begin{bmatrix}-1 \ 1\end{bmatrix} = -1 + 1 = 0
$$
End of explanation
a = np.array([1, 2])
b = np.array([2, 0])
a2 = np.dot(a, b)/np.linalg.norm(b) * np.array([1, 0])
a1 = a - a2
plt.annotate('', xy=a, xytext=(0,0), arrowprops=dict(facecolor='gray'))
plt.annotate('', xy=b, xytext=(0,0), arrowprops=dict(facecolor='gray'))
plt.annotate('', xy=a2, xytext=(0,0), arrowprops=dict(facecolor='green'))
plt.annotate('', xy=a1, xytext=(0,0), arrowprops=dict(facecolor='green'))
plt.plot(0, 0, 'ro', ms=10)
plt.plot(a[0], a[1], 'ro', ms=10)
plt.plot(b[0], b[1], 'ro', ms=10)
plt.text(0.35, 1.15, "$a$", fontdict={"size": 18})
plt.text(1.55, 0.15, "$b$", fontdict={"size": 18})
plt.text(-0.2, 1.05, "$a_1$", fontdict={"size": 18})
plt.text(0.50, 0.15, "$a_2$", fontdict={"size": 18})
plt.xticks(np.arange(-2, 4))
plt.yticks(np.arange(-1, 4))
plt.xlim(-1.5, 3.5)
plt.ylim(-0.5, 3)
plt.show()
Explanation: 투영
벡터 $a$를 다른 벡터 $b$에 직교하는 성분 $a_1$ 와 나머지 성분 $a_2 = a - a_1$로 분해할 수 있다. 이 때 $a_2$는 $b$와 평행하며 이 길이를 벡터 $a$의 벡터 $b$에 대한 투영(projection)이라고 한다.
벡터의 투영은 다음과 같이 내적을 사용하여 구할 수 있다.
$$ a = a_1 + a_2 $$
$$ a_1 \perp b \;\; \text{ and } \;\; a_2 = a - a_1 $$
이면
$$ \| a_2 \| = a^T\dfrac{b}{\|b\|} = \dfrac{a^Tb}{\|b\|} = \dfrac{b^Ta}{\|b\|} $$
이다.
또한 두 벡터 사이의 각도 $\theta$는 다음과 같이 구한다.
$$ \cos\theta = \dfrac{\| a_2 \|}{\| a \|} = \dfrac{a^Tb}{\|a\|\|b\|}$$
End of explanation
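A small hedged check (not in the original): the angle formula above can be verified numerically with the same a, b and a2.
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
cos_theta, np.linalg.norm(a2) / np.linalg.norm(a)   # the two expressions for cos(theta) agree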
a = np.array([1, 2])
b = np.array([3, 3])
c = np.array([10, 14])
2*a + b - 0.5*c
Explanation: 벡터의 선형 종속과 선형 독립
벡터들의 선형 조합이 0이 되는 모두 0이 아닌 스칼라값들이 존재하면 그 벡터들은 선형 종속(linearly dependent)이라고 한다.
$$
a = \begin{bmatrix}1 \ 2\end{bmatrix} ,\;\;
b = \begin{bmatrix}3 \ 3\end{bmatrix} \;\;
c = \begin{bmatrix}10 \ 14\end{bmatrix} \;\;
$$
$$
2a + b - \frac{1}{2}c = 0
$$
End of explanation
A = np.array([[1, 5, 6], [2, 6, 8], [3, 11, 14]])
np.linalg.matrix_rank(A)
Explanation: 벡터들의 선형 조합이 0이 되는 모두 0이 아닌 스칼라값들이 존재하지 않으면 그 벡터들은 선형 독립(linearly independent)이라고 한다.
$$ \alpha_1 a_1 + \cdots + \alpha_K a_K = 0 \;\;\;\; \leftrightarrow \;\;\;\; \alpha_1 = \cdots = \alpha_K = 0 $$
기저 벡터
벡터 공간에 속하는 벡터의 집합이 선형 독립이고 다른 모든 벡터 공간의 벡터들이 그 벡터 집합의 선형 조합으로 나타나면 그 벡터 집합을 벡터 공간의 기저 벡터(basis vector)라고 한다.
예를 들어 다음과 같은 두 벡터는 2차원 벡터 공간의 기저 벡터이다.
$$
a = \begin{bmatrix}1 \ 0\end{bmatrix} ,\;\;
b = \begin{bmatrix}0 \ 1\end{bmatrix} \;\;
$$
또는
$$
a = \begin{bmatrix}1 \ 1\end{bmatrix} ,\;\;
b = \begin{bmatrix}2 \ 3\end{bmatrix} \;\;
$$
다음과 같은 두 벡터는 2차원 벡터 공간의 기저 벡터가 될 수 없다.
$$
a = \begin{bmatrix}1 \ 2\end{bmatrix} ,\;\;
b = \begin{bmatrix}2 \ 4\end{bmatrix} \;\;
$$
열 공간
행렬은 열 벡터의 집합으로 볼 수 있다. 이 때 열 벡터들의 조합으로 생성되는 벡터 공간을 열 공간(column space)이라고 한다.
$$
A = \begin{bmatrix} 1 & 5 & 6 \ 2 & 6 & 8 \ 7 & 1 & 8 \end{bmatrix}
\;\;\;\; \rightarrow \;\;\;\;
\alpha_1 \begin{bmatrix} 1 \ 2 \ 7 \end{bmatrix} +
\alpha_2 \begin{bmatrix} 5 \ 6 \ 1 \end{bmatrix} +
\alpha_3 \begin{bmatrix} 6 \ 8 \ 8 \end{bmatrix}
\; \in \; \text{column space}
$$
열 랭크
행렬의 열 벡터 중 서로 독립인 열 벡터의 최대 갯수를 열 랭크(column rank) 혹은 랭크(rank)라고 한다.
예를 들어 다음 행렬의 랭크는 2이다.
$$
A = \begin{bmatrix} 1 & 5 & 6 \ 2 & 6 & 8 \ 3 & 11 & 14 \end{bmatrix}
$$
numpy의 linalg 서브 패키지의 matrix_rank 명령으로 랭크를 계산할 수 있다.
End of explanation
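A hedged extra check (not in the original): the pair [1, 2] and [2, 4] mentioned above cannot form a basis, and matrix_rank confirms it.
B = np.array([[1, 2], [2, 4]])   # the two column vectors are linearly dependent
np.linalg.matrix_rank(B)         # 1, so they do not span the 2-D space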
e1 = np.array([1, 0])
e2 = np.array([0, 1])
a = np.array([2, 2])
plt.annotate('', xy=e1, xytext=(0,0), arrowprops=dict(facecolor='green'))
plt.annotate('', xy=e2, xytext=(0,0), arrowprops=dict(facecolor='green'))
plt.annotate('', xy=a, xytext=(0,0), arrowprops=dict(facecolor='gray'))
plt.plot(0, 0, 'ro', ms=10)
plt.plot(a[0], a[1], 'ro', ms=10)
plt.text(1.05, 1.35, "$a$", fontdict={"size": 18})
plt.text(-0.2, 0.5, "$e_1$", fontdict={"size": 18})
plt.text(0.5, -0.2, "$e_2$", fontdict={"size": 18})
plt.xticks(np.arange(-2, 4))
plt.yticks(np.arange(-1, 4))
plt.xlim(-1.5, 3.5)
plt.ylim(-0.5, 3)
plt.show()
Explanation: 좌표
벡터의 성분, 즉 좌표(coordinate)는 표준 기저 벡터들에 대한 해당 벡터의 투영(projection)으로 볼 수 있다.
End of explanation
e1 = np.array([1, 0])
e2 = np.array([0, 1])
a = np.array([2, 2])
g1 = np.array([1, 1])/np.sqrt(2)
g2 = np.array([-1, 1])/np.sqrt(2)
plt.annotate('', xy=e1, xytext=(0,0), arrowprops=dict(facecolor='green'))
plt.annotate('', xy=e2, xytext=(0,0), arrowprops=dict(facecolor='green'))
plt.annotate('', xy=g1, xytext=(0,0), arrowprops=dict(facecolor='red'))
plt.annotate('', xy=g2, xytext=(0,0), arrowprops=dict(facecolor='red'))
plt.annotate('', xy=a, xytext=(0,0), arrowprops=dict(facecolor='gray', alpha=0.5))
plt.plot(0, 0, 'ro', ms=10)
plt.plot(a[0], a[1], 'ro', ms=10)
plt.text(1.05, 1.35, "$a$", fontdict={"size": 18})
plt.text(-0.2, 0.5, "$e_1$", fontdict={"size": 18})
plt.text(0.5, -0.2, "$e_2$", fontdict={"size": 18})
plt.text(0.2, 0.5, "$g_1$", fontdict={"size": 18})
plt.text(-0.6, 0.2, "$g_2$", fontdict={"size": 18})
plt.xticks(np.arange(-2, 4))
plt.yticks(np.arange(-1, 4))
plt.xlim(-1.5, 3.5)
plt.ylim(-0.5, 3)
plt.show()
A = np.vstack([g1, g2]).T
A
Ainv = np.linalg.inv(A)
Ainv
Ainv.dot(a)
Explanation: 좌표 변환
새로운 기저 벡터를에 대해 벡터 투영을 계산하는 것을 좌표 변환(coordinate transform)이라고 한다.
좌표 변환은 새로운 기저 벡터로 이루어진 변환 행렬(transform matrix) $A$ 와의 내적으로 계산한다.
$$ Aa' = a $$
$$ a' = A^{-1}a $$
예를 들어, 기존의 기저 벡터가
$$
e_1 = \begin{bmatrix}1 \ 0\end{bmatrix} ,\;\;
e_2 = \begin{bmatrix}0 \ 1\end{bmatrix} \;\;
$$
이면 벡터 $a$는 사실
$$
a = \begin{bmatrix}2 \ 2\end{bmatrix} = 2 \begin{bmatrix}1 \ 0\end{bmatrix} + 2 \begin{bmatrix}0 \ 1 \end{bmatrix} = 2 e_1 + 2 e_2
$$
새로운 기저 벡터가
$$
g_1 = \begin{bmatrix} \dfrac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{2}} \end{bmatrix} ,\;\;
g_2 = \begin{bmatrix} -\dfrac{1}{\sqrt{2}} \ \dfrac{1}{\sqrt{2}} \end{bmatrix} ,\;\;
$$
이면 벡터 $a$의 좌표는 다음과 같이 바뀐다.
$$
a = \begin{bmatrix}2 \ 2\end{bmatrix} \;\;\;\; \rightarrow \;\;\;\;
a' = A^{-1}a =
\begin{bmatrix}
e'_1 & e'_2
\end{bmatrix}
a
=
\begin{bmatrix}
\dfrac{1}{\sqrt{2}} & -\dfrac{1}{\sqrt{2}} \
\dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}}
\end{bmatrix}^{-1}
\begin{bmatrix}2 \ 2\end{bmatrix}
=
\begin{bmatrix}
\dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}} \
-\dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{2}}
\end{bmatrix}
\begin{bmatrix}2 \ 2\end{bmatrix}
= \begin{bmatrix}2\sqrt{2}\0\end{bmatrix}
$$
End of explanation |
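A minimal hedged check (not part of the original notebook): multiplying the new coordinates back by A should recover the original vector a.
a_new = Ainv.dot(a)   # coordinates in the new basis, [2*sqrt(2), 0]
A.dot(a_new)          # gives back the original a = [2, 2]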
555 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Neural Networks
In this notebook, we train a CNN to classify images from the CIFAR-10 database.
1. Load CIFAR-10 Database
Step1: 2. Visualize the First 36 Training Images
Step2: 3. Rescale the Images by Dividing Every Pixel in Every Image by 255
Step3: 4. Break Dataset into Training, Testing, and Validation Sets
Step4: 5. Define the Model Architecture
Step5: 6. Compile the Model
Step6: 7. Train the Model
Step7: 8. Load the Model with the Best Validation Accuracy
Step8: 9. Calculate Classification Accuracy on Test Set
Step9: 10. Visualize Some Predictions
This may give you some insight into why the network is misclassifying certain objects. | Python Code:
import keras
from keras.datasets import cifar10
# load the pre-shuffled train and test data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
Explanation: Convolutional Neural Networks
In this notebook, we train a CNN to classify images from the CIFAR-10 database.
1. Load CIFAR-10 Database
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure(figsize=(20,5))
for i in range(36):
ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(x_train[i]))
Explanation: 2. Visualize the First 36 Training Images
End of explanation
# rescale [0,255] --> [0,1]
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
Explanation: 3. Rescale the Images by Dividing Every Pixel in Every Image by 255
End of explanation
from keras.utils import np_utils
# one-hot encode the labels
num_classes = len(np.unique(y_train))
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# break training set into training and validation sets
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]
# print shape of training set
print('x_train shape:', x_train.shape)
# print number of training, validation, and test images
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print(x_valid.shape[0], 'validation samples')
Explanation: 4. Break Dataset into Training, Testing, and Validation Sets
End of explanation
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu',
input_shape=(32, 32, 3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(500, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(10, activation='softmax'))
model.summary()
Explanation: 5. Define the Model Architecture
End of explanation
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
Explanation: 6. Compile the Model
End of explanation
from keras.callbacks import ModelCheckpoint
# train the model
checkpointer = ModelCheckpoint(filepath='model.weights.best.hdf5', verbose=1,
save_best_only=True)
hist = model.fit(x_train, y_train, batch_size=32, epochs=100,
validation_data=(x_valid, y_valid), callbacks=[checkpointer],
verbose=2, shuffle=True)
Explanation: 7. Train the Model
End of explanation
# load the weights that yielded the best validation accuracy
model.load_weights('model.weights.best.hdf5')
Explanation: 8. Load the Model with the Best Validation Accuracy
End of explanation
# evaluate and print test accuracy
score = model.evaluate(x_test, y_test, verbose=0)
print('\n', 'Test accuracy:', score[1])
Explanation: 9. Calculate Classification Accuracy on Test Set
End of explanation
# get predictions on the test set
y_hat = model.predict(x_test)
# define text labels (source: https://www.cs.toronto.edu/~kriz/cifar.html)
cifar10_labels = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
# plot a random sample of test images, their predicted labels, and ground truth
fig = plt.figure(figsize=(20, 8))
for i, idx in enumerate(np.random.choice(x_test.shape[0], size=32, replace=False)):
ax = fig.add_subplot(4, 8, i + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(x_test[idx]))
pred_idx = np.argmax(y_hat[idx])
true_idx = np.argmax(y_test[idx])
ax.set_title("{} ({})".format(cifar10_labels[pred_idx], cifar10_labels[true_idx]),
color=("green" if pred_idx == true_idx else "red"))
Explanation: 10. Visualize Some Predictions
This may give you some insight into why the network is misclassifying certain objects.
End of explanation |
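As a hedged follow-up (not part of the original notebook), a confusion matrix summarizes which classes are mixed up; scikit-learn is assumed to be available here.
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(np.argmax(y_test, axis=1), np.argmax(y_hat, axis=1))
print(cm)   # rows are true classes, columns are predicted classes, in the order of cifar10_labels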
556 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CNN for CIFAR10
CNN model that can be used for classification tasks.
In this demo, we will train a 3-layer CNN on the CIFAR10 dataset and show two implementations of the CNN model: the first uses PyTorch's built-in nn.Conv2d API, the second uses tensor-level convolution.
Let us first import the required modules.
Step1: CNN using PyTorch nn.Conv2D
In this example, we use nn.Conv2D to create a 3-layer CNN model. Note the following
Step2: Scratch pad idea on how to do convolution
In the code below, the idea of convolution is tested on an image.
A 3 x 3 kernel is used with all elements set to 1. If this kernel is convolved with an RGB image, the result should look like a grayscale image, since every output channel receives the same value: the sum over the 3 x 3 neighbourhood of all RGB channels.
Step3: PyTorch Lightning Module for CNN
This is the PL module so we can easily change the implementation of the CNN and compare the results. More detailed results can be found on the wandb.ai page.
Using the model parameter, we can easily switch between different model implementations. We also benchmark the results against a ResNet18 model.
Step4: Arguments
Please change the --model argument to switch between the different models to be used as the CIFAR10 classifier.
The argument --conv2d can be used to switch between the two implementations of the 2D convolutional layer.
Step5: Weights and Biases Callback
The callback logs train and validation metrics to wandb. It also logs sample predictions. This is similar to our WandbCallback example for MNIST.
Step6: Training and Validation of Different Models
The validation accuracy of our SimpleCNN is ~73%.
Meanwhile the ResNet18 model has accuracy of ~78%. Consider that SimpleCNN uses 113k parameters while ResNet18 uses 11.2M parameters. SimpleCNN is very efficient. Recall that our SimpleMLP hsa accuracy of ~53% only. | Python Code:
import torch
import torchvision
import wandb
import math
import time
import numpy as np
import matplotlib.pyplot as plt
from torch import nn
from einops import rearrange
from argparse import ArgumentParser
from pytorch_lightning import LightningModule, Trainer, Callback
from pytorch_lightning.loggers import WandbLogger
from torchmetrics.functional import accuracy
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR
from matplotlib import image
Explanation: CNN for CIFAR10
CNN model that can be used for classification tasks.
In this demo, we will train a 3-layer CNN on the CIFAR10 dataset and show two implementations of the CNN model: the first uses PyTorch's built-in nn.Conv2d API, the second uses tensor-level convolution.
Let us first import the required modules.
End of explanation
class SimpleCNN(nn.Module):
def __init__(self, n_features=3, kernel_size=3, n_filters=32, num_classes=10, conv2d=nn.Conv2d):
super().__init__()
self.conv1 = conv2d(n_features, n_filters, kernel_size=kernel_size)
self.conv2 = conv2d(n_filters, n_filters*2, kernel_size=kernel_size)
self.conv3 = conv2d(n_filters*2, n_filters*4, kernel_size=kernel_size)
self.fc1 = nn.Linear(2048, num_classes)
def forward(self, x):
y = nn.ReLU()(self.conv1(x))
y = nn.MaxPool2d(kernel_size=2)(y)
y = nn.ReLU()(self.conv2(y))
y = nn.MaxPool2d(kernel_size=2)(y)
y = nn.ReLU()(self.conv3(y))
y = rearrange(y, 'b c h w -> b (c h w)')
y = self.fc1(y)
return y
# we dont need to compute softmax since it is already
# built into the CE loss function in PyTorch
#return F.log_softmax(y, dim=1)
# use this to get the correct input shape for fc1. In this case,
# comment out y=self.fc1(y) and run the code below.
#model = SimpleCNN()
#data = torch.Tensor(1, 3, 32, 32)
#y = model(data)
#print("Y.shape:", y.shape)
Explanation: CNN using PyTorch nn.Conv2D
In this example, we use nn.Conv2D to create a 3-layer CNN model. Note the following:
1. The first layer number of input features is equal to the number of input RGB channels (3).
2. The output of the first layer is equal to the number of input features of the second layer.
3. The same matching for the second and third (last) layer.
4. We use nn.MaxPool2d to reduce the output feature map size.
5. At the same, we increase the number of feature maps after every layer.
6. We use nn.ReLU to activate the output of the layer.
7. For the last linear layer nn.Linear, the number of input features has to be supplied manually. Below in comment is a simple script that can be used to calculate the number of input features.
Ideas for experimentation:
1. Try other kernel sizes
2. Try deeper models
3. Try different activation functions
4. Try applying skip connections
End of explanation
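A hedged side calculation (not in the original): with the default kernel_size=3 and no padding, each convolution shrinks the 32 x 32 input by 2 and each max-pool halves it, which is where the 2048 input features of fc1 come from.
h = 32
h = (h - 3 + 1) // 2      # conv1 (3x3, no padding) then 2x2 max-pool -> 15
h = (h - 3 + 1) // 2      # conv2 then 2x2 max-pool -> 6
h = h - 3 + 1             # conv3, no pooling afterwards -> 4
print(h * h * 128)        # 4 * 4 * 128 = 2048 input features for fc1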
# load a sample image from the filesystem
img = image.imread("wonder_cat.jpg") / 255.0
#img = image.imread("aki_dog.jpg") / 255.0
print("Original Image shape:", img.shape)
# split the image into p1 x p2 patches.
# a kernel of size k, number of filters is 3, number of input filters is 3 (RGB)
k = 3
n_filters = 3
n_features = 3
kernel = np.ones((n_features * k * k, n_filters))
# kernel = rearrange(kernel, 'b c h w -> (c h w) b')
img = img[::,::,:]
#y = []
wk = k * (img.shape[0] // k)
hk = k * (img.shape[1] // k)
wf = img.shape[0] % k
hf = img.shape[1] % k
print("Image shape:", img.shape)
# Tensor z will be used to store output of convolution
z = np.ones((img.shape[0]-k+1, img.shape[1]-k+1, img.shape[2]))
print("Z shape:", z.shape)
for i in range(k):
hoff = i if hf >= i else (-k + i)
for j in range(k):
woff = j if wf >= j else (-k + j)
x = img[i: hk + hoff:, j: wk + woff:, :]
x = rearrange(x, "(h p1) (w p2) c -> h w (p1 p2 c)", p1=k, p2=k)
x = x @ kernel
# for testing like pooling
# Note: (p1 h) (p2 w) is wrong
#x = rearrange(x, "(h p1) (w p2) c -> h w p1 p2 c", p1=k, p2=k)
#x = reduce(x, "h w p1 p2 c -> h w c", 'mean')
z[i::k,j::k,:] = x
plt.imshow(img)
plt.axis('off')
plt.show()
print("max of z: ", np.max(z))
print("min of z: ", np.min(z))
z = z / np.max(z)
plt.imshow(z)
plt.axis('off')
plt.show()
class TensorConv2d(nn.Module):
def __init__(self, n_features, n_filters, kernel_size):
super().__init__()
self.n_features = n_features
self.kernel_size = kernel_size
self.n_filters = n_filters
self.kernel = nn.Parameter(torch.zeros((n_features * kernel_size * kernel_size, n_filters)))
self.bias = nn.Parameter(torch.zeros(n_filters))
self.reset_parameters()
def reset_parameters(self):
nn.init.constant_(self.bias, 0)
nn.init.kaiming_uniform_(self.kernel, a=math.sqrt(5))
def forward(self, x):
k = self.kernel_size
# make sure that kernel and bias are in the same device as x
if self.kernel.device != x.device:
self.kernel.to(x.device)
self.bias.to(x.device)
# batch, height, width
b = x.shape[0]
h = x.shape[2]
w = x.shape[3]
# making sure the feature map to be convolved is of the right size
# and we dont go past beyond the the feature map boundary
wk = k * (w // k)
hk = k * (h // k)
wf = w % k
hf = h % k
# Tensor Level Convolution
# Basic idea: (Repeat kernel_size times per row and per col)
# 1) convert an image into patches
# 2) perform convolution on each patch which is equivalent to
# - dot product of each patch with the kernel plus bias term
# 4) move 1 feature point along the horizontal axis (to be done kernel_size times)
# 5) go to 1)
# 6) move 1 feature point along the vertical axis (to be done kernel_size times)
# 7) go to 1)
# Tensor z contains the output of the convolution
# make sure tensor z is the correct device as x
z = torch.empty((b, self.n_filters, h-k+1, w-k+1)).to(x.device)
for i in range(k):
# row offset
# we need to perform offset k times
hoff = i if hf >= i else (-k + i)
for j in range(k):
# column offset
# we need to perform offset k times
woff = j if wf >= j else (-k + j)
# shift i row and j col
y = x[:, :, i: hk + hoff:, j: wk + woff:]
# convert to patches (p1 p2 c)
y = rearrange(y, "b c (h p1) (w p2) -> b h w (p1 p2 c)", p1=k, p2=k)
# dot product plus bias term
y = y @ self.kernel + self.bias
# sparse feature map: channel first
y = rearrange(y, 'b h w c -> b c h w')
# assign the feature map to the correct position in the output tensor
z[:,:,i::k,j::k] = y
return z
Explanation: Scratch pad idea on how to do convolution
In the code below, the idea of convolution is tested on an image.
A 3 x 3 kernel is used with all elements set to 1. If this kernel is convolved with an RGB image, the result should look like a grayscale image, since every output channel receives the same value: the sum over the 3 x 3 neighbourhood of all RGB channels.
End of explanation
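A small hedged sanity check (not in the original): the tensor-level layer should produce the same output shape as PyTorch's built-in nn.Conv2d for the same settings.
x_check = torch.randn(1, 3, 32, 32)
print(TensorConv2d(3, 8, 3)(x_check).shape)   # torch.Size([1, 8, 30, 30])
print(nn.Conv2d(3, 8, 3)(x_check).shape)      # torch.Size([1, 8, 30, 30])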
class LitCIFAR10Model(LightningModule):
def __init__(self, num_classes=10, lr=0.001, batch_size=64,
num_workers=4, max_epochs=30,
model=SimpleCNN, conv2d=nn.Conv2d):
super().__init__()
self.save_hyperparameters()
self.model = model(num_classes=num_classes, conv2d=conv2d)
self.loss = nn.CrossEntropyLoss()
def forward(self, x):
return self.model(x)
# this is called during fit()
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
loss = self.loss(y_hat, y)
return {"loss": loss}
# calls to self.log() are recorded in wandb
def training_epoch_end(self, outputs):
avg_loss = torch.stack([x["loss"] for x in outputs]).mean()
self.log("train_loss", avg_loss, on_epoch=True)
# this is called at the end of an epoch
def test_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
loss = self.loss(y_hat, y)
acc = accuracy(y_hat, y) * 100.
# we use y_hat to display predictions during callback
return {"y_hat": y_hat, "test_loss": loss, "test_acc": acc}
# this is called at the end of all epochs
def test_epoch_end(self, outputs):
avg_loss = torch.stack([x["test_loss"] for x in outputs]).mean()
avg_acc = torch.stack([x["test_acc"] for x in outputs]).mean()
self.log("test_loss", avg_loss, on_epoch=True, prog_bar=True)
self.log("test_acc", avg_acc, on_epoch=True, prog_bar=True)
# validation is the same as test
def validation_step(self, batch, batch_idx):
return self.test_step(batch, batch_idx)
def validation_epoch_end(self, outputs):
return self.test_epoch_end(outputs)
# we use Adam optimizer
def configure_optimizers(self):
optimizer = Adam(self.parameters(), lr=self.hparams.lr)
# this decays the learning rate to 0 after max_epochs using cosine annealing
scheduler = CosineAnnealingLR(optimizer, T_max=self.hparams.max_epochs)
return [optimizer], [scheduler]
# this is called after model instatiation to initiliaze the datasets and dataloaders
def setup(self, stage=None):
self.train_dataloader()
self.test_dataloader()
# build train and test dataloaders using MNIST dataset
# we use simple ToTensor transform
def train_dataloader(self):
return torch.utils.data.DataLoader(
torchvision.datasets.CIFAR10(
"./data", train=True, download=True,
transform=torchvision.transforms.ToTensor()
),
batch_size=self.hparams.batch_size,
shuffle=True,
num_workers=self.hparams.num_workers,
pin_memory=True,
)
def test_dataloader(self):
return torch.utils.data.DataLoader(
torchvision.datasets.CIFAR10(
"./data", train=False, download=True,
transform=torchvision.transforms.ToTensor()
),
batch_size=self.hparams.batch_size,
shuffle=False,
num_workers=self.hparams.num_workers,
pin_memory=True,
)
def val_dataloader(self):
return self.test_dataloader()
Explanation: PyTorch Lightning Module for CNN
This is the PL module so we can easily change the implementation of the CNN and compare the results. More detailed results can be found on the wandb.ai page.
Using the model parameter, we can easily switch between different model implementations. We also benchmark the results against a ResNet18 model.
End of explanation
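A hedged usage sketch (not in the original): the same LightningModule can be built around the tensor-level convolution simply by passing it in.
model_tensor = LitCIFAR10Model(model=SimpleCNN, conv2d=TensorConv2d)
print(model_tensor.model)   # a SimpleCNN whose convolution layers are TensorConv2d instances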
def get_args():
parser = ArgumentParser(description="PyTorch Lightning CIFAR10 Example")
parser.add_argument("--max-epochs", type=int, default=30, help="num epochs")
parser.add_argument("--batch-size", type=int, default=64, help="batch size")
parser.add_argument("--lr", type=float, default=0.001, help="learning rate")
parser.add_argument("--num-classes", type=int, default=10, help="num classes")
parser.add_argument("--devices", default=1)
parser.add_argument("--accelerator", default='gpu')
parser.add_argument("--num-workers", type=int, default=4, help="num workers")
#parser.add_argument("--model", default=torchvision.models.resnet18)
parser.add_argument("--model", default=SimpleCNN)
parser.add_argument("--conv2d", default=nn.Conv2d)
#parser.add_argument("--conv2d", default=TensorConv2d)
args = parser.parse_args("")
return args
Explanation: Arguments
Please change the --model argument to switch between the different models to be used as the CIFAR10 classifier.
The argument --conv2d can be used to switch between the two implementations of the 2D convolutional layer.
End of explanation
class WandbCallback(Callback):
def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
# process first 10 images of the first batch
if batch_idx == 0:
label_human = ["airplane", "automobile", "bird", "cat",
"deer", "dog", "frog", "horse", "ship", "truck"]
n = 10
x, y = batch
outputs = outputs["y_hat"]
outputs = torch.argmax(outputs, dim=1)
# log image, ground truth and prediction on wandb table
columns = ['image', 'ground truth', 'prediction']
data = [[wandb.Image(x_i), label_human[y_i], label_human[y_pred]] for x_i, y_i, y_pred in list(
zip(x[:n], y[:n], outputs[:n]))]
wandb_logger.log_table(
key=pl_module.model.__class__.__name__,
columns=columns,
data=data)
Explanation: Weights and Biases Callback
The callback logs train and validation metrics to wandb. It also logs sample predictions. This is similar to our WandbCallback example for MNIST.
End of explanation
if __name__ == "__main__":
args = get_args()
model = LitCIFAR10Model(num_classes=args.num_classes,
lr=args.lr, batch_size=args.batch_size,
num_workers=args.num_workers,
model=args.model, conv2d=args.conv2d)
model.setup()
# printing the model is useful for debugging
print(model)
print(model.model.__class__.__name__)
# wandb is a great way to debug and visualize this model
#wandb_logger = WandbLogger(project="cnn-cifar")
start_time = time.time()
trainer = Trainer(accelerator=args.accelerator,
devices=args.devices,
max_epochs=args.max_epochs,)
#logger=wandb_logger,
#callbacks=[WandbCallback()])
trainer.fit(model)
trainer.test(model)
elapsed_time = time.time() - start_time
print("Elapsed time: {}".format(elapsed_time))
#wandb.finish()
Explanation: Training and Validation of Different Models
The validation accuracy of our SimpleCNN is ~73%.
Meanwhile, the ResNet18 model reaches ~78% accuracy. Consider that SimpleCNN uses 113k parameters while ResNet18 uses 11.2M parameters, so SimpleCNN is very efficient. Recall that our SimpleMLP has an accuracy of only ~53%.
End of explanation |
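A hedged check of the parameter counts quoted above (not part of the original notebook):
n_params = sum(p.numel() for p in SimpleCNN().parameters())
print(f"SimpleCNN parameters: {n_params:,}")   # ~113.7k, consistent with the ~113k quoted above
resnet = torchvision.models.resnet18(num_classes=10)
print(f"ResNet18 parameters: {sum(p.numel() for p in resnet.parameters()):,}")   # ~11.2M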
557 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature Extraction and Preprocessing
Step1: DictVectorizer
Step2: CountVectorizer
Step3: Stop Word Filtering
Step4: Stemming and Lemmatization
Lemmatization is the process of determining the lemma, or the morphological root, of an inflected word based on its context. Lemmas are the base forms of words that are used to key the word in a dictionary.
Stemming has a similar goal to lemmatization, but it does not attempt to produce the morphological roots of words. Instead, stemming removes all patterns of characters that appear to be affixes, resulting in a token that is not necessarily a valid word.
Lemmatization frequently requires a lexical resource, like WordNet, and the word's part of speech. Stemming
algorithms frequently use rules instead of lexical resources to produce stems and can
operate on any token, even without its context.
Step5: As we can see, both sentences have the same meaning, but their feature vectors have no elements in common. Let's apply lexical analysis to the data
Step6: The Porter stemmer cannot consider the inflected form's part of speech and returns gather for both documents
Step7: Extending bag-of-words with TF-IDF weights
It is intuitive that the frequency with which a word appears in a document could indicate the extent to which a document pertains to that word. A long document that contains one occurrence of a word may discuss an entirely different topic than a document that contains many occurrences of the same word. In this section, we will create feature vectors that encode the frequencies of words, and discuss strategies to mitigate two problems caused by encoding term frequencies.
Instead of using a binary value for each element in the feature vector, we will now use an integer that represents the number of times that the words appeared in the document.
Step8: Data Standardization | Python Code:
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, HashingVectorizer
from sklearn.metrics.pairwise import euclidean_distances
from sklearn import preprocessing
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.stem import PorterStemmer
from nltk import word_tokenize
from nltk import pos_tag
import numpy as np
Explanation: Feature Extraction and Preprocessing
End of explanation
onehot_encoder = DictVectorizer()
instances = [
{'city': 'New York'},
{'city': 'San Francisco'},
{'city': 'Chapel Hill'} ]
print (onehot_encoder.fit_transform(instances).toarray())
Explanation: DictVectorizer
End of explanation
corpus = [
'UNC played Duke in basketball',
'Duke lost the basketball game'
]
vectorizer = CountVectorizer()
print (vectorizer.fit_transform(corpus).todense())
print (vectorizer.vocabulary_)
# adding one more sentence in corpus
corpus = [
'UNC played Duke in basketball',
'Duke lost the basketball game',
'This is Atul Singh'
]
vectorizer = CountVectorizer()
print (vectorizer.fit_transform(corpus).todense())
print (vectorizer.vocabulary_)
# checking the euclidean distance
# converting sentence into CountVectorizer
counts = vectorizer.fit_transform(corpus).todense()
print("1 & 2", euclidean_distances(counts[0], counts[1]))
print("2 & 3", euclidean_distances(counts[1], counts[2]))
print("1 & 3", euclidean_distances(counts[0], counts[2]))
Explanation: CountVectorizer
End of explanation
vectorizer = CountVectorizer(stop_words='english') # the stop_words option removes common English stop words from the corpus
counts = vectorizer.fit_transform(corpus).todense()
print (counts)
print (vectorizer.vocabulary_)
print("1 & 2", euclidean_distances(counts[0], counts[1]))
print("2 & 3", euclidean_distances(counts[1], counts[2]))
print("1 & 3", euclidean_distances(counts[0], counts[2]))
Explanation: Stop Word Filtering
End of explanation
corpus = [
'He ate the sandwiches',
'Every sandwich was eaten by him'
]
vectorizer = CountVectorizer(stop_words='english') # the stop_words option removes common English stop words from the corpus
print (vectorizer.fit_transform(corpus).todense())
print (vectorizer.vocabulary_)
Explanation: Stemming and Lemmatization
Lemmatization is the process of determining the lemma, or the morphological root, of an inflected word based on its context. Lemmas are the base forms of words that are used to key the word in a dictionary.
Stemming has a similar goal to lemmatization, but it does not attempt to produce the morphological roots of words. Instead, stemming removes all patterns of characters that appear to be affixes, resulting in a token that is not necessarily a valid word.
Lemmatization frequently requires a lexical resource, like WordNet, and the word's part of speech. Stemming
algorithms frequently use rules instead of lexical resources to produce stems and can
operate on any token, even without its context.
End of explanation
lemmatizer = WordNetLemmatizer()
print (lemmatizer.lemmatize('gathering', 'v'))
print (lemmatizer.lemmatize('gathering', 'n'))
Explanation: As we can see, both sentences have the same meaning, but their feature vectors have no elements in common. Let's apply lexical analysis to the data
End of explanation
stemmer = PorterStemmer()
print (stemmer.stem('gathering'))
wordnet_tags = ['n', 'v']
corpus = [
'He ate the sandwiches',
'Every sandwich was eaten by him'
]
stemmer = PorterStemmer()
print ('Stemmed:', [[stemmer.stem(token) for token in word_tokenize(document)] for document in corpus])
def lemmatize(token, tag):
if tag[0].lower() in ['n', 'v']:
return lemmatizer.lemmatize(token, tag[0].lower())
return token
lemmatizer = WordNetLemmatizer()
tagged_corpus = [pos_tag(word_tokenize(document)) for document in corpus]
print ('Lemmatized:', [[lemmatize(token, tag) for token, tag in document] for document in tagged_corpus])
Explanation: The Porter stemmer cannot consider the inflected form's part of speech and returns gather for both documents:
End of explanation
corpus = ['The dog ate a sandwich, the wizard transfigured a sandwich, and I ate a sandwich']
vectorizer = CountVectorizer(stop_words='english')
print (vectorizer.fit_transform(corpus).todense())
print(vectorizer.vocabulary_)
corpus = ['The dog ate a sandwich and I ate a sandwich',
'The wizard transfigured a sandwich']
vectorizer = TfidfVectorizer(stop_words='english')
print (vectorizer.fit_transform(corpus).todense())
print(vectorizer.vocabulary_)
corpus = ['The dog ate a sandwich and I ate a sandwich',
'The wizard transfigured a sandwich']
vectorizer = HashingVectorizer(n_features=6)
print (vectorizer.fit_transform(corpus).todense())
Explanation: Extending bag-of-words with TF-IDF weights
It is intuitive that the frequency with which a word appears in a document could indicate the extent to which a document pertains to that word. A long document that contains one occurrence of a word may discuss an entirely different topic than a document that contains many occurrences of the same word. In this section, we will create feature vectors that encode the frequencies of words, and discuss strategies to mitigate two problems caused by encoding term frequencies.
Instead of using a binary value for each element in the feature vector, we will now use an integer that represents the number of times that the words appeared in the document.
End of explanation
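A hedged way (not in the original) to inspect the learned weighting: the fitted TfidfVectorizer exposes the per-term idf values, so a term that appears in every document, like 'sandwich', receives the smallest weight.
tfidf = TfidfVectorizer(stop_words='english')
tfidf.fit(corpus)
print(tfidf.vocabulary_)
print(tfidf.idf_)   # aligned with the vocabulary indices; 'sandwich' gets the smallest idf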
X = [[1,2,3],
[4,5,1],
[3,6,2]
]
print(preprocessing.scale(X))
x1 = preprocessing.StandardScaler()
print(x1)
print(x1.fit_transform(X))
Explanation: Data Standardization
End of explanation |
558 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this example you will learn how to make use of the periodicity of the electrodes.
As seen in TB 4 the transmission calculation takes a considerable amount of time. In this example we will redo the same calculation, but speed it up (no approximations made).
A large computational effort is made on calculating the self-energies which basically is inverting, multiplying and adding matrices, roughly 10-20 times per $k$-point, per energy point, per electrode.
For systems with large electrodes compared to the full device, this becomes more demanding than calculating the Green function for the system.
When there is periodicity in electrodes along the transverse semi-infinite direction (not along the transport direction) one can utilize Bloch's theorem to reduce the computational cost of calculating the self-energy.
In ANY calculation if you have periodicity, please USE it.
In this example you should scour the tbtrans manual on how to enable Bloch's
theorem, and once enabled it should be roughly 3 - 4 times as fast, something that is non-negligeble for large systems.
Step1: Note the below lines are differing from the same lines in TB 4, i.e. we save the electrode electronic structure without extending it 25 times.
Step2: See TB 2 for details on why we choose repeat/tile on the Hamiltonian object and not on the geometry, prior to construction.
Step3: Exercises
Instead of analysing the same thing as in TB 4 you should perform the following actions to explore the available data-analysis capabilities of TBtrans. Please note the difference in run-time between example 04 and this example. Always use Bloch's theorem when applicable!
HINT please copy as much as you like from example 04 to simplify the following tasks.
Read in the resulting file into a variable called tbt.
In the following we will concentrate on only looking at $\Gamma$-point related quantities. I.e. all quantities should only be plotted for this $k$-point.
To extract information for one or more subset of points you should look into the function
help(tbt.kindex)
which may be used to find a resulting $k$-point index in the result file.
Plot the transmission ($\Gamma$-point only). To extract a subset $k$-point you should read the documentation for the functions (hint | Python Code:
graphene = sisl.geom.graphene(orthogonal=True)
Explanation: In this example you will learn how to make use of the periodicity of the electrodes.
As seen in TB 4 the transmission calculation takes a considerable amount of time. In this example we will redo the same calculation, but speed it up (no approximations made).
A large computational effort is made on calculating the self-energies which basically is inverting, multiplying and adding matrices, roughly 10-20 times per $k$-point, per energy point, per electrode.
For systems with large electrodes compared to the full device, this becomes more demanding than calculating the Green function for the system.
When there is periodicity in electrodes along the transverse semi-infinite direction (not along the transport direction) one can utilize Bloch's theorem to reduce the computational cost of calculating the self-energy.
In ANY calculation if you have periodicity, please USE it.
In this example you should scour the tbtrans manual on how to enable Bloch's
theorem, and once enabled it should be roughly 3 - 4 times as fast, something that is non-negligible for large systems.
End of explanation
H_elec = sisl.Hamiltonian(graphene)
H_elec.construct(([0.1, 1.43], [0., -2.7]))
H_elec.write('ELEC.nc')
Explanation: Note that the lines below differ from the same lines in TB 4, i.e. we save the electrode electronic structure without extending it 25 times.
End of explanation
H = H_elec.repeat(25, axis=0).tile(15, axis=1)
H = H.remove(
H.geometry.close(
H.geometry.center(what='cell'), R=10.)
)
dangling = [ia for ia in H.geometry.close(H.geometry.center(what='cell'), R=14.)
if len(H.edges(ia)) < 3]
H = H.remove(dangling)
edge = [ia for ia in H.geometry.close(H.geometry.center(what='cell'), R=14.)
if len(H.edges(ia)) < 4]
edge = np.array(edge)
# Pretty-print the list of atoms
print(sisl.utils.list2str(edge + 1))
H.geometry.write('device.xyz')
H.write('DEVICE.nc')
Explanation: See TB 2 for details on why we choose repeat/tile on the Hamiltonian object and not on the geometry, prior to construction.
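For completeness, a rough way to see the difference yourself (a sketch, not part of the tutorial; timings are machine dependent): compare expanding the small electrode Hamiltonian against constructing a new Hamiltonian on an already expanded geometry.
import time
t0 = time.time()
H_fast = H_elec.repeat(25, axis=0).tile(15, axis=1)   # expand the small Hamiltonian
print('repeat/tile of H_elec      : {:.2f} s'.format(time.time() - t0))
t0 = time.time()
H_slow = sisl.Hamiltonian(graphene.repeat(25, 0).tile(15, 1))
H_slow.construct(([0.1, 1.43], [0., -2.7]))           # construct on the large geometry instead
print('construct on large geometry: {:.2f} s'.format(time.time() - t0))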
End of explanation
tbt = sisl.get_sile('siesta.TBT.nc')
# Easier manipulation of the geometry
geom = tbt.geometry
a_dev = tbt.a_dev # the indices where we have DOS
# Extract the DOS, per orbital (hence sum=False)
DOS = tbt.ADOS(0, sum=False)
# Normalize DOS for plotting (maximum size == 400)
# This array has *all* energy points and orbitals
DOS /= DOS.max() / 400
a_xyz = geom.xyz[a_dev, :2]
%%capture
fig = plt.figure(figsize=(12,4));
ax = plt.axes();
scatter = ax.scatter(a_xyz[:, 0], a_xyz[:, 1], 1);
ax.set_xlabel(r'$x$ [Ang]'); ax.set_ylabel(r'$y$ [Ang]');
ax.axis('equal');
# If this animation does not work, then don't spend time on it!
def animate(i):
    ax.set_title('Energy {:.3f} eV'.format(tbt.E[i]));
    scatter.set_sizes(DOS[i]);
    return scatter,
anim = animation.FuncAnimation(fig, animate, frames=len(tbt.E), interval=100, repeat=False)
HTML(anim.to_html5_video())
Explanation: Exercises
Instead of analysing the same thing as in TB 4 you should perform the following actions to explore the available data-analysis capabilities of TBtrans. Please note the difference in run-time between example 04 and this example. Always use Bloch's theorem when applicable!
HINT please copy as much as you like from example 04 to simplify the following tasks.
Read in the resulting file into a variable called tbt.
In the following we will concentrate on only looking at $\Gamma$-point related quantities. I.e. all quantities should only be plotted for this $k$-point.
To extract information for one or more subset of points you should look into the function
help(tbt.kindex)
which may be used to find a resulting $k$-point index in the result file.
Plot the transmission ($\Gamma$-point only). To extract a subset $k$-point you should read the documentation for the functions (hint: kavg is the keyword you are looking for).
Full transmission
Bulk transmission
Plot the DOS with normalization according to the number of atoms ($\Gamma$ only)
You may decide which atoms you examine.
The Green function DOS
The spectral DOS
The bulk DOS
TIME: Do the same calculation using only tiling. H_elec.tile(25, axis=0).tile(15, axis=1) instead of repeat/tile. Which of repeat or tile is faster?
Transmission
Density of states
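A possible starting point for the $\Gamma$-point tasks above (a sketch; it assumes the default two-electrode setup, the imports used elsewhere in this example, and that kavg accepts the index returned by tbt.kindex):
tbt = sisl.get_sile('siesta.TBT.nc')
ik = tbt.kindex([0, 0, 0])                        # index of the Gamma point in the result file
plt.plot(tbt.E, tbt.transmission(kavg=ik), label='T')
plt.plot(tbt.E, tbt.transmission_bulk(kavg=ik), label='T (bulk)')
plt.xlabel(r'$E - E_F$ [eV]'); plt.ylabel('Transmission'); plt.legend();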
End of explanation |
559 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyBroMo - GUI Trajectory explorer
<small><i>
This notebook is part of PyBroMo a
python-based single-molecule Brownian motion diffusion simulator
that simulates confocal smFRET
experiments. You can find the full list of notebooks in
Usage Examples.
</i></small>
Overview
This notebook implements an interactive 3-D trajectory visualizer. To visualize trajectories you need to simulate the trajectories first.
For more info see PyBroMo Homepage.
Simulation setup
Together with a few standard python libraries we import PyBroMo using the short name pbm.
All PyBroMo functions will be available as pbm.something.
Step1: Load trajectories
Step2: Plotting the emission
Step3: For simulations using radial = False (i.e. the 3D trajectories saved)
Step4: For simulations using radial = True (i.e. the z-r 2D trajectories saved) | Python Code:
%matplotlib inline
import numpy as np
import tables
import matplotlib.pyplot as plt
plt.rcParams['path.simplify_threshold'] = 1.0
import pybromo as pbm
print('Numpy version:', np.__version__)
print('Matplotlib version:', plt.matplotlib.__version__)
print('PyTables version:', tables.__version__)
print('PyBroMo version:', pbm.__version__)
Explanation: PyBroMo - GUI Trajectory explorer
<small><i>
This notebook is part of PyBroMo a
python-based single-molecule Brownian motion diffusion simulator
that simulates confocal smFRET
experiments. You can find the full list of notebooks in
Usage Examples.
</i></small>
Overview
This notebook implements an interactive 3-D trajectory visualizer. To visualize trajectories you need to simulate the trajectories first.
For more info see PyBroMo Homepage.
Simulation setup
Together with a few standard python libraries we import PyBroMo using the short name pbm.
All PyBroMo functions will be available as pbm.something.
End of explanation
#SIM_DIR = r'E:\Data\pybromo'
S = pbm.ParticlesSimulation.from_datafile('016') #, path=SIM_DIR)
Explanation: Load trajectories
End of explanation
%matplotlib qt
p = pbm.plotter.EmissionPlotter(S, duration=0.1, decimate=100, color_pop=False)
Explanation: Plotting the emission
End of explanation
p = pbm.plotter.TrackEmPlotter(S, duration=0.005, decimate=20)
Explanation: For simulations using radial = False (i.e. the 3D trajectories saved):
End of explanation
p = pbm.plotter.TrackEmPlotterR(S, duration=0.01, decimate=100)
Explanation: For simulations using radial = True (i.e. the z-r 2D trajectories saved):
End of explanation |
560 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2. Parameters of cubic equations of state (SRK, PR, RKPR)
This section presents a Python implementation for computing the parameters of cubic equations of state (SRK, PR, RKPR). The first two, SRK and PR, are classical equations of state widely used in industry and academia; they have 2 parameters (the attraction parameter $a_C$ and the repulsion parameter $b$) to describe the behaviour of substances. The RKPR equation of state, on the other hand, is a proposal for an equation with a third parameter $\delta_1$, which makes it possible to include the structural effect of the molecule of the substance whose thermodynamic behaviour is being described.
2.1 Equations of state: SRK and PR
Step1: Import the required libraries; in this case numpy and pandas, together with pyther.
Step2: This shows the simple calculation of the parameters for the pure substance 3-METHYLHEPTANE_RKPR.
Next, the same type of calculation is carried out for a series of 9 pure substances, which can easily be extended to n substances, to obtain their parameters again with the RKPR equation of state.
Step3: As can be seen, the results are organized in a DataFrame, which speeds up the manipulation of the data for a series of pure substances.
Step4: The following example uses the RKPR equation of state, but this time with the specification of the temperature and the saturated-liquid density of CARBON DIOXIDE, in order to find the value of the delta parameter that satisfies the specified saturated-liquid density. | Python Code:
import numpy as np
import pandas as pd
import pyther as pt
Explanation: 2. Parameters of cubic equations of state (SRK, PR, RKPR)
This section presents a Python implementation for computing the parameters of cubic equations of state (SRK, PR, RKPR). The first two, SRK and PR, are classical equations of state widely used in industry and academia; they have 2 parameters (the attraction parameter $a_C$ and the repulsion parameter $b$) to describe the behaviour of substances. The RKPR equation of state, on the other hand, is a proposal for an equation with a third parameter $\delta_1$, which makes it possible to include the structural effect of the molecule of the substance whose thermodynamic behaviour is being described.
2.1 Equations of state: SRK and PR
As mentioned above, the SRK and PR equations of state have an attraction parameter $a_C$ and a repulsion parameter $b$ that can be calculated from expressions relating the critical temperature $T_c$, critical pressure $P_c$, critical volume $V_c$ and acentric factor $\omega$ of a pure substance, together with the universal gas constant R.
2.1.1 Specification of the constants: $T_c$, $P_c$, $V_c$ and $\omega$
When the constants $T_c$, $P_c$, $V_c$ and $\omega$ are specified, the parameters $a_c$, $b$ and $m$ follow directly from the equations below:
| SRK equation of state parameters | PR equation of state parameters |
| ---------- | ---------- |
| $ a_c = 0.42748023 \frac{R^2 T_c^2} {P_c}$ | $ a_c = 0.45723553 \frac{R^2 T_c^2} {P_c} $ |
| $ b_c = 0.086640 \frac{ R T_c} {P_c}$ | $ b_c = 0.077796070 \frac{R T_c} {P_c} $ |
| $ m = 0.480 + 1.574 \omega - 0.175 \omega^2$ | $m = 0.37464 + 1.54226 \omega - 0.26992 \omega ^2$ |
2.1.2 Specification of the parameters: $a_c$, $b$ and $m$
Conversely, when the attraction parameter $a_C$, the repulsion parameter $b$ and $m$ are specified for a pure substance, it is straightforward to obtain the corresponding values of the constants $T_c$, $P_c$, $V_c$ and $\omega$:
$$ T_c = \frac{\omega_b a_c} {\omega_a R b} $$
$$ P_c = \frac{\omega_b R T_c} {b} $$
$$ V_c = \frac{Z_c R T_c} {P_c} $$
For $\omega$, a quadratic equation must be solved whose coefficients $c$ depend on the parameter $\delta_1$ and on $m$, and which take specific values for each equation of state:
$$ \omega = \frac{- c_2 + \sqrt{c_2^2 - 4 c_1 c_3}}{2 c_1} $$
| SRK equation of state | PR equation of state |
| ---------- | ---------- |
| $\delta_1 = 1.0$ | $\delta_1 = 1.0 + \sqrt{2.0}$ |
| $c_1 = -0.175$ | $c_1 = -0.26992$ |
| $c_2 = 1.574$ | $c_2 = 1.54226$ |
| $c_3 = 0.48 - m$ | $c_3 = 0.37464 - m$ |
2.2 The RKPR equation of state
The RKPR equation of state offers an additional possibility: the structural parameter $\delta_1$ can be used to correlate the critical compressibility factor $Z_c$ with the size of the molecule of the pure substance under study. Thus, in addition to the specifications that can be made with the SRK and PR equations of state, i.e. specifying the critical temperature $T_c$, critical pressure $P_c$, critical volume $V_c$ and acentric factor $\omega$ of a pure substance, there are 3 additional possibilities:
The first specification is a value of the critical compressibility factor $Z_c$, after which the value of the parameter $\delta_1$ that satisfies this specification is determined. The parameter $k$ is then calculated.
The second specification is a value of the parameter $\delta_1$, followed by the calculation of the parameter $k$.
The third option is to use a thermodynamic correlation to obtain a value of the saturated-liquid density of a pure substance, $\rho(T)_{sat}^{liq}$, and to pass it as a specification in order to find values of the parameters $\delta_1$ and $k$ that reproduce the imposed saturated-liquid density.
<img src="\rkpr_paramters_latex.png">
Figure 1. Conceptual diagram of the RKPR parameter calculation
In Figure 1, the cases Mode = 1, 2 and 3 correspond to specifying the constants ($T_c$, $P_c$, $\omega$) plus one of the variables ($V_c$, $\delta_1$, $\rho(T)_{sat}^{liq}$), while Mode = 4 refers to specifying the parameters ($a_c$, $b$, $k$, $\delta_1$) and obtaining the values of the constants ($T_c$, $P_c$, $V_c$, $\omega$). This last calculation is direct, as for the SRK and PR equations, so the brief explanation that follows focuses on the first 3 options.
2.2.1 Specification of the parameter $\delta_1$
The first specification is to give a value of the parameter $\delta_1$; with this value the compressibility factor $Z_c$ is calculated from the following equations:
$$d_1 = (1 + \delta_1 ^2) / (1 + \delta_1)$$
$$ y = 1 + \left(2 (1 + \delta_1)\right)^{1/3} + \left(\frac{4} {1 + \delta_1}\right)^{1/3} $$
$$ \omega_a = \frac{(3 y ^2 + 3 y d_1 + d_1 ^ 2 + d_1 - 1)} {(3 y + d_1 - 1) ^ 2} $$
$$ \omega_b = \frac{1} {3 y + d_1 - 1} $$
$$ Z_c = \frac{y} {3 y + d_1 - 1} $$
The critical compressibility factor $Z_c$ is related to the constants ($T_c$, $P_c$, $V_c$) by
$$ Z_c = \frac{P_c V_c}{R T_c}$$
which is then used for the subsequent calculation of the parameter $k$.
2.2.2 Specification of the constants: $T_c$, $P_c$, $V_c$ and $\omega$
The second specification is a value of the critical compressibility factor $Z_c$, determined from the constants ($T_c$, $P_c$, $V_c$):
$$ Z_c = \frac{P_c V_c}{R T_c}$$
after which the corresponding value of the parameter d1 that satisfies this specification is determined, followed by the calculation of the parameter k.
2.2.3 Specification of a saturated-liquid density value $\rho(T)_{sat}^{liq}$
The third option is the specification of a saturated-liquid density value at a given temperature. In this case a value of d1, and the corresponding value of the parameter k, must be found such that the imposed saturated-liquid density is reproduced. The class Thermodynamic_correlations() can be used to obtain a saturated-liquid density value for a pure substance at a given temperature and then pass this value as a specification when obtaining the RKPR parameters.
The diagram in Figure 1 shows as input variables the constants Tc, Pc, w and alfa, where alfa can be a specification of one of the 3 following parameters $(\delta_1, V_c, \rho(T)_{sat}^{liq})$.
The function F1 estimates a value of the parameter {d1} from a pre-established correlation for the case {alfa = Vc}.
The function F2 calculates the parameters {ac} and {b} for the corresponding value of the parameter {d1}. When the parameter {alfa=d1} is specified, the calculation of Zc, ac and b is direct and requires no iteration. When alfa = {Vc}, the value of the parameter d1 that reproduces the Zc corresponding to the previously specified Vc must be found iteratively. The case {alfa = rho(T)_sat_liq} is handled in a similar way.
2.2.4 Specification of the parameters: $a_c, b, k, \delta_1$
The following examples use the thermophysical data of the DPPR database. Here the specification is the RKPR equation of state and the critical constants of the component 3-METHYLHEPTANE.
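To make the SRK/PR relations of section 2.1.1 concrete, here is a minimal sketch that is independent of pyther (the value and units of R are an assumption and must be consistent with the units of Pc; the propane constants in the usage line are approximate):
R = 0.08314472  # L bar / (mol K)  (assumed units)

def srk_parameters(Tc, Pc, omega):
    # SRK: ac, b and m from Tc [K], Pc [bar] and the acentric factor
    ac = 0.42748023 * R ** 2 * Tc ** 2 / Pc
    b = 0.086640 * R * Tc / Pc
    m = 0.480 + 1.574 * omega - 0.175 * omega ** 2
    return ac, b, m

def pr_parameters(Tc, Pc, omega):
    # PR: ac, b and m from Tc [K], Pc [bar] and the acentric factor
    ac = 0.45723553 * R ** 2 * Tc ** 2 / Pc
    b = 0.077796070 * R * Tc / Pc
    m = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    return ac, b, m

print(pr_parameters(369.83, 42.48, 0.152))  # PROPANE: Tc [K], Pc [bar], omega (approximate values)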
2.3
End of explanation
properties_data = pt.Data_parse()
component = "3-METHYLHEPTANE"
component = "METHANE"
component = "ETHANE"
component = "PROPANE"
component = "n-HEXATRIACONTANE"
NMODEL = "RKPR"
NMODEL = "PR"
ICALC = "constants_eps"
properties_component = properties_data.selec_component(component)
pt.print_properties_component(component, properties_component)
dinputs = np.array([properties_component[1]['Tc'], properties_component[1]['Pc'],
properties_component[1]['Omega'], properties_component[1]['Vc']])
component_eos = pt.models_eos_cal(NMODEL, ICALC, dinputs)
#ac = component_eos[0]
print(component_eos)
Explanation: Import the required libraries; in this case numpy and pandas, together with pyther.
End of explanation
properties_data = pt.Data_parse()
components = ["ISOBUTANE", "CARBON DIOXIDE", 'METHANE', "ETHANE", "3-METHYLHEPTANE", "n-PENTACOSANE",
"NAPHTHALENE", "m-ETHYLTOLUENE", "2-METHYL-1-HEXENE"]
NMODEL = "RKPR"
ICALC = "constants_eps"
component_eos_list = np.zeros((len(components),4))
for index, component in enumerate(components):
    properties_component = properties_data.selec_component(component)
    pt.print_properties_component(component, properties_component)
    dinputs = np.array([properties_component[1]['Tc'], properties_component[1]['Pc'],
                        properties_component[1]['Omega'], properties_component[1]['Vc']])
    component_eos = pt.models_eos_cal(NMODEL, ICALC, dinputs)
    component_eos_list[index] = component_eos
components_table = pd.DataFrame(component_eos_list, index=components, columns=['ac', 'b', 'rm', 'del1'])
print(components_table)
Explanation: This shows the simple calculation of the parameters for the pure substance 3-METHYLHEPTANE_RKPR.
Next, the same type of calculation is carried out for a series of 9 pure substances, which can easily be extended to n substances, to obtain their parameters again with the RKPR equation of state.
End of explanation
components_table
Explanation: As can be seen, the results are organized in a DataFrame, which speeds up the manipulation of the data for a series of pure substances.
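Because the results live in a pandas DataFrame, further handling is a one-liner; for example (a sketch, and the file name is arbitrary):
components_table.sort_values('ac', ascending=False)   # rank substances by attraction parameter
components_table.to_csv('rkpr_parameters.csv')         # persist the parameter table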
End of explanation
properties_data = pt.Data_parse()
dppr_file = "PureFull.xls"
component = "CARBON DIOXIDE"
NMODEL = "RKPR"
ICALC = "density"
properties_component = properties_data.selec_component(dppr_file, component)
pt.print_properties_component(component, properties_component)
#dinputs = np.array([properties_component[1]['Tc'], properties_component[1]['Pc'],
# properties_component[1]['Omega'], properties_component[1]['Vc']])
T_especific = 270.0
RHOLSat_esp = 21.4626
# initial value of delta_1
delta_1 = 1.5
dinputs = np.array([properties_component[1]['Tc'], properties_component[1]['Pc'],
properties_component[1]['Omega'], delta_1, T_especific, RHOLSat_esp])
component_eos = pt.models_eos_cal(NMODEL, ICALC, dinputs)
print(component_eos)
Explanation: The following example uses the RKPR equation of state, but this time with the specification of the temperature and the saturated-liquid density of CARBON DIOXIDE, in order to find the value of the delta parameter that satisfies the specified saturated-liquid density.
End of explanation |
561 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean Variance - Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn Rate Tune - Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
    """
    Download file from <url>
    :param url: URL to file
    :param file: Local file path
    """
    if not os.path.isfile(file):
        print('Downloading ' + file + '...')
        urlretrieve(url, file)
        print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
    """
    Uncompress features and labels from a zip file
    :param file: The zip file to extract the data from
    """
    features = []
    labels = []

    with ZipFile(file) as zipf:
        # Progress Bar
        filenames_pbar = tqdm(zipf.namelist(), unit='files')

        # Get features and labels from all files
        for filename in filenames_pbar:
            # Check if the file is a directory
            if not filename.endswith('/'):
                with zipf.open(filename) as image_file:
                    image = Image.open(image_file)
                    image.load()
                    # Load image data as 1 dimensional array
                    # We're using float32 to save on memory space
                    feature = np.array(image, dtype=np.float32).flatten()

                # Get the letter from the filename. This is the letter of the image.
                label = os.path.split(filename)[1][0]

                features.append(feature)
                labels.append(label)
    return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
    # TODO: Implement Min-Max scaling for grayscale image data
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean Variance - Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
# features =
# labels =
# TODO: Set the weights and biases tensors
# weights =
# biases =
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
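A sketch of one way to define the four tensors described above (TensorFlow 1.x style, matching the rest of this lab):
features = tf.placeholder(tf.float32, [None, features_count])
labels = tf.placeholder(tf.float32, [None, labels_count])
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))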
End of explanation
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
# epochs =
# learning_rate =
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn Rate Tune - Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
562 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NMT-Keras tutorial
2. Creating and training a Neural Translation Model
Now, we'll create and train a Neural Machine Translation (NMT) model. Since there is a significant number of hyperparameters, we'll use the default ones, specified in the config.py file. Note that almost every hardcoded parameter is automatically set from config if we run main.py.
We'll create the so-called 'GroundHogModel'. It is defined in the model_zoo.py file. See the neural_machine_translation.pdf for an overview of such system.
If you followed the notebook 1_dataset_tutorial.ipynb, you should have a dataset instance. Otherwise, you should follow that notebook first.
First, we'll make some imports, load the default parameters and load the dataset.
Step1: Since the number of words in the dataset may be unknown beforehand, we must update the params information according to the dataset instance
Step2: Now, we create a TranslationModel instance
Step3: Now, we must define the inputs and outputs mapping from our Dataset instance to our model
Step4: We can add some callbacks for controlling the training (e.g. Sampling each N updates, early stop, learning rate annealing...). For instance, let's build an Early-Stop callback. After each 2 epochs, it will compute the 'coco' scores on the development set. If the metric 'Bleu_4' doesn't improve during more than 5 checkings, it will stop. We need to pass some variables to the callback (in the extra_vars dictionary)
Step5: Now we are almost ready to train. We set up some training parameters...
Step6: And train! | Python Code:
from config import load_parameters
from model_zoo import TranslationModel
import utils
from keras_wrapper.cnn_model import loadModel
from keras_wrapper.dataset import loadDataset
from keras_wrapper.extra.callbacks import PrintPerformanceMetricOnEpochEndOrEachNUpdates
params = load_parameters()
dataset = loadDataset('datasets/Dataset_tutorial_dataset.pkl')
Explanation: NMT-Keras tutorial
2. Creating and training a Neural Translation Model
Now, we'll create and train a Neural Machine Translation (NMT) model. Since there is a significant number of hyperparameters, we'll use the default ones, specified in the config.py file. Note that almost every hardcoded parameter is automatically set from config if we run main.py.
We'll create the so-called 'GroundHogModel'. It is defined in the model_zoo.py file. See the neural_machine_translation.pdf for an overview of such system.
If you followed the notebook 1_dataset_tutorial.ipynb, you should have a dataset instance. Otherwise, you should follow that notebook first.
First, we'll make some imports, load the default parameters and load the dataset.
End of explanation
params['INPUT_VOCABULARY_SIZE'] = dataset.vocabulary_len['source_text']
params['OUTPUT_VOCABULARY_SIZE'] = dataset.vocabulary_len['target_text']
Explanation: Since the number of words in the dataset may be unknown beforehand, we must update the params information according to the dataset instance:
End of explanation
nmt_model = TranslationModel(params,
model_type='GroundHogModel',
model_name='tutorial_model',
vocabularies=dataset.vocabulary,
store_path='trained_models/tutorial_model/',
verbose=True)
Explanation: Now, we create a TranslationModel instance:
End of explanation
inputMapping = dict()
for i, id_in in enumerate(params['INPUTS_IDS_DATASET']):
    pos_source = dataset.ids_inputs.index(id_in)
    id_dest = nmt_model.ids_inputs[i]
    inputMapping[id_dest] = pos_source
nmt_model.setInputsMapping(inputMapping)

outputMapping = dict()
for i, id_out in enumerate(params['OUTPUTS_IDS_DATASET']):
    pos_target = dataset.ids_outputs.index(id_out)
    id_dest = nmt_model.ids_outputs[i]
    outputMapping[id_dest] = pos_target
nmt_model.setOutputsMapping(outputMapping)
Explanation: Now, we must define the inputs and outputs mapping from our Dataset instance to our model
End of explanation
extra_vars = {'language': 'en',
'n_parallel_loaders': 8,
'tokenize_f': eval('dataset.' + 'tokenize_none'),
'beam_size': 12,
'maxlen': 50,
'model_inputs': ['source_text', 'state_below'],
'model_outputs': ['target_text'],
'dataset_inputs': ['source_text', 'state_below'],
'dataset_outputs': ['target_text'],
'normalize': True,
'alpha_factor': 0.6,
'val': {'references': dataset.extra_variables['val']['target_text']}
}
vocab = dataset.vocabulary['target_text']['idx2words']
callbacks = []
callbacks.append(PrintPerformanceMetricOnEpochEndOrEachNUpdates(nmt_model,
dataset,
gt_id='target_text',
metric_name=['coco'],
set_name=['val'],
batch_size=50,
each_n_epochs=2,
extra_vars=extra_vars,
reload_epoch=0,
is_text=True,
index2word_y=vocab,
sampling_type='max_likelihood',
beam_search=True,
save_path=nmt_model.model_path,
start_eval_on_epoch=0,
write_samples=True,
write_type='list',
verbose=True))
Explanation: We can add some callbacks for controlling the training (e.g. sampling each N updates, early stopping, learning-rate annealing...). For instance, let's build an early-stop callback. Every 2 epochs it will compute the 'coco' scores on the development set, and if the metric 'Bleu_4' does not improve for more than 5 evaluations in a row, training will stop. We need to pass some variables to the callback (in the extra_vars dictionary):
End of explanation
training_params = {'n_epochs': 100,
'batch_size': 40,
'maxlen': 30,
'epochs_for_save': 1,
'verbose': 0,
'eval_on_sets': [],
'n_parallel_loaders': 8,
'extra_callbacks': callbacks,
'reload_epoch': 0,
'epoch_offset': 0}
Explanation: Now we are almost ready to train. We set up some training parameters...
End of explanation
nmt_model.trainNet(dataset, training_params)
Explanation: And train!
End of explanation |
563 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Morph surface source estimate
This example demonstrates how to morph an individual subject's
Step1: Setup paths
Step2: Load example data
Step3: Setting up SourceMorph for SourceEstimate
In MNE, surface source estimates represent the source space simply as
lists of vertices (see tut-source-estimate-class).
This list can either be obtained from
Step4: We also need to specify the set of vertices to morph to. This can be done
using the spacing parameter, but for consistency it's better to pass the
src_to parameter.
<div class="alert alert-info"><h4>Note</h4><p>Since the default values of
Step5: Apply morph to (Vector) SourceEstimate
The morph will be applied to the source estimate data, by giving it as the
first argument to the morph we computed above.
Step6: Plot results
Step7: As inflated surface
Step8: Reading and writing SourceMorph from and to disk
An instance of SourceMorph can be saved, by calling | Python Code:
# Author: Tommy Clausner <[email protected]>
#
# License: BSD (3-clause)
import os
import os.path as op
import mne
from mne.datasets import sample
print(__doc__)
Explanation: Morph surface source estimate
This example demonstrates how to morph an individual subject's
:class:mne.SourceEstimate to a common reference space. We achieve this using
:class:mne.SourceMorph. Pre-computed data will be morphed based on
a spherical representation of the cortex computed using the spherical
registration of FreeSurfer <tut-freesurfer-mne>
(https://surfer.nmr.mgh.harvard.edu/fswiki/SurfaceRegAndTemplates) [1]_. This
transform will be used to morph the surface vertices of the subject towards the
reference vertices. Here we will use 'fsaverage' as a reference space (see
https://surfer.nmr.mgh.harvard.edu/fswiki/FsAverage).
The transformation will be applied to the surface source estimate. A plot
depicting the successful morph will be created for the spherical and inflated
surface representation of 'fsaverage', overlaid with the morphed surface
source estimate.
References
.. [1] Greve D. N., Van der Haegen L., Cai Q., Stufflebeam S., Sabuncu M.
R., Fischl B., Brysbaert M.
A Surface-based Analysis of Language Lateralization and Cortical
Asymmetry. Journal of Cognitive Neuroscience 25(9), 1477-1492, 2013.
<div class="alert alert-info"><h4>Note</h4><p>For background information about morphing see `ch_morph`.</p></div>
End of explanation
data_path = sample.data_path()
sample_dir = op.join(data_path, 'MEG', 'sample')
subjects_dir = op.join(data_path, 'subjects')
fname_src = op.join(subjects_dir, 'sample', 'bem', 'sample-oct-6-src.fif')
fname_fwd = op.join(sample_dir, 'sample_audvis-meg-oct-6-fwd.fif')
fname_fsaverage_src = os.path.join(subjects_dir, 'fsaverage', 'bem',
'fsaverage-ico-5-src.fif')
fname_stc = os.path.join(sample_dir, 'sample_audvis-meg')
Explanation: Setup paths
End of explanation
# Read stc from file
stc = mne.read_source_estimate(fname_stc, subject='sample')
Explanation: Load example data
End of explanation
src_orig = mne.read_source_spaces(fname_src)
print(src_orig) # n_used=4098, 4098
fwd = mne.read_forward_solution(fname_fwd)
print(fwd['src']) # n_used=3732, 3766
print([len(v) for v in stc.vertices])
Explanation: Setting up SourceMorph for SourceEstimate
In MNE, surface source estimates represent the source space simply as
lists of vertices (see tut-source-estimate-class).
This list can either be obtained from :class:mne.SourceSpaces (src) or from
the stc itself. If you use the source space, be sure to use the
source space from the forward or inverse operator, because vertices
can be excluded during forward computation due to proximity to the BEM
inner skull surface:
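Accordingly, if you build the morph from a source space instead of from the stc, pass the one stored in the forward solution (a quick sketch of that variant):
morph_from_fwd = mne.compute_source_morph(fwd['src'], subject_from='sample',
                                          subjects_dir=subjects_dir)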
End of explanation
src_to = mne.read_source_spaces(fname_fsaverage_src)
print(src_to[0]['vertno']) # special, np.arange(10242)
morph = mne.compute_source_morph(stc, subject_from='sample',
subject_to='fsaverage', src_to=src_to,
subjects_dir=subjects_dir)
Explanation: We also need to specify the set of vertices to morph to. This can be done
using the spacing parameter, but for consistency it's better to pass the
src_to parameter.
<div class="alert alert-info"><h4>Note</h4><p>Since the default values of :func:`mne.compute_source_morph` are
``spacing=5, subject_to='fsaverage'``, in this example
we could actually omit the ``src_to`` and ``subject_to`` arguments
below. The ico-5 ``fsaverage`` source space contains the
special values ``[np.arange(10242)] * 2``, but in general this will
not be true for other spacings or other subjects. Thus it is recommended
to always pass the destination ``src`` for consistency.</p></div>
Initialize SourceMorph for SourceEstimate
End of explanation
stc_fsaverage = morph.apply(stc)
Explanation: Apply morph to (Vector) SourceEstimate
The morph will be applied to the source estimate data, by giving it as the
first argument to the morph we computed above.
End of explanation
# Define plotting parameters
surfer_kwargs = dict(
hemi='lh', subjects_dir=subjects_dir,
clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
initial_time=0.09, time_unit='s', size=(800, 800),
smoothing_steps=5)
# As spherical surface
brain = stc_fsaverage.plot(surface='sphere', **surfer_kwargs)
# Add title
brain.add_text(0.1, 0.9, 'Morphed to fsaverage (spherical)', 'title',
font_size=16)
Explanation: Plot results
End of explanation
brain_inf = stc_fsaverage.plot(surface='inflated', **surfer_kwargs)
# Add title
brain_inf.add_text(0.1, 0.9, 'Morphed to fsaverage (inflated)', 'title',
font_size=16)
Explanation: As inflated surface
End of explanation
stc_fsaverage = mne.compute_source_morph(stc,
subjects_dir=subjects_dir).apply(stc)
Explanation: Reading and writing SourceMorph from and to disk
An instance of SourceMorph can be saved, by calling
:meth:morph.save <mne.SourceMorph.save>.
This method allows for specification of a filename under which the morph
will be save in ".h5" format. If no file extension is provided, "-morph.h5"
will be appended to the respective defined filename::
>>> morph.save('my-file-name')
Reading a saved source morph can be achieved by using
:func:mne.read_source_morph::
>>> morph = mne.read_source_morph('my-file-name-morph.h5')
Once the environment is set up correctly, no information such as
subject_from or subjects_dir must be provided, since it can be
inferred from the data and use morph to 'fsaverage' by default. SourceMorph
can further be used without creating an instance and assigning it to a
variable. Instead :func:mne.compute_source_morph and
:meth:mne.SourceMorph.apply can be
easily chained into a handy one-liner. Taking this together the shortest
possible way to morph data directly would be:
End of explanation |
564 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fractionating $2^k$ Factorial Designs
Motivation
The prior section showed an example of what an experimental design might look like for 6 variables. However, this resulted in a $2^6 = 64$ experiment design campaign. This is potentially a major issue - if my experiments take 6 hours, and have to be staggered over working hours on weekdays, you're looking at almost 90 days turnaround time, assuming each experiment is carried out flawlessly. This is simply not a realistic view of experimentation.
In addition, we saw that a five-coefficient model captured nearly as much detail as a 64-coefficient model. By reducing the number of input variables we looked at, we turned certain experiments into replicates (because the only thing changed between them were insignificant variables or variable combinations).
But we can halve or quarter our effort, and substantially improve our effectiveness in the lab, by carefully selecting experiments at each stage of the experiment to reveal a maximum amount of information, and avoiding as much as possible these kinds of duplicate experiments, through a fractional factorial design.
Step1: After re-casting the problem in a general form, we begin with the experimental design matrix. If we were to construct the full factorial for our $2^6$ factorial example, we would again have 64 rows in our experimental design matrix dataframe, corresponding to 64 experiments to run.
Step2: Design Matrix
Let's talk a bit more about the design matrix. Each column of the design matrix corresponds to a unique coded input variable value $(-1,+1)$. But each experiment also has a corresponding coded value for each two-variable interaction $x_i,x_j$, and for each three-variable interaction $x_k,x_m,x_n$, and so on.
These interactions are simply the product of each coded variable value. For example, if
$$
x_1 = -1 \
x_2 = +1 \
x_3 = +1
$$
then two-variable interaction effects can be computed as
Step3: The multi-variable columns can be used to fractionate our design.
Half Factorial
Suppose we pick a high-order interaction effect at random - e.g., $x_1 \times x_2 \times x_3 \times x_4$ - and assume it will be unimportant. Our assumption allows us to cut out any experiments that are intended to give us information about the effect of $x_1 x_2 x_3 x_4$.
For any two groups of experiments, if one group has
$$x_1 x_2 x_3 x_4 = +1$$
and the other group has
$$x_1 x_2 x_3 x_4 = -1$$
then based on our assumption that that interaction effect will be unimportant, one of those two groups can be thrown out.
Fortuitously, the first time a variable is eliminated, no matter which variable it is, the number of experiments is cut in half. Further eliminations of variables continue to cut the number of experiments in half. So a six-factor experimental design could be whittled down as follows
Step4: Costs and Benefits
The benefits are obvious - we've halved the number of experiments our experiment design requires. But at what cost?
The first 32 experiments, where $x_1 x_2 x_3 x_4 = +1$, give us information at a positive level of that input variable combination. To get information at a negative level of that input variable combination (i.e., $x_1 x_2 x_3 x_4 = -1$), we need 32 additional experiments.
Our assumption is that changing $x_1 x_2 x_3 x_4$ from high to low will have no effect on the observable $y$.
This also modifies the information we get about higher-order interaction effects. For example, we've assumed
Step5: Each of the dataframes above represents a different fractional factorial design.
$\frac{1}{4}$ Fractional Designs
To further reduce the number of experiments, two identities can be used. The number of experiments is cut in half for each identity. We already have one identity,
$$
I = x_1 x_2 x_3 x_4 = 1
$$
now let's define another one | Python Code:
import pandas as pd
import itertools
import numpy as np
import seaborn as sns
import pylab
import scipy.stats as stats
import statsmodels.api as sm
Explanation: Fractionating $2^k$ Factorial Designs
Motivation
The prior section showed an example of what an experimental design might look like for 6 variables. However, this resulted in a $2^6 = 64$ experiment design campaign. This is potentially a major issue - if my experiments take 6 hours, and have to be staggered over working hours on weekdays, you're looking at almost 90 days turnaround time, assuming each experiment is carried out flawlessly. This is simply not a realistic view of experimentation.
In addition, we saw that a five-coefficient model captured nearly as much detail as a 64-coefficient model. By reducing the number of input variables we looked at, we turned certain experiments into replicates (because the only thing changed between them were insignificant variables or variable combinations).
But we can halve or quarter our effort, and substantially improve our effectiveness in the lab, by carefully selecting experiments at each stage of the experiment to reveal a maximum amount of information, and avoiding as much as possible these kinds of duplicate experiments, through a fractional factorial design.
End of explanation
column_labs = ['x%d'%(i+1) for i in range(6)]
encoded_inputs = list( itertools.product([-1,1],[-1,1],[-1,1],[-1,1],[-1,1],[-1,1]) )
doe = pd.DataFrame(encoded_inputs,columns=column_labs)
print(len(doe))
Explanation: After re-casting the problem in a general form, we begin with the experimental design matrix. If we were to construct the full factorial for our $2^6$ factorial example, we would again have 64 rows in our experimental design matrix dataframe, corresponding to 64 experiments to run.
End of explanation
doe['x1-x2-x3-x4'] = doe.apply( lambda z : z['x1']*z['x2']*z['x3']*z['x4'] , axis=1)
doe['x4-x5-x6'] = doe.apply( lambda z : z['x4']*z['x5']*z['x6'] , axis=1)
doe['x2-x4-x5'] = doe.apply( lambda z : z['x2']*z['x4']*z['x5'] , axis=1)
doe[0:10]
Explanation: Design Matrix
Let's talk a bit more about the design matrix. Each column of the design matrix corresponds to a unique coded input variable value $(-1,+1)$. But each experiment also has a corresponding coded value for each two-variable interaction $x_i,x_j$, and for each three-variable interaction $x_k,x_m,x_n$, and so on.
These interactions are simply the product of each coded variable value. For example, if
$$
x_1 = -1 \
x_2 = +1 \
x_3 = +1
$$
then two-variable interaction effects can be computed as:
$$
x_{12} = -1 \times +1 = -1 \
x_{13} = -1 \times +1 = -1 \
x_{23} = +1 \times +1 = +1 \
$$
and three-variable interaction effects are:
$$
x_{123} = -1 \times -1 \times +1 = +1
$$
Now we can add new columns to our experimental design matrix dataframe, representing coded values for higher-order interaction effects:
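The same idea extends to any interaction automatically; for instance, a sketch that generates every two-factor interaction column with itertools.combinations:
for xi, xj in itertools.combinations(column_labs, 2):
    # coded two-factor interaction is just the elementwise product of the coded columns
    doe['%s-%s' % (xi, xj)] = doe[xi] * doe[xj]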
End of explanation
print(len( doe[doe['x1-x2-x3-x4']==1] ))
Explanation: The multi-variable columns can be used to fractionate our design.
Half Factorial
Suppose we pick a high-order interaction effect at random - e.g., $x_1 \times x_2 \times x_3 \times x_4$ - and assume it will be unimportant. Our assumption allows us to cut out any experiments that are intended to give us information about the effect of $x_1 x_2 x_3 x_4$.
For any two groups of experiments, if one group has
$$x_1 x_2 x_3 x_4 = +1$$
and the other group has
$$x_1 x_2 x_3 x_4 = -1$$
then based on our assumption that that interaction effect will be unimportant, one of those two groups can be thrown out.
Fortuitously, the first time a variable is eliminated, no matter which variable it is, the number of experiments is cut in half. Further eliminations of variables continue to cut the number of experiments in half. So a six-factor experimental design could be whittled down as follows:
Six-factor, two-level experiment design:
* $n=2$, $k=6$, $2^6$ experimental design
* Full factorial: $2^6 = 64$ experiments
* Half factorial: $2^{6-1} = 32$ experiments
* $\frac{1}{4}$ Fractional factorial: $2^{6-2} = 16$ experiments
* $\frac{1}{8}$ Fractional factorial: $2^{6-3} = 8$ experiments
* $\frac{1}{16}$ Fractional factorial: $2^{6-4} = 4$ experiments
In general, for an $n^k$ experiment design ($n$ factor, $k$ level), a $\dfrac{1}{2^p}$ fractional factorial can be defined as:
$\dfrac{1}{2^p}$ Fractional factorial: $2^{n-p}$ experiments
Note that as the fractional factorial gets narrower, and the experiments get fewer, the number of aliased interaction effects gets larger, until not even interaction effects can be distinguished, but only main effects. (Screening designs, such as Plackett-Burman designs, are based on this idea of highly-fractionated experiment design; we'll get into that later.)
For now, let's look at the half factorial: 32 experiments, with the reduction in variables coming from aliasing the interaction effect $x_1 x_2 x_3 x_4$:
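The aliasing can also be checked numerically (a sketch): inside the half fraction defined by $I = x_1 x_2 x_3 x_4 = +1$, the coded column for $x_1 x_2 x_3 x_4 x_5$ is identical to the column for $x_5$.
half = doe[doe['x1-x2-x3-x4'] == 1]
print((half['x1'] * half['x2'] * half['x3'] * half['x4'] * half['x5'] == half['x5']).all())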
End of explanation
# Defining multiple DOE matrices:
# DOE 1 based on identity I = x1 x2 x3 x4
doe1 = doe[doe['x1-x2-x3-x4']==1]
# DOE 2 based on identity I = x4 x5 x6
doe2 = doe[doe['x4-x5-x6']==-1]
# DOE 3 based on identity I = x2 x4 x5
doe3 = doe[doe['x2-x4-x5']==-1]
doe1[column_labs].T
doe2[column_labs].T
doe3[column_labs].T
Explanation: Costs and Benefits
The benefits are obvious - we've halved the number of experiments our experiment design requires. But at what cost?
The first 32 experiments, where $x_1 x_2 x_3 x_4 = +1$, give us information at a positive level of that input variable combination. To get information at a negative level of that input variable combination (i.e., $x_1 x_2 x_3 x_4 = -1$), we need 32 additional experiments.
Our assumption is that changing $x_1 x_2 x_3 x_4$ from high to low will have no effect on the observable $y$.
This also modifies the information we get about higher-order interaction effects. For example, we've assumed:
$$
x_1 x_2 x_3 x_4 = +1
$$
We can use this identity to figure out what information we're missing when we cut out the 32 experiments. Our assumption about the fourth-order interaction also changes fifth- and sixth-order interactions:
$$
(x_1 x_2 x_3 x_4) = (+1) \
(x_1 x_2 x_3 x_4) x_5 = (+1) x_5 \
x_1 x_2 x_3 x_4 x_5 = x_5
$$
meaning the fifth-order interaction effect $x_1 x_2 x_3 x_4 x_5$ has been aliased with the first-order main effect $x_5$. This is a safe trade-off, since a fifth-order interaction effect is extremely unlikely to be large enough to distort the estimate of a first-order main effect. We can derive other relations, using the fact that any factor squared is equivalent to $(+1)$, so that:
$$
(x_1 x_2 x_3 x_4) = +1 \
(x_1 x_2 x_3 x_4) x_1 = (+1) x_1 \
(x_1^2 x_2 x_3 x_4) = (+1) x_1 \
x_2 x_3 x_4 = x_1
$$
The sequence of variables selected as the interaction effect to be used as the experimental design basis is called the generator, and is denoted $I$:
$$
I = x_1 x_2 x_3 x_4
$$
and we set $I=+1$ or $I=-1$.
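As a quick check of this aliasing relation (a sketch reusing the doe dataframe defined above): on the half-fraction where $x_1 x_2 x_3 x_4 = +1$, the product $x_2 x_3 x_4$ reproduces $x_1$ exactly.
half = doe[doe['x1-x2-x3-x4'] == 1]
print((half['x2'] * half['x3'] * half['x4'] == half['x1']).all())   # expected: True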
End of explanation
quarter_fractional_doe = doe[ np.logical_and( doe['x1-x2-x3-x4']==1, doe['x4-x5-x6']==1 ) ]
print("Number of experiments: %d"%(len(quarter_fractional_doe[column_labs])))
quarter_fractional_doe[column_labs].T
Explanation: Each of the dataframes above represents a different fractional factorial design.
$\frac{1}{4}$ Fractional Designs
To further reduce the number of experiments, two identities can be used. The number of experiments is cut in half for each identity. We already have one identity,
$$
I = x_1 x_2 x_3 x_4 = 1
$$
now let's define another one:
$$
I_2 = x_4 x_5 x_6 = 1
$$
Our resulting factorial matrix can be reduced the same way. In Python, we use the logical_and function to ensure our two conditions are satisfied.
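Note that the two generators also imply a third aliased word: multiplying $I = x_1 x_2 x_3 x_4$ by $I_2 = x_4 x_5 x_6$ gives $x_1 x_2 x_3 x_5 x_6$ (the $x_4^2$ term drops out), so that word is also $+1$ on every run of the quarter fraction. A quick check (a sketch reusing the doe dataframe):
quarter = doe[(doe['x1-x2-x3-x4'] == 1) & (doe['x4-x5-x6'] == 1)]
implied = quarter['x1'] * quarter['x2'] * quarter['x3'] * quarter['x5'] * quarter['x6']
print(len(quarter), (implied == 1).all())   # expected: 16 True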
End of explanation |
565 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Chicago taxi fare training experience
This experiment uses a Scikit-learn Random Forest to train an ML model on the Chicago taxi dataset to estimate the taxi trip fare for a given time and start/end locations. The selected approach and feature engineering are based on https
Step2: Query dataset
Step3: Column info
Watch the number of null values in the 'Non-Null Count' column
Step4: Raw descriptive statistics
Step5: Feature engineering
Step6: Remaining null values per column after feature engineering
Step7: Data profiling
(executing the next cell takes a long time)
Step8: Visual dropoff locations
Step9: Location histograms
Step10: Time based explorations
Trip start distribution
Step11: Trip loginess
Step12: Fare by trip start hour
Step13: Split dataframe to examples and output
Step14: Training pipeline
Step15: Option 1
Step16: Option 2
Step17: Prediction test
Step18: Cross validation score to test set | Python Code:
import numpy as np
import pandas as pd
from pandas_profiling import ProfileReport
from scipy import stats
from sklearn.ensemble import RandomForestRegressor
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
# MLflow
import mlflow
import mlflow.sklearn
# plotting libraries:
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
# Google clients
import google.auth
from google.cloud import bigquery
from google.cloud import bigquery_storage
# Set default appearance
# - overide maplot libs ugly colours.
# - default figure size
sns.set(color_codes=True)
mpl.rcParams['figure.figsize'] = [13, 8]
%matplotlib inline
BQ_DATASET = 'chicago_taxi_trips'
BQ_TABLE = 'taxi_trips'
BQ_QUERY = """
with tmp_table as (
SELECT trip_seconds, trip_miles, fare, tolls,
company, pickup_latitude, pickup_longitude, dropoff_latitude, dropoff_longitude,
DATETIME(trip_start_timestamp, 'America/Chicago') trip_start_timestamp,
DATETIME(trip_end_timestamp, 'America/Chicago') trip_end_timestamp,
CASE WHEN (pickup_community_area IN (56, 64, 76)) OR (dropoff_community_area IN (56, 64, 76)) THEN 1 else 0 END is_airport,
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE
dropoff_latitude IS NOT NULL and
dropoff_longitude IS NOT NULL and
pickup_latitude IS NOT NULL and
pickup_longitude IS NOT NULL and
fare > 0 and
trip_miles > 0 and
MOD(ABS(FARM_FINGERPRINT(unique_key)), 100) {}
ORDER BY RAND()
LIMIT 20000)
SELECT *,
EXTRACT(YEAR FROM trip_start_timestamp) trip_start_year,
EXTRACT(MONTH FROM trip_start_timestamp) trip_start_month,
EXTRACT(DAY FROM trip_start_timestamp) trip_start_day,
EXTRACT(HOUR FROM trip_start_timestamp) trip_start_hour,
FORMAT_DATE('%a', DATE(trip_start_timestamp)) trip_start_day_of_week
FROM tmp_table
"""
# Create BigQuery client
credentials, your_project_id = google.auth.default(
scopes=['https://www.googleapis.com/auth/cloud-platform']
)
bqclient = bigquery.Client(credentials=credentials, project=your_project_id,)
bqstorageclient = bigquery_storage.BigQueryReadClient(credentials=credentials)
Explanation: Chicago taxi fare training experience
This experiment uses a Scikit-learn Random Forest to train an ML model on the Chicago taxi dataset to estimate the taxi trip fare for a given time and start/end locations. The selected approach and feature engineering are based on https://github.com/v-loves-avocados/chicago-taxi, the data exploration and analysis by Aradhana Chaturvedi.
End of explanation
df = (
bqclient.query(BQ_QUERY.format('between 0 and 99'))
.result()
.to_dataframe(bqstorage_client=bqstorageclient)
)
Explanation: Query dataset
End of explanation
display(df.info())
Explanation: Column info
Watch the number of null values in the 'Non-Null Count' column
End of explanation
display(df.describe())
Explanation: Raw descriptive statistics
End of explanation
def feature_engineering(data):
# Add 'N/A' for missing 'Company'
data.fillna(value={'company':'N/A','tolls':0}, inplace=True)
# Drop rows contains null data.
data.dropna(how='any', axis='rows', inplace=True)
# Pickup and dropoff locations distance
data['abs_distance'] = (np.hypot(data['dropoff_latitude']-data['pickup_latitude'], data['dropoff_longitude']-data['pickup_longitude']))*100
# Remove extremes, outliers
possible_outliers_cols = ['trip_seconds', 'trip_miles', 'fare', 'abs_distance']
data=data[(np.abs(stats.zscore(data[possible_outliers_cols])) < 3).all(axis=1)].copy()
# Reduce location accuracy
data=data.round({'pickup_latitude': 3, 'pickup_longitude': 3, 'dropoff_latitude':3, 'dropoff_longitude':3})
return data
df=feature_engineering(df)
display(df.describe())
Explanation: Feature engineering
End of explanation
print(df.isnull().sum().sort_values(ascending=False))
Explanation: Remaining null values per column after feature engineering
End of explanation
ProfileReport(df, title='Chicago taxi dataset profiling Report').to_notebook_iframe()
Explanation: Data profiling
(executing the next cell takes a long time)
End of explanation
sc = plt.scatter(df.dropoff_longitude, df.dropoff_latitude, c = df['fare'], cmap = 'summer')
plt.colorbar(sc)
Explanation: Visual dropoff locations
End of explanation
fig, axs = plt.subplots(2)
fig.suptitle('Pickup location histograms')
df.hist('pickup_longitude', bins=100, ax=axs[0])
df.hist('pickup_latitude', bins=100, ax=axs[1])
plt.show()
fig, axs = plt.subplots(2)
fig.suptitle('Dropoff location histograms')
df.hist('dropoff_longitude', bins=100, ax=axs[0])
df.hist('dropoff_latitude', bins=100, ax=axs[1])
plt.show()
Explanation: Location histograms
End of explanation
fig, axs = plt.subplots(4)
fig.suptitle('Trip start histograms')
fig.set_size_inches(18, 12, forward=True)
df.hist('trip_start_year', bins=8, ax=axs[0], )
df.hist('trip_start_month', bins=12, ax=axs[1])
df.hist('trip_start_day', bins=31, ax=axs[2])
df.hist('trip_start_hour', bins=24, ax=axs[3])
plt.show()
Explanation: Time based explorations
Trip start distribution
End of explanation
fig, axs = plt.subplots(2)
fig.set_size_inches(18, 8, forward=True)
df.hist('trip_miles', bins=50, ax=axs[0])
df.hist('trip_seconds', bins=50, ax=axs[1])
plt.show()
Explanation: Trip loginess
End of explanation
display(df.groupby('trip_start_hour')['fare'].mean().plot())
Explanation: Fare by trip start hour
End of explanation
# Drop complex fields and split dataframe to examples and output
mlflow.log_param('training_shape', f'{df.shape}')
X=df.drop(['trip_start_timestamp'],axis=1)
y=df['fare']
Explanation: Split dataframe to examples and output
End of explanation
# global variables
experiment_name = 'chicago-taxi-1'
ct_pipe = ColumnTransformer(transformers=[
('hourly_cat', OneHotEncoder(categories=[range(0,24)], sparse = False), ['trip_start_hour']),
('dow', OneHotEncoder(categories=[['Mon', 'Tue', 'Sun', 'Wed', 'Sat', 'Fri', 'Thu']], sparse = False), ['trip_start_day_of_week']),
('std_scaler', StandardScaler(), [
'trip_start_year',
'abs_distance',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'trip_miles',
'trip_seconds'])
])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=123)
X_train=X_train.drop('fare', axis=1)
# for more details: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
rfr_pipe = Pipeline([
('ct', ct_pipe),
('forest_reg', RandomForestRegressor(
n_estimators = 20,
max_features = 'auto',
n_jobs = -1,
random_state = 3,
max_depth=None,
max_leaf_nodes=None,
))
])
rfr_score = cross_val_score(rfr_pipe, X_train, y_train, scoring = 'neg_mean_squared_error', cv = 5)
rfr_rmse = np.sqrt(-rfr_score)
rfr_rmse.mean()
mlflow.log_metric('train_cross_valid_score_rmse_mean', np.sqrt(-rfr_score).mean())
mlflow.log_param('number_of_estimators', 20)
Explanation: Training pipeline
End of explanation
# To see all RandomForestRegressor hyper parameters:
# estimator=RandomForestRegressor()
# display(estimator.get_params())
# Train model
mlflow.set_experiment('chicago-taxi-0')
# mlflow.sklearn.autolog()
with mlflow.start_run(nested=True) as mlflow_run:
final_model=rfr_pipe.fit(X_train, y_train)
mlflow.sklearn.log_model(final_model, 'chicago_rnd_forest')
Explanation: Option 1: Simple training
(~fast)
End of explanation
param_grid = {'forest_reg__n_estimators': [5, 250], 'forest_reg__max_features': [6, 16, 'auto']}
forest_gs = GridSearchCV(rfr_pipe, param_grid, cv = 5, scoring = 'neg_mean_squared_error', n_jobs = -1)
forest_gs.fit(X_train, y_train)
print(f'Best parameters: {forest_gs.best_params_}')
print(f'Best score: {np.sqrt(-forest_gs.best_score_)}')
print(f'(All scores: {np.sqrt(-forest_gs.cv_results_["mean_test_score"])})')
final_model=forest_gs.best_estimator_
Explanation: Option 2: Parameter search + training
(time consuming)
End of explanation
X_pred = pd.DataFrame(X_test, columns=X_test.columns)
X_pred['fare_pred'] = final_model.predict(X_test.drop('fare',axis=1))
X_pred.head(5)
Explanation: Prediction test
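As a quick numeric summary of these hold-out predictions (a sketch, not part of the original notebook), the error can be condensed into a single RMSE:
holdout_rmse = np.sqrt(((X_pred['fare'] - X_pred['fare_pred']) ** 2).mean())
print(f'hold-out RMSE: {holdout_rmse:.2f}')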
End of explanation
rfr_score = cross_val_score(final_model, X_test, y_test, scoring='neg_mean_squared_error', cv = 5)
rfr_rmse = np.sqrt(-rfr_score)
rfr_rmse.mean()
mlflow.log_metric('eval_cross_valid_score_rmse_mean', np.sqrt(-rfr_score).mean())
# Comparer test
def model_comparer(job_name, **kwargs):
print(f'Model blessing: "{job_name}"')
experiment = mlflow.get_experiment_by_name(experiment_name)
filter_string = f"tags.job_name ILIKE '{job_name}_%'"
df = mlflow.search_runs([experiment.experiment_id], filter_string=filter_string)
display(df)
# Compare
# Available columns:
# run_id experiment_id status artifact_uri start_time end_time metrics.train_cross_valid_score_rmse_mean params.number_of_estimators tags.job_name tags.mlflow.source.name tags.mlflow.user tags.mlflow.source.type tags.version
    # lower RMSE is better, so pick the runs with the minimum metric
    eval_best = df.loc[df['metrics.eval_cross_valid_score_rmse_mean'].idxmin()]
    train_best = df.loc[df['metrics.train_cross_valid_score_rmse_mean'].idxmin()]
    display(eval_best)
    return eval_best
# You need to set a previous training job name manually. Which is following this naming pattern: training_job_...time stamp...
best_run = model_comparer('training_job_20210119T220534')
client = mlflow.tracking.MlflowClient()
def register_model(run_id, model_name):
model_uri = f'runs:/{run_id}/{model_name}'
registered_model = mlflow.register_model(model_uri, model_name)
print(registered_model)
registered_models=client.search_registered_models(filter_string=f"name='{experiment_name}'", max_results=1, order_by=['timestamp DESC'])
if len(registered_models) ==0:
register_model(best_run.run_id, experiment_name)
else:
last_version = registered_models[0].latest_versions[0]
run = client.get_run(last_version.run_id)
print(run)
if not run:
print(f'Registered version run missing!')
last_eval_metric=run.data.metrics['eval_cross_valid_score_rmse_mean']
best_run_metric=best_run['metrics.eval_cross_valid_score_rmse_mean']
    if last_eval_metric > best_run_metric:  # candidate run has a lower (better) RMSE
print(f'Register better version with metric: {best_run_metric}')
register_model(best_run.run_id, experiment_name)
else:
print(f'Registered version still better. Metric: {last_eval_metric}')
Explanation: Cross validation score to test set
End of explanation |
566 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We start off with a single import statement. Nice! Note the TensorFlow backend...
Step1: Great, now I have a model.
Let's do something with it, like build a 34-layer residual network. | Python Code:
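# The "single import statement" mentioned in the text is not included in this
# extract; the module path is project-specific, so the line below is only a
# hypothetical placeholder for it.
# from models.keras_graph_model import KerasGraphModel  # hypothetical path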
model = KerasGraphModel()
Explanation: We start off with a single import statement. Nice! Note the TensorFlow backend...
End of explanation
model.build_residual_network()
model.graph.summary()
from data_preparation.image_preparation import ImageLoader
from pathlib import Path
image_loader = ImageLoader()
im_files = list(Path('data/train_photos').glob('*[0-9].jpg'))
train_im_func = image_loader.graph_train_generator(im_files)
# Note: no validation data yet
# You could take a look at the dictionary if you want
# test_train_dict = next(train_im_func)
# {'input': train_tensor, 'output': target_tensor}
# Fit on 5 mini-batches of 200 samples for 3 epochs
model.graph.fit_generator(train_im_func, 200*5, 3)
# TODO: model.generate_submission()
Explanation: Great, now I have a model.
Let's do something with it, like build a 34-layer residual network.
End of explanation |
567 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experimental data assessment and model parameters optimisation
Data preparation
The first step to generate three-dimensional (3D) models of a specific genomic region is to filter columns with low counts and with no diagonal count in order to remove outliers or problematic columns from the interaction matrix. The particles associated with the filtered columns will be modelled, but will have no experimental data applied.
Here we load the previous data, already normalised.
Step1: Load raw data matrices, and normalized matrices
Step2: It is a good practice to check that the data is there
Step3: Focus on the genomic region to model.
Step4: Data modellability assessment via MMP score
We can use the Matrix Modeling Potential (MMP) score (Trussart M. et al. Nature Communication, 2017) to identify a priori whether the interaction matrices have the potential of being used for modeling. The MMP score ranges from 0 to 1 and combines three different measures
Step5: Data Transformation and scoring function
This step is automatically done in TADbit.
A weight is generated for each pair of interactions proportional to their interaction count as in formula
Step6: Refine optimization in a small region
Step7: For the other replicate, we can reduce the space of search | Python Code:
from pytadbit import load_chromosome
from pytadbit.parsers.hic_parser import load_hic_data_from_bam
crm = load_chromosome('results/fragment/chr3.tdb')
B, PSC = crm.experiments
B, PSC
Explanation: Experimental data assessment and model parameters optimisation
Data preparation
The first step to generate three-dimensional (3D) models of a specific genomic region is to filter columns with low counts and with no diagonal count in order to remove outliers or problematic columns from the interaction matrix. The particles associated with the filtered columns will be modelled, but will have no experimental data applied.
Here we load the previous data, already normalised.
End of explanation
base_path = 'results/fragment/{0}_both/03_filtering/valid_reads12_{0}.bam'
bias_path = 'results/fragment/{0}_both/04_normalizing/biases_{0}_both_{1}kb.biases'
reso = 100000
chrname = 'chr3'
cel1 = 'mouse_B'
cel2 = 'mouse_PSC'
hic_data1 = load_hic_data_from_bam(base_path.format(cel1),
resolution=reso,
region='chr3',
biases=bias_path.format(cel1, reso // 1000),
ncpus=8)
hic_data2 = load_hic_data_from_bam(base_path.format(cel2),
resolution=reso,
region='chr3',
biases=bias_path.format(cel2, reso // 1000),
ncpus=8)
B.load_hic_data([hic_data1.get_matrix(focus='chr3')])
B.load_norm_data([hic_data1.get_matrix(focus='chr3', normalized=True)])
PSC.load_hic_data([hic_data2.get_matrix(focus='chr3')])
PSC.load_norm_data([hic_data2.get_matrix(focus='chr3', normalized=True)])
Explanation: Load raw data matrices, and normalized matrices
End of explanation
crm.visualize(['mouse_B', 'mouse_PSC'], normalized=True, paint_tads=True)
Explanation: It is a good practice to check that the data is there:
End of explanation
crm.visualize(['mouse_B', 'mouse_PSC'], normalized=True, paint_tads=True, focus=(300, 360))
Explanation: Focus on the genomic region to model.
End of explanation
from pytadbit.utils.three_dim_stats import mmp_score
mmp_score(hic_data1.get_matrix(focus='chr3:30000000-36000000'),
savefig='results/fragment/{0}_both/mmp_score.png'.format(cel1))
Explanation: Data modellability assessment via MMP score
We can use the Matrix Modeling Potential (MMP) score (Trussart M. et al. Nature Communication, 2017) to identify a priori whether the interaction matrices have the potential of being used for modeling. The MMP score ranges from 0 to 1 and combines three different measures: the contribution of the significant eigenvectors, the skewness and the kurtosis of the distribution of Z-scores.
End of explanation
opt_B = B.optimal_imp_parameters(start=300, end=360, n_models=40, n_keep=20, n_cpus=8,
upfreq_range=(0, 0.6, 0.3),
lowfreq_range=(-0.9, 0, 0.3),
maxdist_range=(1000, 2000, 500),
dcutoff_range=[2, 3, 4])
opt_B.plot_2d(show_best=10)
Explanation: Data Transformation and scoring function
This step is automatically done in TADbit.
A weight is generated for each pair of interactions proportional to their interaction count as in formula:
$$weight(I, J) = \frac{\sum^N_{i=0}{\sum^N_{j=0}{(matrix(i, j))}}}{\sum^N_{i=0}{(matrix(i, J))} \times \sum^N_{j=0}{(matrix(I, j))}}$$
The raw data are then multiplied by this weight. In the case that multiple experiments are used, the weighted interaction values are normalised using a factor (default set as 1) in order to compare between experiments.
Then, a Z-score of the off-diagonal normalised/weighted interaction is calculated as in formula:
$$zscore(I, J) = \frac{log_{10}(weight(I, J) \times matrix(I, J)) - mean(log_{10}(weight \times matrix))}{stddev(log_{10}(weight \times matrix))}$$
The Z-scores are then transformed to distance restraints. To define the type of restraints between each pair of particles, we need to identify empirically three optimal parameters: (i) a maximal distance between two non-interacting particles (maxdist), (ii) a lower-bound cutoff to define particles that do not interact frequently (lowfreq) and (iii) an upper-bound cutoff to define particles that do interact frequently (upfreq). In TADbit this is done via a grid search approach.
The following picture shows the different components of the scoring function that is optimised during the Monte Carlo simulated annealing sampling protocol.
Two consecutive particles are spatially restrained by a harmonic oscillator with an equilibrium distance that corresponds to the sum of their radii. Non-consecutive particles with contact frequencies above the upper-bound cutoff are restrained by a harmonic oscillator at an equilibrium distance, while those below the lower-bound cutoff are maintained further than an equilibrium distance by a lower bound harmonic oscillator.
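To make the two formulas above concrete, here is a small numpy sketch of the same weight/Z-score transformation (purely illustrative: TADbit applies this internally, and the function name here is ours):
import numpy as np
def interaction_zscores(m):
    m = np.asarray(m, dtype=float)
    # weight(I, J) = total counts / (row sum at I * column sum at J)
    w = m.sum() / np.outer(m.sum(axis=1), m.sum(axis=0))
    with np.errstate(divide='ignore'):
        logwm = np.log10(w * m)                # log10(weight * matrix)
    mask = ~np.eye(len(m), dtype=bool) & np.isfinite(logwm)   # off-diagonal, non-zero cells
    z = np.full_like(logwm, np.nan)
    z[mask] = (logwm[mask] - logwm[mask].mean()) / logwm[mask].std()
    return z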
Optimization of parameters
We need to identify empirically (via a grid-search optimisation) the optimal parameters for the modelling procedure:
maxdist: maximal distance associated with two interacting particles.
upfreq: to define particles that do interact frequently (defines attraction)
lowfreq: to define particles that do not interact frequently ( defines repulsion)
dcutoff: the definition of "contact" in units of bead diameter. Value of 2 means that a contact will occur when 2
beads are closer than 2 times their diameter. This will be used to compare 3D models with Hi-C interaction maps.
Pairs of beads interacting less than lowfreq (left dashed line) are penalized if they are closer than their assigned minimum distance (Harmonic lower bound).
Pairs of beads interacting more than ufreq (right dashed line) are penalized if they are further apart than their assigned maximum distance (Harmonic upper bound).
Pairs of beads which interaction fall in between lowfreq and upfreq are not penalized except if they are neighbours (Harmonic)
In the parameter optimization step we are going to give a set of ranges for the different search parameters. For each possible combination TADbit will produce a set of models.
In each individual model we consider that two beads are in contact if their distance in 3D space is lower than the specified distance cutoff. TADbit builds a cumulative contact map for each set of models as shown in the schema below. The contact map is then compared with the Hi-C interaction experiment by means of a Spearman correlation coefficient. The sets having higher correlation coefficients are those that best represent the original data.
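The model-to-data comparison described here can be sketched in a few lines (again illustrative; TADbit computes this for you, and the function name is ours):
import numpy as np
from scipy.stats import spearmanr
def contact_map_correlation(model_contact_map, hic_matrix):
    a = np.asarray(model_contact_map, dtype=float).ravel()
    b = np.asarray(hic_matrix, dtype=float).ravel()
    keep = np.isfinite(a) & np.isfinite(b)    # ignore missing/filtered bins
    return spearmanr(a[keep], b[keep]).correlation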
End of explanation
opt_B.run_grid_search(upfreq_range=(0, 0.3, 0.3), lowfreq_range=(-0.9, -0.3, 0.3),
maxdist_range=[1750],
dcutoff_range=[2, 3],
n_cpus=8)
opt_B.plot_2d(show_best=5)
opt_B.run_grid_search(upfreq_range=(0, 0.3, 0.3), lowfreq_range=(-0.3, 0, 0.1),
maxdist_range=[2000, 2250],
dcutoff_range=[2],
n_cpus=8)
opt_B.plot_2d(show_best=5)
opt_B.run_grid_search(upfreq_range=(0, 0.3, 0.1), lowfreq_range=(-0.3, 0, 0.1),
n_cpus=8,
maxdist_range=[2000, 2250],
dcutoff_range=[2])
opt_B.plot_2d(show_best=5)
opt_B.get_best_parameters_dict()
Explanation: Refine optimization in a small region:
End of explanation
opt_PSC = PSC.optimal_imp_parameters(start=300, end=360, n_models=40, n_keep=20, n_cpus=8,
upfreq_range=(0, 0.3, 0.1),
lowfreq_range=(-0.3, -0.1, 0.1),
maxdist_range=(2000, 2250, 250),
dcutoff_range=[2])
opt_PSC.plot_2d(show_best=5)
opt_PSC.get_best_parameters_dict()
Explanation: For the other replicate, we can reduce the space of search:
End of explanation |
568 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Encoding Categorical Data
Step1: Simple data frame with categorical data
Represent each category as an integer. Trouble is, the meaning of each integer is specific to each feature, so the 1 in 'feature 1' does not mean the same thing as the one in 'feature 2'.
Step2: Also create a dataframe of strings to make this a little more intuitive
Step3: Label Encoder
Step4: Transform categories into integers in one go
Not very convenient because we need to unravel the dataframe values. Does not generalise to cases where the data contains non-categorical values.
Step5: Transform categories into integers using LabelEncoder
Step6: Note that the Label Encoder is not appropriate for regressions and similar techniques that compute the distance between samples. For example, the distance between 'red' and 'blue' is 3 in our case, whereas the distance between 'purple' and 'red' is 1. This would have an 'unphysical' effect on regression models. To avoid this, use the One Hot Encoder. The drawback of the one hot encoder is that it increases the number of features.
Some algorithms, such as decision trees (e.g. random forests), do not use the pairwise distance so can be used in combination with Label Encoder.
See http
Step7: The categories in each column are mapped using the feature_indices_ attribute
The categories in column i are mapped to range(feature_indices_[i], feature_indices_[i+1])
Step8: Each categorical feature is mapped to multiple boolean columns
Step9: So our feature 1 will be transformed into two columns of booleans, (0 or 1), our feature 2 into 3 columns, and our feature 3 into 4 columns. The new columns are listed in the active_features_ attribute of our encoder
Step10: Transforming samples
This shows how a single sample in our original dataset is transformed into a new sample by our OneHot encoder.
Step11: Transforming multiple samples | Python Code:
import pandas as pd
import numpy as np
Explanation: Encoding Categorical Data
End of explanation
data = pd.DataFrame(data=[[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]], columns=['feature 1', 'feature 2', 'feature 3'])
data
Explanation: Simple data frame with categorical data
Represent each category as an integer. Trouble is, the meaning of each integer is specific to each feature, so the 1 in 'feature 1' does not mean the same thing as the one in 'feature 2'.
End of explanation
gender = 'male', 'female'
country = 'France', 'UK', 'Germany'
color = 'blue', 'red', 'green', 'purple'
df = data.copy()
for i, category in enumerate([gender, country, color]):
df.iloc[:,i] = data.iloc[:,i].apply(lambda j: category[j])
df.columns = ['gender', 'country', 'color']
df
Explanation: Also create a dataframe of strings to make this a little more intuitive
End of explanation
from sklearn.preprocessing import LabelEncoder
Explanation: Label Encoder
End of explanation
le = LabelEncoder()
le.fit(gender + country + color)
print(le.classes_)
values_t = le.transform(df.values.ravel()).reshape(df.shape)
values_t
df_t = pd.DataFrame(data=values_t, columns=[c + '(int)' for c in df.columns])
df_t
Explanation: Transform categories into integers in one go
Not very convenient because we need to unravel the dataframe values. Does not generalise to cases where the data contains non-categorical values.
End of explanation
labenc_lst = []
df_t2 = df.copy()
for category in df.columns:
le2 = LabelEncoder()
df_t2[category] = le2.fit_transform(df[category])
labenc_lst.append(le2)
df_t2
Explanation: Transform categories into integers using LabelEncoder
End of explanation
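# Note: the cell that creates and fits the one-hot encoder appears to be missing
# from this extract. A minimal sketch using the same pre-0.22 scikit-learn API
# that the following cells rely on (n_values_, feature_indices_, active_features_):
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(sparse=False)
enc.fit(data)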
enc.n_values_
Explanation: Note that the Label Encoder is not appropriate for regressions and similar techniques that compute the distance between samples. For example, the distance between 'red' and 'blue' is 3 in our case, whereas the distance between 'purple' and 'red' is 1. This would have an 'unphysical' effect on regression models. To avoid this, use the One Hot Encoder. The drawback of the one hot encoder is that it increases the number of features.
Some algorithms, such as decision trees (e.g. random forests), do not use the pairwise distance so can be used in combination with Label Encoder.
See http://stackoverflow.com/questions/17469835/one-hot-encoding-for-machine-learning for more discussion.
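As a quick numeric check of the distances quoted above (a sketch reusing the per-column encoders fitted earlier, where labenc_lst[2] is the 'color' encoder):
red, blue, purple = labenc_lst[2].transform(['red', 'blue', 'purple'])
print(abs(red - blue), abs(red - purple))   # 3 and 1 - gaps with no physical meaning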
One Hot Encoder
Transforms a feature with N integer categories into N boolean category features (does this sample belong to this category or not?).
We can get the number of categories in each column
Thus, we see that we've got 9 different categories, so that our 4x3 dataset is actually a 4x9 dataset, where each feature is represented as a boolean (0 or 1).
End of explanation
enc.feature_indices_
Explanation: The categories in each column are mapped using the feature_indices_ attribute
The categories in column i are mapped to range(feature_indices_[i], feature_indices_[i+1])
End of explanation
mapping = {data.columns[i]: list(range(enc.feature_indices_[i], enc.feature_indices_[i+1]))
for i in range(data.shape[1])}
mapping
Explanation: Each categorical feature is mapped to multiple boolean columns
End of explanation
enc.active_features_
Explanation: So our feature 1 will be transformed into two columns of booleans, (0 or 1), our feature 2 into 3 columns, and our feature 3 into 4 columns. The new columns are listed in the active_features_ attribute of our encoder
End of explanation
def make_dataframe(sample, columns, **kwargs):
return pd.DataFrame(data=sample, columns=columns, **kwargs)
original_features = 'feature 1', 'feature 2', 'feature 3'
new_features = ['category ' + str(i) for i in enc.active_features_]
x1 = make_dataframe([[0, 0, 0]], original_features)
x1
x1_t = enc.transform(x1)
make_dataframe(x1_t, new_features)
make_dataframe(x1_t, new_features, dtype='bool')
x2 = make_dataframe([[1,1,1]], original_features)
x2
x2_t = make_dataframe(enc.transform(x2), new_features)
x2_t
Explanation: Transforming samples
This shows how a single sample in our original dataset is transformed into a new sample by our OneHot encoder.
End of explanation
data_t = make_dataframe(enc.transform(data), new_features, dtype=bool)
import matplotlib.pyplot as plt
plt.spy(data_t)
Explanation: Transforming multiple samples
End of explanation |
569 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding the Root (Zero) of a Function
Finding the root, or zero, of a function is a very common task in exploratory computing. This Notebook presents the Bisection method and Newton's method for finding the root, or 0, of a function.
Step1: Bisection method
Given a continuous function $f(x)$ and two values of $x_1$, $x_2$ such that $f(x_1)$ and $f(x_2)$ have opposite signs the Bisection method is a root-finding method that repeatedly bisects the interval $[x_1, x_2]$ and then selects a subinterval (in which a root must be) for further processing. (Since $f(x_1)$ and $f(x_2)$ have opposite signs, it follows that $f(x)$ is zero somewhere between $x_1$ and $x_2$.) The Bisection method iterate towards the zero of the function by cutting the root search interval in half at every iteration. The method calculates the middle point $x_m$ between $x_1$ and $x_2$ and compute $f(x_m)$ and then replaces either $x_1$ or $x_2$ by $x_m$ such the values of $f$ at the end points of the interval are of opposite signs. The process is repeated until the interval is small enough that its middle point can be considered a good approximation of the root of the function. In summary, the algorithm works as follows
Step2: Implementation of the Bisection method
We implement the bisection method as a function called bisection which takes as arguments
Step3: We use the bisection method to find the root of the $exponential_function$ defined above
Step4: and of $cos$ between 0 and 3.
Step5: Newton's method
The Bisection method is a brute-force method guaranteed to find a root of a continuous function $f$ on an interval $(x_1,x_2)$, if $(x_1,x_2)$ contains a root for $f$. The Bisection method is not very efficient and it requires a search interval that contains only one root.
An alternative is Newton's method (also called the Newton-Raphson method). Consider the graph below. To find the root of the function represented by the blue line, Newton's method starts at a user-defined starting location, $x_0$ (the blue dot) and fits a straight line through the point $(x,y)=(x_0,f(x_0))$ in such a way that the line is tangent to $f(x)$ at $x_0$ (the red line). The intersection of the red line with the horizontal axis is the next estimate $x_1$ of the root of the function (the red dot). This process is repeated until a value of $f(x)$ is found that is sufficiently close to zero (within a specified tolerance), i.e., a straight line is fitted through the point $(x,y)=(x_1,f(x_1))$, tangent to the function, and the the next estimate of the root of the function is taken as the intersection of this line with the horizontal axis, until the value of f at the root estimate is very close to 0.
Unfortunately, it is not guaranteed to always work, as is explained below.
<img src="http
Step6: We test newtonsmethod by finding the root of $f(x)=\frac{1}{2}-\text{e}^{-x}$ using $x_0=1$ as the starting point of the search. How many iterations do we need if we start at $x=4$?
Step7: We also demonstrate how newton works by finding the zero of $\sin(x)$, which has many roots
Step8: Root finding methods in scipy
The package scipy.optimize includes a number of routines for the minimization of a function and for finding the zeros of a function. Among them, bisect, newton, and fsolve. fsolve has the additional advantage of also estimating the derivative of the function. fsolve can be used to find an (approximate) answer for a system of non-linear equations.
fsolve
We demonstrate how to use the fsolve method of the scipy.optimize package by finding the value for which $\ln(x^2)=2$
Step9: Plotting the root
We plot the function $f(x)=x+2\cos(x)$ for $x$ going from -2 to 4, and on the same graph, we also plot a red dot at the location where $f(x)=0$. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Finding the Root (Zero) of a Function
Finding the root, or zero, of a function is a very common task in exploratory computing. This Notebook presents the Bisection method and Newton's method for finding the root, or 0, of a function.
End of explanation
def exponential_function(x):
return 0.5 - np.exp(-x)
x = np.linspace(0, 4, 100)
y = exponential_function(x)
plt.plot(x, y)
plt.axhline(0, color='r', ls='--')
Explanation: Bisection method
Given a continuous function $f(x)$ and two values of $x_1$, $x_2$ such that $f(x_1)$ and $f(x_2)$ have opposite signs the Bisection method is a root-finding method that repeatedly bisects the interval $[x_1, x_2]$ and then selects a subinterval (in which a root must be) for further processing. (Since $f(x_1)$ and $f(x_2)$ have opposite signs, it follows that $f(x)$ is zero somewhere between $x_1$ and $x_2$.) The Bisection method iterate towards the zero of the function by cutting the root search interval in half at every iteration. The method calculates the middle point $x_m$ between $x_1$ and $x_2$ and compute $f(x_m)$ and then replaces either $x_1$ or $x_2$ by $x_m$ such the values of $f$ at the end points of the interval are of opposite signs. The process is repeated until the interval is small enough that its middle point can be considered a good approximation of the root of the function. In summary, the algorithm works as follows:
Compute $f(x_1)$ and $f(x_2)$
Compute $x_m = \frac{1}{2}(x_1 + x_2)$.
Compute $f(x_m)$.
If $f(x_m)f(x_2) < 0$, replace $x_1$ by $x_m$, otherwise, replace $x_2$ by $x_m$.
If $|x_1 - x_2|<\varepsilon$, where $\varepsilon$ is a user-specified tolerance, return $\frac{1}{2}(x_1 + x_2)$, otherwise return to step 2.
Example: let $f(x)$ be $\frac{1}{2}-\text{e}^{-x}$ and $x_1$ and $x_2$ be 0 and 4, respectively. Notice that $f(x)$ has a zero somewhere on the plotted interval.
End of explanation
def bisection(func, x1, x2, tol=1e-3, nmax=10, silent=True):
f1 = func(x1)
f2 = func(x2)
assert f1 * f2< 0, 'Error: zero not in interval x1-x2'
for i in range(nmax):
xm = 0.5*(x1 + x2)
fm = func(xm)
if fm * f2 < 0:
x1 = xm
f1 = fm
else:
x2 = xm
f2 = fm
if silent is False: print(x1, x2, f1, f2)
if abs(x1 - x2) < tol:
break
if abs(func(x1)) > tol:
print('Maximum number of iterations reached')
return x1
Explanation: Implementation of the Bisection method
We implement the bisection method as a function called bisection which takes as arguments:
The function for which we want to find the root.
$x_1$ and $x_2$
The tolerance tol to be used as a stopping criterion (by default 0.001).
The maximum number of iterations nmax. Make nmax a keyword argument with a default value of, for example, 10.
Our function returns the value of $x$ where $f(x)$ is (approximately) zero, or print a warning if the maximum number of iterations is reached before the tolerance is met.
Steps 2-5 of the algorithm explained above are implemented as a loop to be run until the tolerance level is met, at most nmax times.
End of explanation
x1 = 0
x2 = 4
function = exponential_function
xzero = bisection(function, x1, x2, tol=1e-3, nmax=20, silent=True)
print ("The root of exponential_function between %.2f and %.2f is %f" % (x1, x2, xzero))
print ("The value of the function at the 'root' is %f" % exponential_function(xzero))
Explanation: We use the bisection method to find the root of the $exponential_function$ defined above
End of explanation
x1 = 0
x2 = 3
function = np.cos
root = bisection(function, 0, 3, tol=1e-6, nmax=30)
print ("The root of cos between %.2f and %.2f is %f" % (x1, x2, root))
Explanation: and of $cos$ between 0 and 3.
End of explanation
def newtonsmethod(func, funcp, xs, tol=1e-6, nmax=10, silent=True):
f = func(xs)
for i in range(nmax):
fp = funcp(xs)
xs = xs - f/fp
f = func(xs)
if silent is False: print(xs, func(xs))
if abs(f) < tol:
return (xs,i+1)
break
if abs(f) > tol:
#print('Max number of iterations reached before convergence')
return (None, -1)
Explanation: Newton's method
The Bisection method is a brute-force method guaranteed to find a root of a continuous function $f$ on an interval $(x_1,x_2)$, if $(x_1,x_2)$ contains a root for $f$. The Bisection method is not very efficient and it requires a search interval that contains only one root.
An alternative is Newton's method (also called the Newton-Raphson method). Consider the graph below. To find the root of the function represented by the blue line, Newton's method starts at a user-defined starting location, $x_0$ (the blue dot) and fits a straight line through the point $(x,y)=(x_0,f(x_0))$ in such a way that the line is tangent to $f(x)$ at $x_0$ (the red line). The intersection of the red line with the horizontal axis is the next estimate $x_1$ of the root of the function (the red dot). This process is repeated until a value of $f(x)$ is found that is sufficiently close to zero (within a specified tolerance), i.e., a straight line is fitted through the point $(x,y)=(x_1,f(x_1))$, tangent to the function, and the the next estimate of the root of the function is taken as the intersection of this line with the horizontal axis, until the value of f at the root estimate is very close to 0.
Unfortunately, it is not guaranteed to always work, as is explained below.
<img src="http://i.imgur.com/tK1EOtD.png" alt="Newton's method on wikipedia">
The equation for a straight line with slope $a$ through the point $x_n,f(x_n)$ is:
$$y = a(x-x_n) + f(x_n)$$
For the line to be tangent to the function $f(x)$ at the point $x=x_n$, the slope $a$ has to equal the derivative of $f(x)$ at $x_n$: $a=f'(x_n)$. The intersection of the line with the horizontal axis is the value of $x$ that results in $y=0$ and this is the next estimate $x_{n+1}$ of the root of the function. In order to find this estimate we need to solve:
$$0 = f'(x_n) (x_{n+1}-x_n) + f(x_n)$$
which gives
$$\boxed{x_{n+1} = x_n - f(x_n)/f'(x_n)}$$
The search for the root is completed when $|f(x)|$ is below a user-specified tolerance.
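To make the update rule concrete, here is a single hand-computed Newton step for the example function used below, starting from $x_0=1$ (a sketch; np is numpy, imported at the top of this notebook):
x0 = 1.0
f0 = 0.5 - np.exp(-x0)    # f(x0)
fp0 = np.exp(-x0)         # f'(x0)
print(x0 - f0 / fp0)      # about 0.641, already close to the true root ln(2) ~ 0.693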
An animated illustration of Newton's method can be found on Wikipedia:
<img src="http://upload.wikimedia.org/wikipedia/commons/e/e0/NewtonIteration_Ani.gif" alt="Newton's method on wikipedia" width="400px">
Newton's method is guaranteed to find the root of a function if the function is well behaved and the search starts close enough to the root. If those two conditions are met, Newton's method is very fast, but if they are not met, the method is not guaranteed to converge to the root.
Another disadvantage of Newton's method is that we need to define the derivative of the function.
Note that the function value does not necessarily go down at every iteration (as illustated in the animation above).
Newton's Method Implementation
We implement Newton's method as function newtonsmethod that takes in the following arguments:
The function for which to find the root.
The derivative of the function.
The starting point of the search $x_0$.
The tolerance tol used as a stopping criterion, by default $10^{-6}$.
The maximum number of iterations nmax, by default 10.
newtonsmethod returns the value of $x$ where $f(x)$ is (approximately) zero or prints a message if the maximum number of iterations is reached before the tolerance is met.
End of explanation
def fp(x):
return np.exp(-x)
xs = 1
func = exponential_function
funcp = fp
tol = 1e-6
nmax = 10
xzero, iterations = newtonsmethod(func, funcp, xs, tol, nmax)
print("First Example")
if xzero != None:
print("Starting search from x = %.2f" % xs)
print("root at x = %f, exponential_function(root) = %f" % (xzero, exponential_function(xzero)))
print("tolerance reached in %d iterations" % iterations)
else:
print("Starting search from x = %.2f" % xs)
print('Max number of iterations reached before convergence')
print("")
xs = 4
nmax = 40
xzero, iterations = newtonsmethod(func, funcp, xs, nmax=nmax)
print("Second Example")
if xzero != None:
print("Starting search from x = %.2f" % xs)
print("root at x = %f, exponential_function(root) = %f" % (xzero, exponential_function(xzero)))
print("tolerance reached in %d iterations" % iterations)
else:
print("Starting search from x = %.2f" % xs)
print('Max number of iterations reached before convergence')
Explanation: We test newtonsmethod by finding the root of $f(x)=\frac{1}{2}-\text{e}^{-x}$ using $x_0=1$ as the starting point of the search. How many iterations do we need if we start at $x=4$?
End of explanation
xs = 1
xzero, iterations = newtonsmethod(func=np.sin, funcp=np.cos, xs=1)
if xzero != None:
print("Starting search from x = %.2f" % xs)
print("root at x = %f, sin(root) = %e" % (xzero, np.sin(xzero)))
print("tolerance reached in %d iterations" % iterations)
print("root / pi = %f" % (xzero / np.pi))
else:
print("Starting search from x = %.2f" % xs)
print('Max number of iterations reached before convergence')
print("")
xs = 1.5
xzero, iterations = newtonsmethod(func=np.sin, funcp=np.cos, xs=1.5)
if xzero != None:
print("Starting search from x = %.2f" % xs)
print("root at x = %f, sin(root) = %e" % (xzero, np.sin(xzero)))
print("tolerance reached in %d iterations" % iterations)
print("root / pi = %f" % (xzero / np.pi))
else:
print("Starting search from x = %.2f" % xs)
print('Max number of iterations reached before convergence')
Explanation: We also demonstrate how newton works by finding the zero of $\sin(x)$, which has many roots: $-2\pi$, $-\pi$, $0$, $pi$, $2\pi$, etc. Which root do we find when starting at $x=1$ and which root do we find when starting at $x=1.5$?
End of explanation
from scipy.optimize import fsolve
def h(x):
return np.log(x ** 2) - 2
x0 = fsolve(h, 1)
print("x_root = %f, function value(root) = %e" % (x0, h(x0)))
Explanation: Root finding methods in scipy
The package scipy.optimize includes a number of routines for the minimization of a function and for finding the zeros of a function. Among them, bisect, newton, and fsolve. fsolve has the additional advantage of also estimating the derivative of the function. fsolve can be used to find an (approximate) answer for a system of non-linear equations.
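For comparison, the same root can be obtained directly from the library routines named above (a sketch using the exponential_function defined earlier):
from scipy.optimize import bisect, newton
print(bisect(exponential_function, 0, 4))   # bisection on the interval [0, 4]
print(newton(exponential_function, 1))      # Newton/secant iteration from x0 = 1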
fsolve
We demonstrate how to use the fsolve method of the scipy.optimize package by finding the value for which $\ln(x^2)=2$
End of explanation
from scipy.optimize import fsolve
def g(x):
return x + 2 * np.cos(x)
x = np.linspace(-2, 4, 100)
x0 = fsolve(g, 1)
plt.plot(x, g(x))
plt.plot(x0, g(x0), 'ro')
plt.axhline(y=0, color='r')
Explanation: Plotting the root
We plot the function $f(x)=x+2\cos(x)$ for $x$ going from -2 to 4, and on the same graph, we also plot a red dot at the location where $f(x)=0$.
End of explanation |
570 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step6: Game Tree Search
We start with defining the abstract class Game, for turn-taking n-player games. We rely on, but do not define yet, the concept of a state of the game; we'll see later how individual games define states. For now, all we require is that a state has a state.to_move attribute, which gives the name of the player whose turn it is. ("Name" will be something like 'X' or 'O' for tic-tac-toe.)
We also define play_game, which takes a game and a dictionary of {player_name
Step9: Minimax-Based Game Search Algorithms
We will define several game search algorithms. Each takes two inputs, the game we are playing and the current state of the game, and returns a (value, move) pair, where value is the utility that the algorithm computes for the player whose turn it is to move, and move is the move itself.
First we define minimax_search, which exhaustively searches the game tree to find an optimal move (assuming both players play optimally), and alphabeta_search, which does the same computation, but prunes parts of the tree that could not possibly have an effect on the optimal move.
Step16: A Simple Game
Step18: States in tic-tac-toe (and other games) will be represented as a Board, which is a subclass of defaultdict that in general will consist of {(x, y)
Step20: Players
We need an interface for players. I'll represent a player as a callable that will be passed two arguments
Step21: Playing a Game
We're ready to play a game. I'll set up a match between a random_player (who chooses randomly from the legal moves) and a player(alphabeta_search) (who makes the optimal alpha-beta move; practical for tic-tac-toe, but not for large games). The player(alphabeta_search) will never lose, but if random_player is lucky, it will be a tie.
Step22: The alpha-beta player will never lose, but sometimes the random player can stumble into a draw. When two optimal (alpha-beta or minimax) players compete, it will always be a draw
Step24: Connect Four
Connect Four is a variant of tic-tac-toe, played on a larger (7 x 6) board, and with the restriction that in any column you can only play in the lowest empty square in the column.
Step26: Transposition Tables
By treating the game tree as a tree, we can arrive at the same state through different paths, and end up duplicating effort. In state-space search, we kept a table of reached states to prevent this. For game-tree search, we can achieve the same effect by applying the @cache decorator to the min_value and max_value functions. We'll use the suffix _tt to indicate a function that uses these transposition tables.
Step28: For alpha-beta search, we can still use a cache, but it should be based just on the state, not on whatever values alpha and beta have.
Step32: Heuristic Cutoffs
Step33: Monte Carlo Tree Search
Step34: Heuristic Search Algorithms | Python Code:
from collections import namedtuple, Counter, defaultdict
import random
import math
import functools
cache = functools.lru_cache(10**6)
class Game:
A game is similar to a problem, but it has a terminal test instead of
a goal test, and a utility for each terminal state. To create a game,
subclass this class and implement `actions`, `result`, `is_terminal`,
and `utility`. You will also need to set the .initial attribute to the
initial state; this can be done in the constructor.
def actions(self, state):
Return a collection of the allowable moves from this state.
raise NotImplementedError
def result(self, state, move):
Return the state that results from making a move from a state.
raise NotImplementedError
def is_terminal(self, state):
Return True if this is a final state for the game.
return not self.actions(state)
def utility(self, state, player):
Return the value of this final state to player.
raise NotImplementedError
def play_game(game, strategies: dict, verbose=False):
Play a turn-taking game. `strategies` is a {player_name: function} dict,
where function(state, game) is used to get the player's move.
state = game.initial
while not game.is_terminal(state):
player = state.to_move
move = strategies[player](game, state)
state = game.result(state, move)
if verbose:
print('Player', player, 'move:', move)
print(state)
return state
Explanation: Game Tree Search
We start with defining the abstract class Game, for turn-taking n-player games. We rely on, but do not define yet, the concept of a state of the game; we'll see later how individual games define states. For now, all we require is that a state has a state.to_move attribute, which gives the name of the player whose turn it is. ("Name" will be something like 'X' or 'O' for tic-tac-toe.)
We also define play_game, which takes a game and a dictionary of {player_name: strategy_function} pairs, and plays out the game, on each turn checking state.to_move to see whose turn it is, and then getting the strategy function for that player and applying it to the game and the state to get a move.
End of explanation
def minimax_search(game, state):
Search game tree to determine best move; return (value, move) pair.
player = state.to_move
def max_value(state):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = -infinity, None
for a in game.actions(state):
v2, _ = min_value(game.result(state, a))
if v2 > v:
v, move = v2, a
return v, move
def min_value(state):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = +infinity, None
for a in game.actions(state):
v2, _ = max_value(game.result(state, a))
if v2 < v:
v, move = v2, a
return v, move
return max_value(state)
infinity = math.inf
def alphabeta_search(game, state):
Search game to determine best action; use alpha-beta pruning.
As in [Figure 5.7], this version searches all the way to the leaves.
player = state.to_move
def max_value(state, alpha, beta):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = -infinity, None
for a in game.actions(state):
v2, _ = min_value(game.result(state, a), alpha, beta)
if v2 > v:
v, move = v2, a
alpha = max(alpha, v)
if v >= beta:
return v, move
return v, move
def min_value(state, alpha, beta):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = +infinity, None
for a in game.actions(state):
v2, _ = max_value(game.result(state, a), alpha, beta)
if v2 < v:
v, move = v2, a
beta = min(beta, v)
if v <= alpha:
return v, move
return v, move
return max_value(state, -infinity, +infinity)
Explanation: Minimax-Based Game Search Algorithms
We will define several game search algorithms. Each takes two inputs, the game we are playing and the current state of the game, and returns a (value, move) pair, where value is the utility that the algorithm computes for the player whose turn it is to move, and move is the move itself.
First we define minimax_search, which exhaustively searches the game tree to find an optimal move (assuming both players play optimally), and alphabeta_search, which does the same computation, but prunes parts of the tree that could not possibly have an effect on the optimal move.
End of explanation
class TicTacToe(Game):
Play TicTacToe on an `height` by `width` board, needing `k` in a row to win.
'X' plays first against 'O'.
def __init__(self, height=3, width=3, k=3):
self.k = k # k in a row
self.squares = {(x, y) for x in range(width) for y in range(height)}
self.initial = Board(height=height, width=width, to_move='X', utility=0)
def actions(self, board):
Legal moves are any square not yet taken.
return self.squares - set(board)
def result(self, board, square):
Place a marker for current player on square.
player = board.to_move
board = board.new({square: player}, to_move=('O' if player == 'X' else 'X'))
win = k_in_row(board, player, square, self.k)
board.utility = (0 if not win else +1 if player == 'X' else -1)
return board
def utility(self, board, player):
Return the value to player; 1 for win, -1 for loss, 0 otherwise.
return board.utility if player == 'X' else -board.utility
def is_terminal(self, board):
A board is a terminal state if it is won or there are no empty squares.
return board.utility != 0 or len(self.squares) == len(board)
def display(self, board): print(board)
def k_in_row(board, player, square, k):
True if player has k pieces in a line through square.
def in_row(x, y, dx, dy): return 0 if board[x, y] != player else 1 + in_row(x + dx, y + dy, dx, dy)
return any(in_row(*square, dx, dy) + in_row(*square, -dx, -dy) - 1 >= k
for (dx, dy) in ((0, 1), (1, 0), (1, 1), (1, -1)))
Explanation: A Simple Game: Tic-Tac-Toe
We have the notion of an abstract game, we have some search functions; now it is time to define a real game; a simple one, tic-tac-toe. Moves are (x, y) pairs denoting squares, where (0, 0) is the top left, and (2, 2) is the bottom right (on a board of size height=width=3).
End of explanation
class Board(defaultdict):
A board has the player to move, a cached utility value,
and a dict of {(x, y): player} entries, where player is 'X' or 'O'.
empty = '.'
off = '#'
def __init__(self, width=8, height=8, to_move=None, **kwds):
self.__dict__.update(width=width, height=height, to_move=to_move, **kwds)
def new(self, changes: dict, **kwds) -> 'Board':
"Given a dict of {(x, y): contents} changes, return a new Board with the changes."
board = Board(width=self.width, height=self.height, **kwds)
board.update(self)
board.update(changes)
return board
def __missing__(self, loc):
x, y = loc
if 0 <= x < self.width and 0 <= y < self.height:
return self.empty
else:
return self.off
def __hash__(self):
return hash(tuple(sorted(self.items()))) + hash(self.to_move)
def __repr__(self):
def row(y): return ' '.join(self[x, y] for x in range(self.width))
return '\n'.join(map(row, range(self.height))) + '\n'
Explanation: States in tic-tac-toe (and other games) will be represented as a Board, which is a subclass of defaultdict that in general will consist of {(x, y): contents} pairs, for example {(0, 0): 'X', (1, 1): 'O'} might be the state of the board after two moves. Besides the contents of squares, a board also has some attributes:
- .to_move to name the player whose move it is;
- .width and .height to give the size of the board (both 3 in tic-tac-toe, but other numbers in related games);
- possibly other attributes, as specified by keywords.
As a defaultdict, the Board class has a __missing__ method, which returns empty for squares that have no been assigned but are within the width × height boundaries, or off otherwise. The class has a __hash__ method, so instances can be stored in hash tables.
End of explanation
def random_player(game, state): return random.choice(list(game.actions(state)))
def player(search_algorithm):
A game player who uses the specified search algorithm
return lambda game, state: search_algorithm(game, state)[1]
Explanation: Players
We need an interface for players. I'll represent a player as a callable that will be passed two arguments: (game, state) and will return a move.
The function player creates a player out of a search algorithm, but you can create your own players as functions, as is done with random_player below:
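For example, a hand-written strategy is just another callable with the same (game, state) signature (a sketch; the name is ours):
def first_move_player(game, state):
    return min(game.actions(state))   # deterministically pick the 'smallest' legal move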
End of explanation
play_game(TicTacToe(), dict(X=random_player, O=player(alphabeta_search)), verbose=True).utility
Explanation: Playing a Game
We're ready to play a game. I'll set up a match between a random_player (who chooses randomly from the legal moves) and a player(alphabeta_search) (who makes the optimal alpha-beta move; practical for tic-tac-toe, but not for large games). The player(alphabeta_search) will never lose, but if random_player is lucky, it will be a tie.
End of explanation
play_game(TicTacToe(), dict(X=player(alphabeta_search), O=player(minimax_search)), verbose=True).utility
Explanation: The alpha-beta player will never lose, but sometimes the random player can stumble into a draw. When two optimal (alpha-beta or minimax) players compete, it will always be a draw:
End of explanation
class ConnectFour(TicTacToe):
def __init__(self): super().__init__(width=7, height=6, k=4)
def actions(self, board):
In each column you can play only the lowest empty square in the column.
return {(x, y) for (x, y) in self.squares - set(board)
if y == board.height - 1 or (x, y + 1) in board}
play_game(ConnectFour(), dict(X=random_player, O=random_player), verbose=True).utility
Explanation: Connect Four
Connect Four is a variant of tic-tac-toe, played on a larger (7 x 6) board, and with the restriction that in any column you can only play in the lowest empty square in the column.
End of explanation
def minimax_search_tt(game, state):
Search game to determine best move; return (value, move) pair.
player = state.to_move
@cache
def max_value(state):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = -infinity, None
for a in game.actions(state):
v2, _ = min_value(game.result(state, a))
if v2 > v:
v, move = v2, a
return v, move
@cache
def min_value(state):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = +infinity, None
for a in game.actions(state):
v2, _ = max_value(game.result(state, a))
if v2 < v:
v, move = v2, a
return v, move
return max_value(state)
Explanation: Transposition Tables
By treating the game tree as a tree, we can arrive at the same state through different paths, and end up duplicating effort. In state-space search, we kept a table of reached states to prevent this. For game-tree search, we can achieve the same effect by applying the @cache decorator to the min_value and max_value functions. We'll use the suffix _tt to indicate a function that uses these transposition tables.
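As a minimal illustration of the idea (not part of the original notebook), a cache memoizes a function on its arguments, so a repeated argument is evaluated only once — this is why Board defines __hash__:
from functools import cache   # already available in this notebook, since @cache is used above

@cache
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(35)   # fast, because intermediate results are reused rather than recomputed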
End of explanation
def cache1(function):
"Like lru_cache(None), but only considers the first argument of function."
cache = {}
def wrapped(x, *args):
if x not in cache:
cache[x] = function(x, *args)
return cache[x]
return wrapped
def alphabeta_search_tt(game, state):
Search game to determine best action; use alpha-beta pruning.
As in [Figure 5.7], this version searches all the way to the leaves.
player = state.to_move
@cache1
def max_value(state, alpha, beta):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = -infinity, None
for a in game.actions(state):
v2, _ = min_value(game.result(state, a), alpha, beta)
if v2 > v:
v, move = v2, a
alpha = max(alpha, v)
if v >= beta:
return v, move
return v, move
@cache1
def min_value(state, alpha, beta):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = +infinity, None
for a in game.actions(state):
v2, _ = max_value(game.result(state, a), alpha, beta)
if v2 < v:
v, move = v2, a
beta = min(beta, v)
if v <= alpha:
return v, move
return v, move
return max_value(state, -infinity, +infinity)
%time play_game(TicTacToe(), {'X':player(alphabeta_search_tt), 'O':player(minimax_search_tt)})
%time play_game(TicTacToe(), {'X':player(alphabeta_search), 'O':player(minimax_search)})
Explanation: For alpha-beta search, we can still use a cache, but it should be based just on the state, not on whatever values alpha and beta have.
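To see the point of cache1, here is a small sketch using the cache1 defined above: it keys only on the first argument, so two calls that differ only in the alpha/beta bounds share one cache entry (the function and values are made up for illustration):
@cache1
def value_of(state, bound):
    return state * state   # stand-in computation

value_of(3, 10), value_of(3, 99)   # the second call reuses the entry stored for state == 3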
End of explanation
def cutoff_depth(d):
A cutoff function that searches to depth d.
return lambda game, state, depth: depth > d
def h_alphabeta_search(game, state, cutoff=cutoff_depth(6), h=lambda s, p: 0):
Search game to determine best action; use alpha-beta pruning.
As in [Figure 5.7], this version searches all the way to the leaves.
player = state.to_move
@cache1
def max_value(state, alpha, beta, depth):
if game.is_terminal(state):
return game.utility(state, player), None
if cutoff(game, state, depth):
return h(state, player), None
v, move = -infinity, None
for a in game.actions(state):
v2, _ = min_value(game.result(state, a), alpha, beta, depth+1)
if v2 > v:
v, move = v2, a
alpha = max(alpha, v)
if v >= beta:
return v, move
return v, move
@cache1
def min_value(state, alpha, beta, depth):
if game.is_terminal(state):
return game.utility(state, player), None
if cutoff(game, state, depth):
return h(state, player), None
v, move = +infinity, None
for a in game.actions(state):
v2, _ = max_value(game.result(state, a), alpha, beta, depth + 1)
if v2 < v:
v, move = v2, a
beta = min(beta, v)
if v <= alpha:
return v, move
return v, move
return max_value(state, -infinity, +infinity, 0)
%time play_game(TicTacToe(), {'X':player(h_alphabeta_search), 'O':player(h_alphabeta_search)})
%time play_game(ConnectFour(), {'X':player(h_alphabeta_search), 'O':random_player}, verbose=True).utility
%time play_game(ConnectFour(), {'X':player(h_alphabeta_search), 'O':player(h_alphabeta_search)}, verbose=True).utility
class CountCalls:
Delegate all attribute gets to the object, and count them in ._counts
def __init__(self, obj):
self._object = obj
self._counts = Counter()
def __getattr__(self, attr):
"Delegate to the original object, after incrementing a counter."
self._counts[attr] += 1
return getattr(self._object, attr)
def report(game, searchers):
for searcher in searchers:
game = CountCalls(game)
searcher(game, game.initial)
print('Result states: {:7,d}; Terminal tests: {:7,d}; for {}'.format(
game._counts['result'], game._counts['is_terminal'], searcher.__name__))
report(TicTacToe(), (alphabeta_search_tt, alphabeta_search, h_alphabeta_search, minimax_search_tt))
Explanation: Heuristic Cutoffs
End of explanation
class Node:
    "A node in a Monte Carlo search tree, tracking its parent, children and visit statistics."
    def __init__(self, parent=None, U=0, N=0):
        self.__dict__.update(parent=parent, U=U, N=N, children=[])

def mcts(state, game, N=1000):
    "Monte Carlo tree search from `state`; left as an unfinished stub in this notebook."
Explanation: Monte Carlo Tree Search
End of explanation
t = CountCalls(TicTacToe())
play_game(t, dict(X=player(minimax_search), O=player(minimax_search)), verbose=True)
t._counts
def tactical_move(board, player, opponent, squares):
    "Sketch of a rule-based player: take the first tactic that works for us, otherwise block the opponent."
    # `three`, `fork`, `center`, `opposite_corner` and `corner` are tactic predicates
    # that are not defined in this notebook; this cell is an outline only.
    for tactic in (three, fork, center, opposite_corner, corner, any):
        for s in squares:
            if tactic(board, s, player): return s
        for s in squares:
            if tactic(board, s, opponent): return s
def ucb(U, N, C=2**0.5, parentN=100):
return round(U/N + C * math.sqrt(math.log(parentN)/N), 2)
{C: (ucb(60, 79, C), ucb(1, 10, C), ucb(2, 11, C))
for C in (1.4, 1.5)}
def ucb(U, N, parentN=100, C=2):
return U/N + C * math.sqrt(math.log(parentN)/N)
C = 1.4
class Node:
def __init__(self, name, children=(), U=0, N=0, parent=None, p=0.5):
self.__dict__.update(name=name, U=U, N=N, parent=parent, children=children, p=p)
for c in children:
c.parent = self
def __repr__(self):
return '{}:{}/{}={:.0%}{}'.format(self.name, self.U, self.N, self.U/self.N, self.children)
def select(n):
if n.children:
return select(max(n.children, key=ucb))
else:
return n
def back(n, amount):
if n:
n.N += 1
n.U += amount
back(n.parent, 1 - amount)
def one(root):
n = select(root)
amount = int(random.uniform(0, 1) < n.p)
back(n, amount)
def ucb(n):
return (float('inf') if n.N == 0 else
n.U / n.N + C * math.sqrt(math.log(n.parent.N)/n.N))
tree = Node('root', [Node('a', p=.8, children=[Node('a1', p=.05),
Node('a2', p=.25,
children=[Node('a2a', p=.7), Node('a2b')])]),
Node('b', p=.5, children=[Node('b1', p=.6,
children=[Node('b1a', p=.3), Node('b1b')]),
Node('b2', p=.4)]),
Node('c', p=.1)])
for i in range(100):
one(tree);
for c in tree.children: print(c)
'select', select(tree), 'tree', tree
us = (100, 50, 25, 10, 5, 1)
infinity = float('inf')
@lru_cache(None)
def f1(n, denom):
return (0 if n == 0 else
infinity if n < 0 or not denom else
min(1 + f1(n - denom[0], denom),
f1(n, denom[1:])))
@lru_cache(None)
def f2(n, denom):
@lru_cache(None)
def f(n):
return (0 if n == 0 else
infinity if n < 0 else
1 + min(f(n - d) for d in denom))
return f(n)
@lru_cache(None)
def f3(n, denom):
return (0 if n == 0 else
infinity if n < 0 or not denom else
min(k + f2(n - k * denom[0], denom[1:])
for k in range(1 + n // denom[0])))
def g(n, d=us): return f1(n, d), f2(n, d), f3(n, d)
n = 12345
%time f1(n, us)
%time f2(n, us)
%time f3(n, us)
Explanation: Heuristic Search Algorithms
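The cell above defines three memoized (dynamic-programming) formulations of the minimum-coin-change problem and times them on n = 12345; as a quick, hypothetical sanity check they can also be compared on a small amount:
print(g(63))   # all three implementations should return the same coin count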
End of explanation |
571 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Compare-weighted-and-unweighted-mean-temperature" data-toc-modified-id="Compare-weighted-and-unweighted-mean-temperature-1"><span class="toc-item-num">1 </span>Compare weighted and unweighted mean temperature</a></span><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Data" data-toc-modified-id="Data-1.0.1"><span class="toc-item-num">1.0.1 </span>Data</a></span></li><li><span><a href="#Creating-weights" data-toc-modified-id="Creating-weights-1.0.2"><span class="toc-item-num">1.0.2 </span>Creating weights</a></span></li><li><span><a href="#Weighted-mean" data-toc-modified-id="Weighted-mean-1.0.3"><span class="toc-item-num">1.0.3 </span>Weighted mean</a></span></li><li><span><a href="#Plot
Step1: Data
Load the data, convert to celsius, and resample to daily values
Step2: Plot the first timestep
Step3: Creating weights
For a rectangular grid the cosine of the latitude is proportional to the grid cell area.
Step4: Weighted mean
Step5: Plot | Python Code:
%matplotlib inline
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Compare-weighted-and-unweighted-mean-temperature" data-toc-modified-id="Compare-weighted-and-unweighted-mean-temperature-1"><span class="toc-item-num">1 </span>Compare weighted and unweighted mean temperature</a></span><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Data" data-toc-modified-id="Data-1.0.1"><span class="toc-item-num">1.0.1 </span>Data</a></span></li><li><span><a href="#Creating-weights" data-toc-modified-id="Creating-weights-1.0.2"><span class="toc-item-num">1.0.2 </span>Creating weights</a></span></li><li><span><a href="#Weighted-mean" data-toc-modified-id="Weighted-mean-1.0.3"><span class="toc-item-num">1.0.3 </span>Weighted mean</a></span></li><li><span><a href="#Plot:-comparison-with-unweighted-mean" data-toc-modified-id="Plot:-comparison-with-unweighted-mean-1.0.4"><span class="toc-item-num">1.0.4 </span>Plot: comparison with unweighted mean</a></span></li></ul></li></ul></li></ul></div>
Compare weighted and unweighted mean temperature
Author: Mathias Hauser
We use the air_temperature example dataset to calculate the area-weighted temperature over its domain. This dataset has a regular latitude/longitude grid, thus the grid cell area decreases towards the poles. For this grid we can use the cosine of the latitude as a proxy for the grid cell area.
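As a rough sketch of what this means (not part of the original notebook), the weighted mean is simply sum(w * T) / sum(w) with w = cos(latitude):
import numpy as np
lat = np.array([0.0, 30.0, 60.0])      # hypothetical latitudes
temp = np.array([25.0, 15.0, 5.0])     # hypothetical temperatures
w = np.cos(np.deg2rad(lat))
print((w * temp).sum() / w.sum())      # area-weighted mean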
End of explanation
ds = xr.tutorial.load_dataset("air_temperature")
# to celsius
air = ds.air - 273.15
# resample from 6-hourly to daily values
air = air.resample(time="D").mean()
air
Explanation: Data
Load the data, convert to celsius, and resample to daily values
End of explanation
projection = ccrs.LambertConformal(central_longitude=-95, central_latitude=45)
f, ax = plt.subplots(subplot_kw=dict(projection=projection))
air.isel(time=0).plot(transform=ccrs.PlateCarree(), cbar_kwargs=dict(shrink=0.7))
ax.coastlines()
Explanation: Plot the first timestep:
End of explanation
weights = np.cos(np.deg2rad(air.lat))
weights.name = "weights"
weights
Explanation: Creating weights
For a rectangular grid the cosine of the latitude is proportional to the grid cell area.
End of explanation
air_weighted = air.weighted(weights)
air_weighted
weighted_mean = air_weighted.mean(("lon", "lat"))
weighted_mean
Explanation: Weighted mean
End of explanation
weighted_mean.plot(label="weighted")
air.mean(("lon", "lat")).plot(label="unweighted")
plt.legend()
Explanation: Plot: comparison with unweighted mean
Note how the weighted mean temperature is higher than the unweighted.
End of explanation |
572 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
Shenfun - High-Performance Computing platform for the Spectral Galerkin method
<div><img src="https
Step1: Inside the terminal any Python code can be executed and if something is printed it is shown below.
Step2: When in presentation mode
like now, the terminal is not alive. However, this presentation is written with jupyter
<center><img src="https
Step3: Shen's bases with Dirichlet bcs
<p style="margin-bottom
Step4: Shen's bases with Neumann $u'(\pm 1) = 0$
<p style="margin-bottom
Step5: Shen's biharmonic bases $u(\pm 1) = u'(\pm 1) = 0$
<p style="margin-bottom
Step6: Multidimensional tensor product spaces
<p style="margin-bottom
Step7: Basis functions can be created for all bases
Step8: The shenfun Function represents the solution
uh = Function(L0)
$$
u(x) = \sum_{k=0}^{N-1} \hat{u}k \phi{k}(x)
$$
The function evaluated for all quadrature points, ${u(x_j)}_{j=0}^{N-1}$, is an Array
uj = Array(L0)
There is a (fast) backward transform for moving from Function to Array, and a forward transform to go the other way. Note that the Array is not a basis function!
Step9: Operators in shenfun act on basis functions
u is an instance of either TestFunction, TrialFunction or Function
div(u)
grad(u)
curl(u)
Dx(u, 0, 1) (partial derivative in x-direction)
Assembly
project
inner
Step10: Implementation closely matches mathematics
<p style="margin-bottom
Step11: A diagonal stiffness matrix!
Complete Poisson solver with error verification in 1D
Step12: 2D - Implementation still closely matching mathematics
Step13: ?
A is a list of two TPMatrix objects???
TPMatrix is a Tensor Product matrix
A TPMatrix is the outer product of smaller matrices (2 in 2D, 3 in 3D etc).
Consider the inner product
Step14: 3D Poisson (with MPI and Fourier x 2)
Step15: Contour plot of slice with constant y
Step16: Run with MPI distribution of arrays
Here we would normally run from a bash shell
<p style="margin-bottom
Step17: Note that Fourier bases are especially attractive because of features easily handled with MPI
Step18: Mixed tensor product spaces
Solve several equations simultaneously
Coupled equations
Block matrices and vectors
Tensor spaces of vectors, like velocity $u \in [\mathbb{R}^3]^3$
Stokes equations
lid-driven cavity - coupled solver
<p style="margin-bottom
Step19: Implementation Stokes - matrices and solve
Step20: Block matrix M
$$
M =
\begin{bmatrix}
A[0]+A[1] & 0 & G[0] \
0 & A[2]+A[3] & G[1] \
D[0] & D[1] & 0
\end{bmatrix}
$$
where $D = G^T$ for the Legendre basis, making $M$ symmetric. For Chebyshev $M$ will not be symmetric.
Solver through scipy.sparse.linalg
For Navier-Stokes of the lid-driven cavity, see https | Python Code:
print('hello world icsca')
Explanation: <center>
Shenfun - High-Performance Computing platform for the Spectral Galerkin method
<div><img src="https://rawcdn.githack.com/spectralDNS/spectralutilities/f3419a3e6c40dad55be5dcca51f6e0e21713dd90/figures/Chebyshev_Polynomials_of_the_First_Kind.svg" width="300"></div>
<div class="sl-block" style="height: auto; width: 600px;">
<div>
<p><center style="font-size:1.2em">Professor Mikael Mortensen</p>
<p><center>Department of Mathematics, University of Oslo</p>
<p><center>Presented at the International Conference on Scientific Computing and Applications (ICSCA), Xiamen, China, 29/5 - 2019</p>
</div>
</div>
Shenfun - facts
Shenfun is named in honour of <strong>Professor Jie Shen</strong> for his seminal work on the spectral Galerkin method:-)
Shenfun is a high performance computing platform for solving partial differential equations (PDEs) with the spectral Galerkin method (with numerical integration).
Shenfun has been run with 65,000 processors on a Cray XC40.
Shenfun is a high-level <strong>Python</strong> package originally developed for large-scale pseudo-spectral turbulence simulations.
<img src="https://rawcdn.githack.com/spectralDNS/spectralutilities/473129742f0b5f8d57e8c647809272c0ced99a45/movies/RB_200k_small.png" style="float:left" width="300"> <img src="https://rawcdn.githack.com/spectralDNS/spectralutilities/473129742f0b5f8d57e8c647809272c0ced99a45/movies/isotropic_cropped.gif" style="float:right" width="200">
<p style="clear: both;">
# Python is a scripting language
<p style="margin-bottom:1cm;">
No compilation - just execute. Much like MATLAB. High-level coding very popular in the scientific computing community.
In this presentation this is a Python terminal:
End of explanation
print(2+2)
Explanation: Inside the terminal any Python code can be executed and if something is printed it is shown below.
End of explanation
from shenfun import *
N = 8
C = FunctionSpace(N, 'Chebyshev')
L = FunctionSpace(N, 'Legendre')
x, w = C.points_and_weights()
print(np.vstack((x, w)).T)
Explanation: When in presentation mode
like now, the terminal is not alive. However, this presentation is written with jupyter
<center><img src="https://rawcdn.githack.com/spectralDNS/spectralutilities/a6cccf4e2959c13cd4b7a6cf9d092b0ef11e7d5e/figures/jupyter.jpg" width=100></center>
and if opened in active mode, then all boxes like the one below would be live and active and ready to execute any Python code:
If interested (it is really not necessary), then
Open https://github.com/spectralDNS/shenfun/ and press the launch-binder button.
Wait for binder to launch and choose the Shenfun presentation.ipynb file to get a live document. There are also some other demos there written in jupyter.
<center>
<div><img src="https://gifimage.net/wp-content/uploads/2017/09/ajax-loading-gif-transparent-background-2.gif" width="100"></div>
<div class="sl-block" style="height: auto; width: 500px;">
<div>
<p><center style="font-size:1.2em">Meanwhile (may take a few minutes and really not necessary) I'll go through some background material required for understanding how <strong>shenfun</strong> works 😀</p>
</div>
</div>
The Spectral Galerkin method (in a nutshell)
approximates solutions $u(x)$ using global <strong>trial</strong> functions $\phi_k(x)$ and unknown expansion coefficients $\hat{u}_k$
$$
u(x) = \sum_{k=0}^{N-1}\hat{u}_k \phi_k(x)
$$
Multidimensional solutions are formed from outer products of 1D bases
$$
u(x, y) = \sum_{k=0}^{N_0-1}\sum_{l=0}^{N_1-1}\hat{u}_{kl} \phi_{kl}(x, y)\quad \text{ or }\quad
u(x, y, z) = \sum_{k=0}^{N_0-1}\sum_{l=0}^{N_1-1} \sum_{m=0}^{N_2-1}\hat{u}_{klm} \phi_{klm}(x, y, z)
$$
where, for example
$$
\begin{align}
\phi_{kl}(x, y) &= T_k(x) L_l(y)\\
\phi_{klm}(x, y, z) &= T_k(x) L_l(y) \exp(\text{i}mz)
\end{align}
$$
$T_k$ and $L_k$ are Chebyshev and Legendre polynomials.
The Spectral Galerkin method
solves PDEs, like Poisson's equation
\begin{align}
\nabla^2 u(x) &= f(x), \quad x \in [-1, 1] \\
u(\pm 1) &= 0
\end{align}
using variational forms by the <strong>method of weighted residuals</strong>. I.e., multiply PDE by a test function $v$ and integrate over the domain. For Poisson this leads to the problem:
Find $u \in H^1_0$ such that
$$(\nabla u, \nabla v)_w^N = -(f, v)_w^N \quad \forall v \in H^1_0$$
Here $(u, v)_w^{N}$ is a weighted inner product and $v(=\phi_j)$ is a <strong>test</strong> function. Note that test and trial functions are the same for the Galerkin method.
Weighted inner products
The weighted inner product is defined as
$$
(u, v)_w = \int_{\Omega} u \overline{v} w \, d\Omega,
$$
where $w(\mathbf{x})$ is a weight associated with the chosen basis (different bases have different weights). The overline represents a complex conjugate (for Fourier).
$\Omega$ is a tensor product domain spanned by the chosen 1D bases.
In Shenfun quadrature is used for the integrals
1D with Chebyshev basis:
$$
(u, v)_w^N = \sum_{i=0}^{N-1} u(x_i) v(x_i) \omega_i \approx \int_{-1}^1 \frac{u v}{\sqrt{1-x^2}} \, {dx},
$$
where $\{\omega_i\}_{i=0}^{N-1}$ are the quadrature weights associated with the chosen basis and quadrature rule. The associated quadrature points are denoted as $\{x_i\}_{i=0}^{N-1}$.
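As a small check (an illustration, not from the original slides), this quadrature sum can be evaluated directly with the points and weights returned by C.points_and_weights() above, here for two arbitrary functions:
import numpy as np
xj, wj = C.points_and_weights()
u = lambda x: x**2        # arbitrary choices for illustration
v = lambda x: 1 - x**2
print(np.sum(u(xj) * v(xj) * wj))   # approximates the weighted inner product (u, v)_w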
2D with mixed Chebyshev-Fourier:
$$
(u, v)_w^N = \int_{-1}^1\int_{0}^{2\pi} \frac{u \overline{v}}{2\pi\sqrt{1-x^2}} \, {dxdy} \approx \sum_{i=0}^{N_0-1}\sum_{j=0}^{N_1-1} u(x_i, y_j) \overline{v}(x_i, y_j) \omega^{(x)}_i \omega_j^{(y)} ,
$$
Spectral Galerkin solution procedure
Choose function space(s) satisfying correct boundary conditions
Transform PDEs to variational forms with inner products
Assemble variational forms and solve resulting linear algebra systems
Orthogonal bases
<p style="margin-bottom:1cm;">
| Family | Basis | Domain |
| :---: | :---: | :---: |
| Chebyshev | $$\{T_k\}_{k=0}^{N-1}$$ | $$[-1, 1]$$ |
| Legendre | $$\{L_k\}_{k=0}^{N-1}$$ | $$[-1, 1]$$ |
| Fourier | $$\{\exp(\text{i}kx)\}_{k=-N/2}^{N/2-1}$$| $$[0, 2\pi]$$ |
| Hermite | $$\{H_k\}_{k=0}^{N-1}$$ | $$[-\infty, \infty]$$|
| Laguerre | $$\{La_k\}_{k=0}^{N-1}$$ | $$[0, \infty]$$ |
End of explanation
C0 = FunctionSpace(N, 'Chebyshev', bc=(0, 0))
L0 = FunctionSpace(N, 'Legendre', bc=(0, 0))
H0 = FunctionSpace(N, 'Hermite')
La = FunctionSpace(N, 'Laguerre', bc=(0, None))
Explanation: Shen's bases with Dirichlet bcs
<p style="margin-bottom:1cm;">
| family | Basis | Boundary condition |
|-----------|-----------------------|----------|
| Chebyshev | $$\{T_k-T_{k+2}\}_{k=0}^{N-3}$$ | $$u(\pm 1) = 0$$ |
| Legendre | $$\{L_k-L_{k+2}\}_{k=0}^{N-3}$$ | $$u(\pm 1) = 0$$ |
| Hermite | $$\exp(-x^2)\{H_k\}_{k=0}^{N-1}$$ | $$u(\pm \infty) = 0$$ |
| Laguerre | $$\exp(-x/2)\{La_k-La_{k+1}\}_{k=0}^{N-2}$$| $$u(0) = u(\infty) = 0$$ |
End of explanation
CN = FunctionSpace(N, 'Chebyshev', bc={'left': {'N': 0}, 'right': {'N': 0}})
LN = FunctionSpace(N, 'Legendre', bc={'left': {'N': 0}, 'right': {'N': 0}})
Explanation: Shen's bases with Neumann $u'(\pm 1) = 0$
<p style="margin-bottom:1cm;">
| family | Basis |
|-----------|-----------------------|
| Chebyshev | $$\left\{T_k-\frac{k^2}{(k+2)^2}T_{k+2}\right\}_{k=0}^{N-3}$$ |
| Legendre | $$\left\{L_k-\frac{k(k+1)}{(k+2)(k+3)}L_{k+2}\right\}_{k=0}^{N-3}$$ |
End of explanation
CB = FunctionSpace(N, 'Chebyshev', bc=(0, 0, 0, 0))
LB = FunctionSpace(N, 'Legendre', bc=(0, 0, 0, 0))
Explanation: Shen's biharmonic bases $u(\pm 1) = u'(\pm 1) = 0$
<p style="margin-bottom:1cm;">
| family | Basis |
|-----------| :-----------------: |
| Chebyshev | $$\left\{T_k-\frac{2(k+2)}{k+3}T_{k+2}+\frac{k+1}{k+3} T_{k+4}\right\}_{k=0}^{N-5}$$ |
| Legendre | $$\left\{L_k-\frac{2(2k+5)}{(2k+7)}L_{k+2}+\frac{2k+3}{2k+7}L_{k+4}\right\}_{k=0}^{N-5}$$ |
End of explanation
L0 = FunctionSpace(N, 'Legendre', bc=(0, 0))
C0 = FunctionSpace(N, 'Chebyshev', bc=(0, 0))
L1 = FunctionSpace(N, 'Legendre')
LL = TensorProductSpace(comm, (L0, L1)) # comm is MPI.COMM_WORLD
CL = TensorProductSpace(comm, (C0, L1))
Explanation: Multidimensional tensor product spaces
<p style="margin-bottom:0.5cm;">
$$
\begin{align}
L_0 &= \{L_k(x)-L_{k+2}(x)\}_{k=0}^{N-3} \\
C_0 &= \{T_k(x)-T_{k+2}(x)\}_{k=0}^{N-3} \\
L_1 &= \{L_l(y)\}_{l=0}^{N-1} \\
LL(x, y) &= L_0(x) \times L_1(y) \\
CL(x, y) &= C_0(x) \times L_1(y)
\end{align}
$$
End of explanation
L0 = FunctionSpace(N, 'Legendre', bc=(0, 0))
L1 = FunctionSpace(N, 'Legendre')
# 1D
u = TrialFunction(L0)
v = TestFunction(L0)
uh = Function(L0)
uj = Array(L0)
# 2D
LL = TensorProductSpace(comm, (L0, L1)) # comm is MPI.COMM_WORLD
u = TrialFunction(LL)
v = TestFunction(LL)
uh = Function(LL)
uj = Array(LL)
Explanation: Basis functions can be created for all bases
End of explanation
L0 = FunctionSpace(N, 'Legendre', bc=(0, 0))
uh = Function(L0)
uj = Array(L0)
# Move back and forth
uj = uh.backward(uj)
uh = uj.forward(uh)
Explanation: The shenfun Function represents the solution
uh = Function(L0)
$$
u(x) = \sum_{k=0}^{N-1} \hat{u}_k \phi_{k}(x)
$$
The function evaluated for all quadrature points, $\{u(x_j)\}_{j=0}^{N-1}$, is an Array
uj = Array(L0)
There is a (fast) backward transform for moving from Function to Array, and a forward transform to go the other way. Note that the Array is not a basis function!
End of explanation
L0 = FunctionSpace(N, 'Legendre', bc=(0, 0))
L1 = FunctionSpace(N, 'Legendre')
u = TrialFunction(L0)
v = TestFunction(L0)
uh = Function(L0)
du = grad(u) # vector valued expression
g = div(du) # scalar valued expression
c = project(Dx(uh, 0, 1), L1) # project expressions with Functions
Explanation: Operators in shenfun act on basis functions
u is an instance of either TestFunction, TrialFunction or Function
div(u)
grad(u)
curl(u)
Dx(u, 0, 1) (partial derivative in x-direction)
Assembly
project
inner
End of explanation
A = inner(grad(u), grad(v))
dict(A)
print(A.diags().todense())
Explanation: Implementation closely matches mathematics
<p style="margin-bottom:1cm;">
$$
A = (\nabla u, \nabla v)_w^N
$$
End of explanation
# Solve Poisson's equation
from sympy import symbols, sin, lambdify
from shenfun import *
# Use sympy to compute manufactured solution
x = symbols("x")
ue = sin(4*np.pi*x)*(1-x**2) # `ue` is the manufactured solution
fe = ue.diff(x, 2) # `fe` is Poisson's right hand side for `ue`
SD = FunctionSpace(2000, 'L', bc=(0, 0))
u = TrialFunction(SD)
v = TestFunction(SD)
b = inner(v, Array(SD, buffer=fe)) # Array is initialized with `fe`
A = inner(v, div(grad(u)))
uh = Function(SD)
uh = A.solve(b, uh) # Very fast O(N) solver
print(uh.backward()-Array(SD, buffer=ue))
Explanation: A diagonal stiffness matrix!
Complete Poisson solver with error verification in 1D
End of explanation
L0 = FunctionSpace(N, 'Legendre', bc=(0, 0))
F1 = FunctionSpace(N, 'Fourier', dtype='d')
TP = TensorProductSpace(comm, (L0, F1))
u = TrialFunction(TP)
v = TestFunction(TP)
A = inner(grad(u), grad(v))
print(A)
Explanation: 2D - Implementation still closely matching mathematics
End of explanation
A = inner(grad(u), grad(v)) # <- list of two TPMatrices
print(A[0].mats)
print('Or as dense matrices:')
for mat in A[0].mats:
print(mat.diags().todense())
print(A[1].mats)
print(A[1].scale) # l^2
Explanation: ?
A is a list of two TPMatrix objects???
TPMatrix is a Tensor Product matrix
A TPMatrix is the outer product of smaller matrices (2 in 2D, 3 in 3D etc).
Consider the inner product:
$$
\begin{align}
(\nabla u, \nabla v)_w &=\int_{-1}^{1}\int_{0}^{2\pi} \left(\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}\right) \cdot \left(\frac{\partial \overline{v}}{\partial x}, \frac{\partial \overline{v}}{\partial y}\right) \frac{dxdy}{2\pi} \\
(\nabla u, \nabla v)_w &= \int_{-1}^1 \int_{0}^{2\pi} \frac{\partial u}{\partial x}\frac{\partial \overline{v}}{\partial x} \frac{dxdy}{2\pi} + \int_{-1}^1 \int_{0}^{2\pi} \frac{\partial u}{\partial y}\frac{\partial \overline{v}}{\partial y} \frac{dxdy}{2\pi}
\end{align}
$$
which, like A, is a sum of two terms. These two terms are the two TPMatrixes returned by inner above.
Now each one of these two terms can be written as the outer product of two smaller matrices.
Consider the first, inserting for test and trial functions
$$
\begin{align}
v &= \phi_{kl} = (L_k(x)-L_{k+2}(x))\exp(\text{i}ly) \\
u &= \phi_{mn}
\end{align}
$$
The first term becomes
$$
\small
\begin{align}
\int_{-1}^1 \int_{0}^{2\pi} \frac{\partial u}{\partial x}\frac{\partial \overline{v}}{\partial x} \frac{dxdy}{2\pi} &= \underbrace{\int_{-1}^1 \frac{\partial (L_m-L_{m+2})}{\partial x}\frac{\partial (L_k-L_{k+2})}{\partial x} {dx}}_{a_{km}} \underbrace{\int_{0}^{2\pi} \exp(iny) \exp(-ily) \frac{dy}{2\pi}}_{\delta_{ln}} \\
&= a_{km} \delta_{ln}
\end{align}
$$
and the second
$$
\small
\begin{align}
\int_{-1}^1 \int_{0}^{2\pi} \frac{\partial u}{\partial y}\frac{\partial \overline{v}}{\partial y} \frac{dxdy}{2\pi} &= \underbrace{\int_{-1}^1 (L_m-L_{m+2})(L_k-L_{k+2}) {dx}}_{b_{km}} \underbrace{\int_{0}^{2\pi} ln \exp(iny) \exp(-ily)\frac{dy}{2\pi}}_{l^2\delta_{ln}} \\
&= l^2 b_{km} \delta_{ln}
\end{align}
$$
All in all:
$$
(\nabla u, \nabla v)_w = \left(a_{km} \delta_{ln} + l^2 b_{km} \delta_{ln}\right)
$$
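A hedged illustration of this tensor-product structure (assuming small N, and only to show the shapes involved): the dense 2D operator corresponding to one TPMatrix could be assembled as a Kronecker product of its two 1D factors,
import numpy as np
A0 = A[0].mats[0].diags().toarray()   # first 1D factor of the first TPMatrix
B0 = A[0].mats[1].diags().toarray()   # second 1D factor
print(np.kron(A0, B0).shape)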
End of explanation
import matplotlib.pyplot as plt
from sympy import symbols, sin, cos, lambdify
from shenfun import *
# Use sympy to compute manufactured solution
x, y, z = symbols("x,y,z")
ue = (cos(4*x) + sin(2*y) + sin(4*z))*(1-x**2)
fe = ue.diff(x, 2) + ue.diff(y, 2) + ue.diff(z, 2)
C0 = FunctionSpace(32, 'Chebyshev', bc=(0, 0))
F1 = FunctionSpace(32, 'Fourier', dtype='D')
F2 = FunctionSpace(32, 'Fourier', dtype='d')
T = TensorProductSpace(comm, (C0, F1, F2))
u = TrialFunction(T)
v = TestFunction(T)
# Assemble left and right hand
f_hat = inner(v, Array(T, buffer=fe))
A = inner(v, div(grad(u)))
# Solve
solver = chebyshev.la.Helmholtz(*A) # Very fast O(N) solver
u_hat = Function(T)
u_hat = solver(f_hat, u_hat)
assert np.linalg.norm(u_hat.backward()-Array(T, buffer=ue)) < 1e-12
print(u_hat.shape)
Explanation: 3D Poisson (with MPI and Fourier x 2)
End of explanation
X = T.local_mesh()
ua = u_hat.backward()
plt.contourf(X[2][0, 0, :], X[0][:, 0, 0], ua[:, 2], 100)
plt.colorbar()
Explanation: Contour plot of slice with constant y
End of explanation
import subprocess
subprocess.check_output('mpirun -np 4 python poisson3D.py', shell=True)
Explanation: Run with MPI distribution of arrays
Here we would normally run from a bash shell
<p style="margin-bottom:0.5cm;">
<div style="color:black"> <strong>[bash shell] mpirun -np 4 python poisson3D.py </strong> </div>
But since we are in a Jupyter notebook, let's actually do this from Python in a live cell :-)
End of explanation
V = VectorSpace(T)
u = Array(V)
u[:] = np.random.random(u.shape)
w = np.sum(u*u, axis=0)
wh = Function(T)
wh = T.forward(w, wh)
Explanation: Note that Fourier bases are especially attractive because of features easily handled with MPI:
- diagonal matrices
- fast transforms
mpi4py-fft
<p style="margin-bottom:1cm;">
<div class="sl-block is-focused" data-block-type="image" style="min-width: 4px; min-height: 4px; width: 256px; height: 65px; left: 0px; top: 280px;" data-origin-id="e9caa44395810f9c496e1903dd61aba2"><img data-natural-width="1280" data-natural-height="325" style="" data-lazy-loaded="" src="https://s3.amazonaws.com/media-p.slid.es/uploads/92046/images/4253090/BitBucket_SVG_Logo.svg.png"></div>
by Mikael Mortensen and Lisandro Dalcin
Highly configurable Python package for distributing multidimensional arrays and for computing fast Fourier Transforms (FFTs) in parallel. Wraps [FFTW](http://www.fftw.org/) and lies at the core of `shenfun` and distributes large arrays.
<div>
<img src="https://rawcdn.githack.com/spectralDNS/spectralutilities/7777e58e1e81887149d1eaf6053e33769ee4a3f5/figures/pencil2.png" style="float:left" width=320> <img src="https://rawcdn.githack.com/spectralDNS/spectralutilities/7777e58e1e81887149d1eaf6053e33769ee4a3f5/figures/pencil3.png" style="float:right" width=320>
</div>
[![mpi4py-fft](https://anaconda.org/conda-forge/mpi4py-fft/badges/downloads.svg)](https://anaconda.org/conda-forge/mpi4py-fft)
# Nonlinearities and convolutions
All treated with pseudo-spectral techniques. For example
$$
\begin{align}
\hat{w}_k &= \widehat{\mathbf{u} \cdot \mathbf{u}}_k \\
&\text{or} \\
\hat{w}_k &= \widehat{|\nabla f|^2}_k
\end{align}
$$
Nonlinear terms are computed in real space and then forward transformed to spectral space.
3/2-rule or 2/3-rule possible for dealiasing of Fourier.
End of explanation
N = (40, 40)
D0X = FunctionSpace(N[0], 'Legendre', bc=(0, 0)) # For velocity components 0, 1
#D1Y = FunctionSpace(N[1], 'Legendre', bc=(0, 1)) # For velocity component 0
D1Y = FunctionSpace(N[1], 'Legendre', bc=(0, (1-x)**2*(1+x)**2)) # Regularized lid
D0Y = FunctionSpace(N[1], 'Legendre', bc=(0, 0)) # For velocity component 1
PX = FunctionSpace(N[0], 'Legendre')
PY = FunctionSpace(N[1], 'Legendre')
# All required spaces
V0 = TensorProductSpace(comm, (D0X, D1Y)) # velocity component 0
V1 = TensorProductSpace(comm, (D0X, D0Y)) # velocity component 1
Q = TensorProductSpace(comm, (PX, PY), modify_spaces_inplace=True) # pressure
V = VectorSpace([V0, V1]) # Velocity vector (V0 x V1)
VQ = CompositeSpace([V, Q]) # V x Q
PX.slice = lambda: slice(0, PX.N-2) # For inf-sup
PY.slice = lambda: slice(0, PY.N-2) # For inf-sup
# All required test and trial functions
up = TrialFunction(VQ)
vq = TestFunction(VQ)
u, p = up
v, q = vq
Explanation: Mixed tensor product spaces
Solve several equations simultaneously
Coupled equations
Block matrices and vectors
Tensor spaces of vectors, like velocity $u \in [\mathbb{R}^3]^3$
Stokes equations
lid-driven cavity - coupled solver
<p style="margin-bottom:0.25cm;">
$$
\begin{align*}
\nabla^2 \mathbf{u} - \nabla p &= \mathbf{f} \quad \text{in } \Omega, \quad \quad \Omega = [-1, 1]\times[-1, 1]\\
\nabla \cdot \mathbf{u} &= h \quad \text{in } \Omega \\
\int_{\Omega} p dx &= 0 \\
\mathbf{u}(\pm 1, y) = \mathbf{u}(x, -1) = (0, 0) &\text{ and }\mathbf{u}(x, 1) = (1, 0) \text{ or } ((1-x^2)(1+x^2), 0)
\end{align*}
$$
Given appropriate spaces $V$ and $Q$ a variational form reads: find $(\mathbf{u}, p) \in V \times Q$ such that
$$
\begin{equation}
a((\mathbf{u}, p), (\mathbf{v}, q)) = L((\mathbf{v}, q)) \quad \forall (\mathbf{v}, q) \in V \times Q
\end{equation}
$$
where bilinear and linear forms are, respectively
$$
\begin{equation}
a((\mathbf{u}, p), (\mathbf{v}, q)) = \int_{\Omega} (\nabla^2 \mathbf{u} - \nabla p) \cdot {\mathbf{v}} \, dx_w + \int_{\Omega} \nabla \cdot \mathbf{u} \, {q} \, dx_w,
\end{equation}
$$
$$
\begin{equation}
L((\mathbf{v}, q)) = \int_{\Omega} \mathbf{f} \cdot {\mathbf{v}}\, dx_w + \int_{\Omega} h {q} \, dx_w
\end{equation}
$$
Using integration by parts for Legendre (not really necessary, but looks nicer and more familiar:-)
$$
\begin{equation}
a((\mathbf{u}, p), (\mathbf{v}, q)) = -\int_{\Omega} \nabla \mathbf{u} \cdot \nabla{\mathbf{v}} \, dx_w + \int_{\Omega} \nabla \cdot \mathbf{v} \, {p} \, dx_w + \int_{\Omega} \nabla \cdot \mathbf{u} \, {q} \, dx_w,
\end{equation}
$$
# Implementation of spaces, basis functions
End of explanation
# Assemble matrices
A = inner(grad(v), -grad(u))
G = inner(div(v), p)
D = inner(q, div(u))
# Create Block matrix solver
sol = la.BlockMatrixSolver(A+G+D)
# Add Functions to hold solution and rhs
up_hat = Function(VQ).set_boundary_dofs()
fh_hat = Function(VQ)
# Solve Stokes problem. Note constraint for pressure
up_hat = sol(fh_hat, u=up_hat, constraints=((2, 0, 0),))
# Move solution to Array in real space
up = up_hat.backward()
u_, p_ = up
X = Q.local_mesh(True)
plt.quiver(X[0], X[1], u_[0], u_[1])
Explanation: Implementation Stokes - matrices and solve
End of explanation
%matplotlib notebook
plt.figure(figsize=(6,4))
plt.spy(sol.mat.diags(), markersize=0.5)
Explanation: Block matrix M
$$
M =
\begin{bmatrix}
A[0]+A[1] & 0 & G[0] \\
0 & A[2]+A[3] & G[1] \\
D[0] & D[1] & 0
\end{bmatrix}
$$
where $D = G^T$ for the Legendre basis, making $M$ symmetric. For Chebyshev $M$ will not be symmetric.
Solver through scipy.sparse.linalg
For Navier-Stokes of the lid-driven cavity, see https://github.com/spectralDNS/shenfun/blob/master/demo/NavierStokesDrivenCavity.py
Sparsity pattern
$$
M =
\begin{bmatrix}
A[0]+A[1] & 0 & G[0] \\
0 & A[2]+A[3] & G[1] \\
D[0] & D[1] & 0
\end{bmatrix}
$$
End of explanation |
573 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data generation
@cesans
Step1: dc.data.get_trajectory can be used to get an optimal trajectory for some initial conditions
Step2: The trajectory can be visualized (xy) with dc.vis.vis_trajectory
Step3: Or all the variables and control with dc.vis.vis_control
Step4: Several random trajectories can be generated (in parallell) using a direct method with dc.data.generate_data
Step5: All trajectories can then be loaded with dc.data.load_trajectories | Python Code:
import matplotlib as plt
%matplotlib inline
import sys
sys.path.append('..')
import numpy as np
import deep_control as dc
Explanation: Data generation
@cesans
End of explanation
conditions = {'x0': 200, 'z0': 1000, 'vx0':-30, 'vz0': 0, 'theta0': 0, 'm0': 10000}
col_names = ['t', 'm', 'x', 'vx', 'z' , 'vz',' theta', 'u1', 'u2']
traj = dc.data.get_trajectory('../SpaceAMPL/lander/hs/main_rw_mass.mod', conditions, col_names=col_names)
Explanation: dc.data.get_trajectory can be used to get an optimal trajectory for some initial conditions
End of explanation
dc.vis.vis_trajectory(traj)
Explanation: The trajectory can be visualized (xy) with dc.vis.vis_trajectory
End of explanation
dc.vis.vis_control(traj,2)
Explanation: Or all the variables and control with dc.vis.vis_control
End of explanation
params = {'x0': (-1000,1000), 'z0': (500,2000), 'vx0': (-100,100), 'vz0': (-30,10), 'theta0': (-np.pi/20,np.pi/20), 'm0': (8000,12000)}
dc.data.generate_data('../SpaceAMPL/lander/hs/main_thrusters.mod', params, 100,10)
Explanation: Several random trajectories can be generated (in parallel) using a direct method with dc.data.generate_data
End of explanation
col_names = ['t', 'm', 'x', 'vx', 'z', 'vz', 'theta', 'vtheta', 'u1', 'uR', 'uL']
trajs = dc.data.load_trajectories('data/main_thrusters/', col_names = col_names)
trajs[0].head(5)
Explanation: All trajectories can then be loaded with dc.data.load_trajectories
End of explanation |
574 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Insert / read whole numpy arrays
http
Step1: Create a DB/table for storing results
http
Step2: Insert a single row into the results table
Each insert is synchronous
This is safest, but is about 20 times (or more) slower than syncing once after all the inserts are performed (see below).
Step3: Read the row back to ensure the data is correct | Python Code:
def adapt_array(arr):
out = io.BytesIO()
np.save(out, arr)
out.seek(0)
return sqlite3.Binary(out.read())
def convert_array(text):
out = io.BytesIO(text)
out.seek(0)
return np.load(out)
# Converts np.array to TEXT when inserting
sqlite3.register_adapter(np.ndarray, adapt_array)
# Converts TEXT to np.array when selecting
sqlite3.register_converter("array", convert_array)
x = np.arange(12).reshape(2,6)
print x
con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute("create table test (arr array)")
cur.execute("insert into test (arr) values (?)", (x, ))
cur.execute("select arr from test")
data = cur.fetchone()[0]
print(data)
print type(data)
Explanation: Insert / read whole numpy arrays
http://stackoverflow.com/questions/18621513/python-insert-numpy-array-into-sqlite3-database
End of explanation
def create_or_open_db(db_file):
db_is_new = not os.path.exists(db_file)
con = sqlite3.connect(db_file, detect_types=sqlite3.PARSE_DECLTYPES)
if db_is_new:
print 'Creating results schema'
sql = '''CREATE TABLE IF NOT EXISTS results(
run_id TEXT,
run_step_num INTEGER,
theta23 REAL,
deltam31 REAL,
metric REAL,
minimizer_steps array,
PRIMARY KEY (run_id, run_step_num)
);'''
with con:
con.execute(sql)
print 'Creating config schema'
sql = '''CREATE TABLE IF NOT EXISTS config(
run_id TEXT PRIMARY KEY,
template_settings TEXT,
minimizer_settings TEXT,
grid_settings TEXT
);'''
with con:
con.execute(sql)
else:
print 'Schema exists\n'
return con
Explanation: Create a DB/table for storing results
http://www.numericalexpert.com/blog/sqlite_blob_time/sqlite_blob.html
End of explanation
rm ./test.db
np.random.seed(0)
con = create_or_open_db('./test.db')
sql_insert_data = '''INSERT INTO results VALUES (?,?,?,?,?,?);'''
n_inserts = 100
n_mod = 10
t0 = time.time()
for n in xrange(n_inserts):
if n % n_mod == 0:
GUTIL.wstdout('.')
input_data = (
'msu_0',
n,
1139.389,
0.723,
2e-3,
np.random.rand(100,6)
)
try:
with con:
con.execute(sql_insert_data, input_data)
except sqlite3.IntegrityError as e:
if not 'UNIQUE constraint failed' in e.args[0]:
raise
elif n % n_mod == 0:
GUTIL.wstdout('x')
dt = time.time()-t0
con.close()
GUTIL.wstdout(
'\n%s total (%s/insert)' %
(GUTIL.timediffstamp(dt), GUTIL.timediffstamp(dt/float(n_inserts)))
)
e.message
ls -hl ./test.db
rm ./test2.db
np.random.seed(0)
con = create_or_open_db('./test2.db')
sql_insert = '''INSERT INTO results VALUES (?,?,?,?,?,?);'''
t0=time.time()
with con:
for n in xrange(n_inserts):
if n % n_mod == 0:
GUTIL.wstdout('.')
input_data = (
'msu_0',
n,
1139.389,
0.723,
2e-3,
np.random.rand(100,6)
)
try:
con.execute(sql_insert, input_data)
except sqlite3.IntegrityError as e:
if not 'UNIQUE constraint failed' in e.args[0]:
raise
elif n % n_mod == 0:
GUTIL.wstdout('o')
dt = time.time()-t0
con.close()
GUTIL.wstdout(
'\n%s total (%s/insert)' %
(GUTIL.timediffstamp(dt), GUTIL.timediffstamp(dt/float(n_inserts)))
)
dt/n_inserts
ls -hl ./test2.db
Explanation: Insert a single row into the results table
Each insert is synchronous
This is safest, but is about 20 times (or more) slower than syncing once after all the inserts are performed (see below).
End of explanation
con = create_or_open_db('./test2.db')
con.row_factory = sqlite3.Row
sql = '''SELECT
metric, theta23, deltam31, run_id, run_step_num, minimizer_steps
FROM results'''
cursor = con.execute(sql)
for row in cursor:
print row.keys()[:-1]
print [x for x in row][:-1]
print 'shape of', row.keys()[-1], row['minimizer_steps'].shape
break
ls -hl ./test.db
a = row[-1]
Explanation: Read the row back to ensure the data is correct
End of explanation |
575 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Eccentricity (Volume Conservation)
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Relevant Parameters
Step3: Relevant Constraints
Step4: Influence on Meshes (volume conservation)
Step5: Influence on Radial Velocities
Step6: Influence on Light Curves (fluxes) | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Eccentricity (Volume Conservation)
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
print b.get(qualifier='ecc')
print b.get(qualifier='ecosw', context='component')
print b.get(qualifier='esinw', context='component')
Explanation: Relevant Parameters
End of explanation
print b.get(qualifier='ecosw', context='constraint')
print b.get(qualifier='esinw', context='constraint')
Explanation: Relevant Constraints
End of explanation
b.add_dataset('mesh', times=np.linspace(0,1,11), columns=['volume'])
b.set_value('ecc', 0.2)
b.run_compute()
print b['volume@primary@model']
afig, mplfig = b['mesh01'].plot(x='times', y='volume', show=True)
b.remove_dataset('mesh01')
Explanation: Influence on Meshes (volume conservation)
End of explanation
b.add_dataset('rv', times=np.linspace(0,1,51))
b.run_compute()
afig, mplfig = b['rv@model'].plot(show=True)
b.remove_dataset('rv01')
Explanation: Influence on Radial Velocities
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,51))
b.run_compute()
afig, mplfig = b['lc@model'].plot(show=True)
Explanation: Influence on Light Curves (fluxes)
End of explanation |
576 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<br>
Performing gauss, aperture and modelimg extractions with TDOSE<br>
Step1: Performing default aperture extraction using corresponding setup file.<br>
Hence, tdose will simply drop down apertures of the size specified in the<br>
setup and extract the spectrum within it.
Step2: Performing default gauss extraction using corresponding setup file.<br>
Hence, tdose will model the reference image using a single multi-variate<br>
Guassian for each object in the source catalog provided in the setup.
Step3: Performing default modelimg extraction using corresponding setup file.<br>
Hence, TDOSE will load reference image model (cube) directly from the<br>
specified loaction in the setup, and base the extraction and contamination<br>
handling on this model. In this case the model cubes were generated from<br>
galfit multi-sersic models. | Python Code:
print(' - Importing functions')
import glob
import tdose
import tdose_utilities as tu
workingdirectory = '../examples_workingdir'
setupname = 'Rafelski-MXDF_ZAP_COR_V2'
setupdir = workingdirectory+'tdose_setupfiles/'
Explanation: <br>
Performing gauss, aperture and modelimg extractions with TDOSE<br>
End of explanation
setups_aperture = glob.glob(setupdir+'tdose_setupfile_'+setupname+'*_aperture.txt')
tdose.perform_extraction(setupfile=setups_aperture[0],performcutout=False,generatesourcecat=True,
verbose=True,verbosefull=True,clobber=True,store1Dspectra=True,plot1Dspectra=True,
skipextractedobjects=False,logterminaloutput=False)
Explanation: Performing default aperture extraction using corresponding setup file.<br>
Hence, tdose will simply drop down apertures of the size specified in the<br>
setup and extract the spectrum within it.
End of explanation
setups_gauss = glob.glob(setupdir+'/tdose_setupfile_'+setupname+'*_gauss.txt')
tdose.perform_extraction(setupfile=setups_gauss[0],performcutout=False,generatesourcecat=True,
verbose=True,verbosefull=True,clobber=True,store1Dspectra=True,plot1Dspectra=True,
skipextractedobjects=False,logterminaloutput=False)
Explanation: Performing default gauss extraction using corresponding setup file.<br>
Hence, tdose will model the reference image using a single multi-variate<br>
Gaussian for each object in the source catalog provided in the setup.
End of explanation
setups_modelimg = glob.glob(setupdir+'tdose_setupfile_'+setupname+'*_modelimg.txt')
tdose.perform_extraction(setupfile=setups_modelimg[0],performcutout=False,generatesourcecat=True,
verbose=True,verbosefull=True,clobber=True,store1Dspectra=True,plot1Dspectra=True,
skipextractedobjects=False,logterminaloutput=False)
Explanation: Performing default modelimg extraction using corresponding setup file.<br>
Hence, TDOSE will load reference image model (cube) directly from the<br>
specified location in the setup, and base the extraction and contamination<br>
handling on this model. In this case the model cubes were generated from<br>
galfit multi-sersic models.
End of explanation |
577 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 08
Step1: 2. Example Network
In this tutorial, we use the Luxembourg SUMO Traffic (LuST) Scenario as an example use case. This example consists of a well-calibrated model of vehicles in Luxembourg. A representation of the simulation can be seen in the figure below.
<img src="img/LuST_network.png" width="500">
<center><b>Figure 2</b>
Step2: 3. Sumo Network Files
Sumo generates several network and simulation-specifc template files prior to starting a simulation. This procedure when creating custom scenarios and scenarios from OpenStreetMap is covered by the scenario class. Three of these files (*.net.xml, *.rou.xml, and vtype.add.xml) can be imported once again via the scenario class to recreate a previously decided scenario.
We start by creating the simulation parameters
Step3: 3.1 Importing Network (*.net.xml) Files
The *.net.xml file covers the network geometry within a simulation, and can be imported independently of the SUMO route file (see section 1.2). This can be done through the template parameter within NetParams as follows
Step4: This network alone, similar to the OpenStreetMap file, does not cover the placement of vehicles or the routes vehicles can traverse. These, however, can be defined a they were in the previous tutorial for importing networks from OpenStreetMap. For the LuST network, this looks something similar to the following code snippet (note that the specific edges were not spoken for any specific reason).
Step5: The simulation can then be executed as follows
Step6: 3.2 Importing Additional Files
Sumo templates will at times contain files other than the network templates that can be used to specify the positions, speeds, and properties of vehicles at the start of a simulation, as well as the departure times of vehicles while the scenario is running and the routes that all these vehicles are meant to traverse. All these files can also be imported under the template attribute in order to recreate the simulation in it's entirety.
When incorporating files other that the net.xml file to the simulation, the template attribute is treated as a dictionary instead, with a different element for each of the additional files that are meant to be imported. Starting with the net.xml file, it is added to the template attribute as follows
Step7: 3.2.1 Vehicle Type (vtype.add.xml)
The vehicle types file describing the properties of different vehicle types in the network. These include parameters such as the max acceleration and comfortable deceleration of drivers. This file can be imported via the "vtype" attribute in template.
Note that, when vehicle information is being imported from a template file, the VehicleParams object does not need be modified, unless you would like additionally vehicles to enter the network as well.
Step8: 3.2.2 Route (*.rou.xml)
Next, the routes can be imported from the *.rou.xml files that are generated by SUMO. These files help define which cars enter the network at which point in time, whether it be at the beginning of a simulation or some time during it run. The route files are passed to the "rou" key in the templates attribute. Moreover, since the vehicle routes can be spread over multiple files, the "rou" key that a list of string filenames.
Step9: 3.2.3 Running the Modified Simulation
Finally, the fully imported simulation can be run as follows.
Warning
Step10: 4. Aimsun Network Files
Flow can run templates that have been created in Aimsun and saved into an *.ang file. Although it is possible to have control over the network, for instance add vehicles and monitor them directly from Flow, this tutorial only covers how to run the network.
We will use the template located at tutorials/networks/test_template.ang, which looks like this
Step11: As you can see, we need to specify the name of the replication we want to run as well as the centroid configuration that is to be used. There is an other optional parameter, subnetwork_name, that can be specified if only part of the network should be simulated. Please refer to the documentation for more information.
The template can then be imported as follows
Step12: Finally, we can run the simulation by specifying 'aimsun' as the simulator to be used | Python Code:
# the TestEnv environment is used to simply simulate the network
from flow.envs import TestEnv
# the Experiment class is used for running simulations
from flow.core.experiment import Experiment
# the base scenario class
from flow.scenarios import Scenario
# all other imports are standard
from flow.core.params import VehicleParams
from flow.core.params import NetParams
from flow.core.params import InitialConfig
from flow.core.params import EnvParams
# create some default parameters parameters
env_params = EnvParams()
initial_config = InitialConfig()
vehicles = VehicleParams()
vehicles.add('human', num_vehicles=1)
Explanation: Tutorial 08: Networks from Custom Templates
In the previous tutorial, we discussed how OpenStreetMap files can be simulated in Flow. These networks, however, may at times be imperfect, as we can see in the toll section of the Bay Bridge (see the figure below). The simulators SUMO and Aimsun both possess methods for augmenting the network after they have been imported, and store the changes in their own versions of the initial template (whether it was generated via a custom scenario class or a network imported from OpenStreetMap). In order to utilize these newly generated networks, we demonstrate in this tutorial how simulator-generated template files can be imported when running a simulation in Flow.
<img src="img/osm_to_template.png">
<center> Figure 1: Example benefit of converting OpenStreetMap to a custom template </center>
The remainder of the tutorial is organized as follows. In section 1, we begin by importing the classic set of parameters. In section 2, we introduce the template files that are used as examples in this tutorial. In section 3, we present how custom SUMO network templates, i.e. the generated .net.xml files, can be modified and simulated in Flow for the purpose of improving network features. Finally, in section 4, we demonstrate how custom Aimsun network files can be simulated in Flow.
1. Importing Modules
Before we begin, let us import all relevant Flow parameters as we have done for previous tutorials. If you are unfamiliar with these parameters, you are encouraged to review tutorial 1.
End of explanation
LuST_dir = "/home/aboudy/LuSTScenario"
Explanation: 2. Example Network
In this tutorial, we use the Luxembourg SUMO Traffic (LuST) Scenario as an example use case. This example consists of a well-calibrated model of vehicles in Luxembourg. A representation of the simulation can be seen in the figure below.
<img src="img/LuST_network.png" width="500">
<center><b>Figure 2</b>: Simulation of the LuST network </center>
Before, continuing with this tutorial, please begin by cloning the LuST scenario repository by running the following command.
git clone https://github.com/lcodeca/LuSTScenario.git
Once you have cloned the repository, please modify the code snippet below to match correct location of the repository's main directory.
End of explanation
from flow.core.params import SumoParams
sim_params = SumoParams(render=True, sim_step=1)
Explanation: 3. Sumo Network Files
Sumo generates several network and simulation-specific template files prior to starting a simulation. This procedure, when creating custom scenarios and scenarios from OpenStreetMap, is covered by the scenario class. Three of these files (*.net.xml, *.rou.xml, and vtype.add.xml) can be imported once again via the scenario class to recreate a previously designed scenario.
We start by creating the simulation parameters:
End of explanation
import os
net_params = NetParams(
template=os.path.join(LuST_dir, "scenario/lust.net.xml"),
)
Explanation: 3.1 Importing Network (*.net.xml) Files
The *.net.xml file covers the network geometry within a simulation, and can be imported independently of the SUMO route file (see section 1.2). This can be done through the template parameter within NetParams as follows:
End of explanation
# specify the edges vehicles can originate on
initial_config = InitialConfig(
edges_distribution=["-32410#3"]
)
# specify the routes for vehicles in the network
class TemplateScenario(Scenario):
def specify_routes(self, net_params):
return {"-32410#3": ["-32410#3"]}
Explanation: This network alone, similar to the OpenStreetMap file, does not cover the placement of vehicles or the routes vehicles can traverse. These, however, can be defined as they were in the previous tutorial for importing networks from OpenStreetMap. For the LuST network, this looks something like the following code snippet (note that the specific edges were not chosen for any specific reason).
End of explanation
# create the scenario
scenario = TemplateScenario(
name="template",
net_params=net_params,
initial_config=initial_config,
vehicles=vehicles
)
# create the environment
env = TestEnv(
env_params=env_params,
sim_params=sim_params,
scenario=scenario
)
# run the simulation for 1000 steps
exp = Experiment(env=env)
_ = exp.run(1, 1000)
Explanation: The simulation can then be executed as follows:
End of explanation
new_net_params = NetParams(
template={
# network geometry features
"net": os.path.join(LuST_dir, "scenario/lust.net.xml")
}
)
Explanation: 3.2 Importing Additional Files
Sumo templates will at times contain files other than the network templates that can be used to specify the positions, speeds, and properties of vehicles at the start of a simulation, as well as the departure times of vehicles while the scenario is running and the routes that all these vehicles are meant to traverse. All these files can also be imported under the template attribute in order to recreate the simulation in its entirety.
When incorporating files other than the net.xml file into the simulation, the template attribute is treated as a dictionary instead, with a different element for each of the additional files that are meant to be imported. Starting with the net.xml file, it is added to the template attribute as follows:
End of explanation
new_net_params = NetParams(
template={
# network geometry features
"net": os.path.join(LuST_dir, "scenario/lust.net.xml"),
# features associated with the properties of drivers
"vtype": os.path.join(LuST_dir, "scenario/vtype.add.xml")
}
)
# we no longer need to specify anything in VehicleParams
new_vehicles = VehicleParams()
Explanation: 3.2.1 Vehicle Type (vtype.add.xml)
The vehicle types file describes the properties of different vehicle types in the network. These include parameters such as the max acceleration and comfortable deceleration of drivers. This file can be imported via the "vtype" attribute in template.
Note that, when vehicle information is being imported from a template file, the VehicleParams object does not need to be modified, unless you would like additional vehicles to enter the network as well.
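If you do want extra vehicles on top of the template, a sketch following the usual Flow tutorial pattern might look like the snippet below (the controller choices and vehicle id are assumptions, not part of the template).
# Sketch (assumed Flow API usage): add template-independent vehicles alongside the imported vtype file.
from flow.controllers import IDMController, ContinuousRouter

extra_vehicles = VehicleParams()
extra_vehicles.add(
    veh_id="extra_human",                          # hypothetical id
    acceleration_controller=(IDMController, {}),   # car-following behavior
    routing_controller=(ContinuousRouter, {}),     # keep vehicles on their routes
    num_vehicles=5,
)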
End of explanation
new_net_params = NetParams(
template={
# network geometry features
"net": os.path.join(LuST_dir, "scenario/lust.net.xml"),
# features associated with the properties of drivers
"vtype": os.path.join(LuST_dir, "scenario/vtypes.add.xml"),
# features associated with the routes vehicles take
"rou": [os.path.join(LuST_dir, "scenario/DUARoutes/local.0.rou.xml"),
os.path.join(LuST_dir, "scenario/DUARoutes/local.1.rou.xml"),
os.path.join(LuST_dir, "scenario/DUARoutes/local.2.rou.xml")]
}
)
# we no longer need to specify anything in VehicleParams
new_vehicles = VehicleParams()
Explanation: 3.2.2 Route (*.rou.xml)
Next, the routes can be imported from the *.rou.xml files that are generated by SUMO. These files help define which cars enter the network at which point in time, whether it be at the beginning of a simulation or some time during its run. The route files are passed to the "rou" key in the template attribute. Moreover, since the vehicle routes can be spread over multiple files, the "rou" key accepts a list of string filenames.
End of explanation
# create the scenario
scenario = Scenario(
name="template",
net_params=new_net_params,
vehicles=new_vehicles
)
# create the environment
env = TestEnv(
env_params=env_params,
sim_params=sim_params,
scenario=scenario
)
# run the simulation for 100000 steps
exp = Experiment(env=env)
_ = exp.run(1, 100000)
Explanation: 3.2.3 Running the Modified Simulation
Finally, the fully imported simulation can be run as follows.
Warning: the network takes time to initialize while the departure positions and times and vehicles are specified.
End of explanation
from flow.core.params import AimsunParams
sim_params = AimsunParams(
sim_step=0.1,
render=True,
emission_path='data',
replication_name="Replication 930",
centroid_config_name="Centroid Configuration 910"
)
Explanation: 4. Aimsun Network Files
Flow can run templates that have been created in Aimsun and saved into an *.ang file. Although it is possible to have control over the network, for instance add vehicles and monitor them directly from Flow, this tutorial only covers how to run the network.
We will use the template located at tutorials/networks/test_template.ang, which looks like this:
<img src="img/test_template.png">
<center><b>Figure 2</b>: Simulation of <code>test_template.ang</code> in Aimsun</center>
It contains two input and three output centroids that define the centroid configuration Centroid Configuration 910. The inflows are defined by two OD matrices, one for the type Car (in blue), the other for the type rl (in red). Note that there is no learning in this tutorial so the two types both act as regular cars. The two OD matrices form the traffic demand Traffic Demand 925 that is used by the scenario Dynamic Scenario 927. Finally, the experiment Micro SRC Experiment 928 and the replication Replication 930 are created, and we will run this replication in the following.
First, we create the Aimsun-specific simulation parameters:
End of explanation
import os
import flow.config as config
net_params = NetParams(
template=os.path.join(config.PROJECT_PATH,
"tutorials/networks/test_template.ang")
)
Explanation: As you can see, we need to specify the name of the replication we want to run as well as the centroid configuration that is to be used. There is another optional parameter, subnetwork_name, that can be specified if only part of the network should be simulated. Please refer to the documentation for more information.
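For reference, restricting the run to part of the network would only add that optional argument (a sketch; the subnetwork name below is a made-up placeholder, and passing it as a keyword of AimsunParams is an assumption based on the description above).
# Sketch: same parameters as above, plus the optional subnetwork restriction.
sub_sim_params = AimsunParams(
    sim_step=0.1,
    render=True,
    emission_path='data',
    replication_name="Replication 930",
    centroid_config_name="Centroid Configuration 910",
    subnetwork_name="Subnetwork 1023"  # hypothetical name, not part of the template
)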
The template can then be imported as follows:
End of explanation
scenario = Scenario(
name="template",
net_params=net_params,
initial_config=initial_config,
vehicles=vehicles
)
env = TestEnv(
env_params,
sim_params,
scenario,
simulator='aimsun'
)
exp = Experiment(env)
exp.run(1, 1000)
Explanation: Finally, we can run the simulation by specifying 'aimsun' as the simulator to be used:
End of explanation |
578 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modified introduction using forex data
This is the trading rule example shown in the introduction, but modified to use Interactive Brokers instead of CSV files as the data source.
IB requires a minimum equity and a monthly subscription to provide historical data on futures contracts. This example was modified to use FX prices instead of futures to make it runnable with free, unfunded paper trading accounts. Note that Rob does not recommend trading FX spot data with IB due to their high fees.
First, import the required packages and initialize ib_insync.
Step1: Connecting to Interactive Brokers gateway...
Step2: See what fx instruments we have configured. These are configured in sysbrokers/IB/ib_config_spot_FX.csv
Step3: Now we select one instrument (EURUSD) and try to fetch historical data for it.
Step4: Data can also be indexed as a python dict
Step6: Create the trading rule
Step7: Run a forecast with the previous rule
Step8: The original introduction jumps directly to "Did we make any money?".
I would like to see here the orders that were triggered by this forecast, but instead we jump directly into P&L. Still, these are the P&L numbers for this forecast and data | Python Code:
from sysbrokers.IB.ib_connection import connectionIB
from sysbrokers.IB.ib_Fx_prices_data import ibFxPricesData
from ib_insync import util
util.startLoop() #only required when running inside a notebook
Explanation: Modified introduction using forex data
This is the trading rule example shown in the introduction, but modified to use Interactive Brokers instead of CSV files as the data source.
IB requires a minimum equity and a monthly subscription to provide historical data on futures contracts. This example was modified to use FX prices instead of futures to make it runnable with free, unfunded paper trading accounts. Note that Rob does not recommend trading FX spot data with IB due to their high fees.
First, import the required packages and initialize ib_insync.
End of explanation
conn = connectionIB(111)
conn
Explanation: Connecting to Interactive Brokers gateway...
End of explanation
ibfxpricedata = ibFxPricesData(conn)
ibfxpricedata.get_list_of_fxcodes()
Explanation: See what fx instruments we have configured. These are configured in sysbrokers/IB/ib_config_spot_FX.csv
End of explanation
ibfxpricedata.get_fx_prices('EURUSD')
Explanation: Now we select one instrument (EURUSD) and try to fetch historical data for it.
End of explanation
ibfxpricedata['JPYUSD']
Explanation: Data can also be indexed as a python dict:
End of explanation
import pandas as pd
from sysquant.estimators.vol import robust_vol_calc
def calc_ewmac_forecast(price, Lfast, Lslow=None):
    """Calculate the ewmac trading rule forecast, given a price series and EWMA speeds Lfast and Lslow."""
if Lslow is None:
Lslow = 4 * Lfast
## We don't need to calculate the decay parameter, just use the span directly
fast_ewma = price.ewm(span=Lfast).mean()
slow_ewma = price.ewm(span=Lslow).mean()
raw_ewmac = fast_ewma - slow_ewma
vol = robust_vol_calc(price.diff())
return raw_ewmac / vol
Explanation: Create the trading rule
End of explanation
price=ibfxpricedata['EURUSD']
ewmac=calc_ewmac_forecast(price, 32, 128)
ewmac.tail(5)
import matplotlib.pyplot as plt
plt.figure(figsize=(12,5))
ax1 = price.plot(color='blue', grid=True, label='Price')
ax2 = ewmac.plot(color='red', grid=True, secondary_y=True, label='Forecast')
h1, l1 = ax1.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
plt.legend(h1+h2, l1+l2, loc=2)
plt.show()
Explanation: Run a forecast with the previous rule
End of explanation
from systems.accounts.account_forecast import pandl_for_instrument_forecast
account = pandl_for_instrument_forecast(forecast = ewmac, price = price)
account.percent.stats()
account.curve().plot()
plt.show()
conn.close_connection()
Explanation: The original introduction jumps directly to "Did we make any money?".
I would like to see here the orders that were triggered by this forecast, but instead we jump directly into P&L. Still, these are the P&L numbers for this forecast and data:
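For a little more detail than the summary statistics, a drawdown series can be derived from the cumulative curve with plain pandas (a sketch; it assumes account.curve() returns the cumulative P&L series plotted above).
# Sketch: drawdown of the cumulative P&L curve using plain pandas.
curve = account.curve()
drawdown = curve - curve.cummax()   # distance below the running high-water mark
drawdown.plot(title="Drawdown")
plt.show()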
End of explanation |
579 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
${t\bar{t}H\left(b\bar{b}\right)}$ scikit-learn BDT for classification of ${t\bar{t}H}$ and ${t\bar{t}b\bar{b}}$ events
For each signal region, information from the output of the reconstruction BDT is combined with kinematic variables for input to classification BDTs, with ${t\bar{t}H \left(H\to b\bar{b}\right)}$ as signal and ${t\bar{t}}$ as background. There is one BDT trained for events with exactly 5 jets or at least 6 jets.
Step1: read
Step2: features and targets
Step3: accuracy | Python Code:
import datetime
import graphviz
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
plt.rcParams["figure.figsize"] = (17, 10)
import pandas as pd
import seaborn as sns
sns.set(context = "paper", font = "monospace")
import sklearn.datasets
from sklearn.preprocessing import MinMaxScaler
import sklearn.tree
import sqlite3
import warnings
warnings.filterwarnings("ignore")
pd.set_option("display.max_rows", 500)
pd.set_option("display.max_columns", 500)
Explanation: ${t\bar{t}H\left(b\bar{b}\right)}$ scikit-learn BDT for classification of ${t\bar{t}H}$ and ${t\bar{t}b\bar{b}}$ events
For each signal region, information from the output of the reconstruction BDT is combined with kinematic variables for input to classification BDTs, with ${t\bar{t}H \left(H\to b\bar{b}\right)}$ as signal and ${t\bar{t}}$ as background. There is one BDT trained for events with exactly 5 jets or at least 6 jets.
End of explanation
df = pd.read_csv("ttHbb_data.csv")
df.head()
Explanation: read
End of explanation
features = list(df.columns[:-1])
X = df[features]
y = df["target"]
classifier = sklearn.tree.DecisionTreeClassifier(min_samples_split = 20, random_state = 99, max_depth = 5)
classifier.fit(X, y)
graph = graphviz.Source(
sklearn.tree.export_graphviz(
classifier,
out_file = None,
feature_names = list(df[features].columns.values),
filled = True,
rounded = True,
special_characters = True
)
)
graph
Explanation: features and targets
End of explanation
y_predictions = classifier.predict(X)
y_predictions
sklearn.metrics.accuracy_score(y, y_predictions)
_df = pd.DataFrame()
_df["variable"] = X.columns.values
_df["importance"] = classifier.feature_importances_
_df.index = _df["variable"].values
del _df["variable"]
_df = _df.sort_values(by = "importance", ascending = False)
_df
plt.rcParams["figure.figsize"] = (17, 10)
_df.sort_values(by = "importance", ascending = True).plot(kind = "barh", legend = False);
Explanation: accuracy
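The score above is computed on the same events used to fit the tree, so it is an optimistic estimate. A quick, hedged alternative is a cross-validated score (a sketch using scikit-learn; it reuses the X and y defined above).
from sklearn.model_selection import cross_val_score

# Sketch: 5-fold cross-validated accuracy gives a less optimistic estimate
# than scoring on the training events themselves.
cv_scores = cross_val_score(classifier, X, y, cv=5, scoring="accuracy")
print(cv_scores.mean(), cv_scores.std())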
End of explanation |
580 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Secant method
The secant method is an extension of the Newton-Raphson method; the derivative of the function is approximated using a backward finite difference
\begin{equation}
f'(x_{i}) = \frac{f(x_{i-1}) - f(x_{i})}{x_{i-1} - x_{i}}
\end{equation}
which is then substituted into the Newton-Raphson formula
\begin{equation}
x_{i+1} = x_{i} - \frac{1}{f'(x_{i})} f(x_{i}) = x_{i} - \frac{x_{i-1} - x_{i}}{f(x_{i-1}) - f(x_{i})} f(x_{i})
\end{equation}
Algorithm
x_-1 is the previous approximate root
x_0 is the current approximate root
x_1 = x_0 - f(x_0)*(x_-1 - x_0)/(f(x_-1) - f(x_0))
x_2 = x_1 - f(x_1)*(x_0 - x_1)/(f(x_0) - f(x_1))
x_3 = x_2 - f(x_2)*(x_1 - x_2)/(f(x_1) - f(x_2))
...
Example 1
Find the root of
\begin{equation}
y = x^{5} + x^{3} + 3
\end{equation}
using $x = 0$ and $x = -1$ as initial values
Iteration 0
Previous approximate root
\begin{equation}
x_{-1} = 0
\end{equation}
Current approximate root
\begin{equation}
x_{0} = -1
\end{equation}
Relative error
\begin{equation}
e_{r} = ?
\end{equation}
Iteration 1
Computing the ordinates at the previous points
\begin{align}
f(x_{-1}) &= f(0) = 3 \
f(x_{0}) &= f(-1) = 1
\end{align}
Previous approximate root
\begin{equation}
x_{0} = -1
\end{equation}
Current approximate root
\begin{equation}
x_{1} = x_{0} - \frac{x_{-1} - x_{0}}{f(x_{-1}) - f(x_{0})} f(x_{0}) = -1 - \frac{0 - (-1)}{3 - 1} 1 = -1.5
\end{equation}
Relative error
\begin{equation}
e_{r} = \bigg|\frac{x_{1} - x_{0}}{x_{1}}\bigg| \times 100\% = \bigg|\frac{-1.5 - (-1)}{-1.5}\bigg| \times 100\% = 33.33\%
\end{equation}
Iteration 2
Computing the ordinates at the previous points
\begin{align}
f(x_{0}) &= f(-1) = 1 \
f(x_{1}) &= f(-1.5) = -7.96875
\end{align}
Previous approximate root
\begin{equation}
x_{1} = -1.5
\end{equation}
Current approximate root
\begin{equation}
x_{2} = x_{1} - \frac{x_{0} - x_{1}}{f(x_{0}) - f(x_{1})} f(x_{1}) = -1.5 - \frac{-1 - (-1.5)}{1 - (-7.96875)} (-7.96875) = -1.055749
\end{equation}
Relative error
\begin{equation}
e_{r} = \bigg|\frac{x_{2} - x_{1}}{x_{2}}\bigg| \times 100\% = \bigg|\frac{-1.055749 - (-1.5)}{-1.055749}\bigg| \times 100\% = 42.08\%
\end{equation}
Iteration 3
Computing the ordinates at the previous points
\begin{align}
f(x_{1}) &= f(-1.5) = -7.96875 \
f(x_{2}) &= f(-1.055749) = 0.511650
\end{align}
Previous approximate root
\begin{equation}
x_{2} = -1.055749
\end{equation}
Current approximate root
\begin{equation}
x_{3} = x_{2} - \frac{x_{1} - x_{2}}{f(x_{1}) - f(x_{2})} f(x_{2}) = -1.055749 - \frac{-1.5 - (-1.055749)}{-7.96875 - 0.511650} 0.511650 = -1.082552
\end{equation}
Relative error
\begin{equation}
e_{r} = \bigg|\frac{x_{3} - x_{2}}{x_{3}}\bigg| \times 100\% = \bigg|\frac{-1.082552 - (-1.055749)}{-1.082552}\bigg| \times 100\% = 2.48\%
\end{equation}
Implementation of the helper functions
Pseudocode for the derivative
pascal
function diferencia_atras(f(x), x_0, x_1)
    f'(x) = (f(x_0) - f(x_1))/(x_0 - x_1)
    return f'(x)
end function
Pseudocode for obtaining the last two roots
pascal
function raiz(f(x), a, b)
Step1: Non-vectorized implementation
Pseudocode
pascal
function secante(f(x), x_0, x_1)
    x_anterior = x_0
    x_actual = x_1
    error_permitido = 0.000001
    while(True)
        x_anterior, x_actual = raiz(f(x), x_anterior, x_actual)
        if x_actual != 0
            error_relativo = abs((x_actual - x_anterior)/x_actual)*100
        end if
        if error_relativo < error_permitido
            exit
        end if
    end while
    print x_actual
end function
or alternatively
pascal
function secante(f(x), x_0, x_1)
    x_anterior = x_0
    x_actual = x_1
    for 1 to maxima_iteracion do
        x_anterior, x_actual = raiz(f(x), x_anterior, x_actual)
    end for
    print x_actual
end function
Step2: Example 2
Find the root of
\begin{equation}
y = x^{5} + x^{3} + 3
\end{equation}
using $x_{-1} = 0$ and $x_{0} = -1$
Step3: Example 3
Find the root of
\begin{equation}
y = x^{5} + x^{3} + 3
\end{equation}
usar $x_{-1} = 0$ y $x_{0} = -0.5$ | Python Code:
def diferencia_atras(f, x_0, x_1):
pendiente = (f(x_0) - f(x_1))/(x_0 - x_1)
return pendiente
def raiz(f, a, b):
c = b - f(b)/diferencia_atras(f, a, b)
return b, c
Explanation: Secant method
The secant method is an extension of the Newton-Raphson method; the derivative of the function is approximated using a backward finite difference
\begin{equation}
f'(x_{i}) = \frac{f(x_{i-1}) - f(x_{i})}{x_{i-1} - x_{i}}
\end{equation}
which is then substituted into the Newton-Raphson formula
\begin{equation}
x_{i+1} = x_{i} - \frac{1}{f'(x_{i})} f(x_{i}) = x_{i} - \frac{x_{i-1} - x_{i}}{f(x_{i-1}) - f(x_{i})} f(x_{i})
\end{equation}
Algorithm
x_-1 is the previous approximate root
x_0 is the current approximate root
x_1 = x_0 - f(x_0)*(x_-1 - x_0)/(f(x_-1) - f(x_0))
x_2 = x_1 - f(x_1)*(x_0 - x_1)/(f(x_0) - f(x_1))
x_3 = x_2 - f(x_2)*(x_1 - x_2)/(f(x_1) - f(x_2))
...
Example 1
Find the root of
\begin{equation}
y = x^{5} + x^{3} + 3
\end{equation}
using $x = 0$ and $x = -1$ as initial values
Iteration 0
Previous approximate root
\begin{equation}
x_{-1} = 0
\end{equation}
Current approximate root
\begin{equation}
x_{0} = -1
\end{equation}
Relative error
\begin{equation}
e_{r} = ?
\end{equation}
Iteration 1
Computing the ordinates at the previous points
\begin{align}
f(x_{-1}) &= f(0) = 3 \
f(x_{0}) &= f(-1) = 1
\end{align}
Previous approximate root
\begin{equation}
x_{0} = -1
\end{equation}
Current approximate root
\begin{equation}
x_{1} = x_{0} - \frac{x_{-1} - x_{0}}{f(x_{-1}) - f(x_{0})} f(x_{0}) = -1 - \frac{0 - (-1)}{3 - 1} 1 = -1.5
\end{equation}
Relative error
\begin{equation}
e_{r} = \bigg|\frac{x_{1} - x_{0}}{x_{1}}\bigg| \times 100\% = \bigg|\frac{-1.5 - (-1)}{-1.5}\bigg| \times 100\% = 33.33\%
\end{equation}
Iteration 2
Computing the ordinates at the previous points
\begin{align}
f(x_{0}) &= f(-1) = 1 \
f(x_{1}) &= f(-1.5) = -7.96875
\end{align}
Previous approximate root
\begin{equation}
x_{1} = -1.5
\end{equation}
Current approximate root
\begin{equation}
x_{2} = x_{1} - \frac{x_{0} - x_{1}}{f(x_{0}) - f(x_{1})} f(x_{1}) = -1.5 - \frac{-1 - (-1.5)}{1 - (-7.96875)} (-7.96875) = -1.055749
\end{equation}
Relative error
\begin{equation}
e_{r} = \bigg|\frac{x_{2} - x_{1}}{x_{2}}\bigg| \times 100\% = \bigg|\frac{-1.055749 - (-1.5)}{-1.055749}\bigg| \times 100\% = 42.08\%
\end{equation}
Iteration 3
Computing the ordinates at the previous points
\begin{align}
f(x_{1}) &= f(-1.5) = -7.96875 \
f(x_{2}) &= f(-1.055749) = 0.511650
\end{align}
Previous approximate root
\begin{equation}
x_{2} = -1.055749
\end{equation}
Current approximate root
\begin{equation}
x_{3} = x_{2} - \frac{x_{1} - x_{2}}{f(x_{1}) - f(x_{2})} f(x_{2}) = -1.055749 - \frac{-1.5 - (-1.055749)}{-7.96875 - 0.511650} 0.511650 = -1.082552
\end{equation}
Relative error
\begin{equation}
e_{r} = \bigg|\frac{x_{3} - x_{2}}{x_{3}}\bigg| \times 100\% = \bigg|\frac{-1.082552 - (-1.055749)}{-1.082552}\bigg| \times 100\% = 2.48\%
\end{equation}
Implementation of the helper functions
Pseudocode for the derivative
pascal
function diferencia_atras(f(x), x_0, x_1)
    f'(x) = (f(x_0) - f(x_1))/(x_0 - x_1)
    return f'(x)
end function
Pseudocode for obtaining the last two roots
pascal
function raiz(f(x), a, b)
    c = b - f(b)/diferencia_atras(f(x), a, b)
    return b, c
end function
End of explanation
def secante(f, x_0, x_1):
print("{0:s} \t {1:15s} \t {2:15s} \t {3:15s}".format('i', 'x anterior', 'x actual', 'error relativo %'))
x_anterior = x_0
x_actual = x_1
i = 0
print("{0:d} \t {1:.15f} \t {2:.15f} \t {3:15s}".format(i, x_anterior, x_actual, '???????????????'))
error_permitido = 0.000001
while True:
x_anterior, x_actual = raiz(f, x_anterior, x_actual)
if x_actual != 0:
error_relativo = abs((x_actual - x_anterior)/x_actual)*100
i = i + 1
print("{0:d} \t {1:.15f} \t {2:.15f} \t {3:15.11f}".format(i, x_anterior, x_actual, error_relativo))
if (error_relativo < error_permitido) or (i>=20):
break
print('\nx =', x_actual)
Explanation: Non-vectorized implementation
Pseudocode
pascal
function secante(f(x), x_0, x_1)
    x_anterior = x_0
    x_actual = x_1
    error_permitido = 0.000001
    while(True)
        x_anterior, x_actual = raiz(f(x), x_anterior, x_actual)
        if x_actual != 0
            error_relativo = abs((x_actual - x_anterior)/x_actual)*100
        end if
        if error_relativo < error_permitido
            exit
        end if
    end while
    print x_actual
end function
or alternatively
pascal
function secante(f(x), x_0, x_1)
    x_anterior = x_0
    x_actual = x_1
    for 1 to maxima_iteracion do
        x_anterior, x_actual = raiz(f(x), x_anterior, x_actual)
    end for
    print x_actual
end function
End of explanation
def f(x):
# f(x) = x^5 + x^3 + 3
y = x**5 + x**3 + 3
return y
diferencia_atras(f, 0, -1)
raiz(f, 0, -1)
secante(f, 0, -1)
Explanation: Example 2
Find the root of
\begin{equation}
y = x^{5} + x^{3} + 3
\end{equation}
using $x_{-1} = 0$ and $x_{0} = -1$
End of explanation
secante(f, 0, -0.5)
Explanation: Example 3
Find the root of
\begin{equation}
y = x^{5} + x^{3} + 3
\end{equation}
using $x_{-1} = 0$ and $x_{0} = -0.5$
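As a quick cross-check (a sketch; when no derivative is supplied, scipy.optimize.newton falls back to the secant method, so it should agree with the result above):
from scipy.optimize import newton

# Sketch: secant iteration via scipy, starting from the same two points.
print(newton(f, x0=-1, x1=0))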
End of explanation |
581 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Quantization aware training comprehensive guide
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Define quantization aware model.
Defining the model in the following ways gives you paths to deployment on the backends listed in the overview page. By default, 8-bit quantization is used.
Note
Step3: Quantize some layers
Quantizing a model can have a negative effect on accuracy. You can selectively quantize layers of a model to explore the trade-off between accuracy, speed, and model size.
Use cases
Step4: While this example used the type of the layer to decide what to quantize, the easiest way to quantize a particular layer is to set its name property, and look for that name in the clone_function.
Step5: More readable, but potentially lower model accuracy
It is not compatible with fine-tuning with quantization aware training, which is why it may be less accurate than the examples above.
Functional example
Step6: Sequential example
Step7: Checkpoint and deserialize
Use case
Step8: Create and deploy quantized model
In general, reference the documentation for the deployment backend that you will use.
This is an example for the TFLite backend.
Step9: Experiment with quantization
Use case
Step10: Quantize custom Keras layer
This example uses the DefaultDenseQuantizeConfig to quantize the CustomLayer.
Applying the configuration is the same across the "Experiment with quantization" use cases.
Apply tfmot.quantization.keras.quantize_annotate_layer to the CustomLayer and pass in the QuantizeConfig.
Use tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults.
Step11: Modify quantization parameters
Common mistake
Step12: Applying the configuration is the same across the "Experiment with quantization" use cases.
Apply tfmot.quantization.keras.quantize_annotate_layer to the Dense layer and pass in the QuantizeConfig.
Use tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults.
Step13: Modify parts of layers to quantize
This example modifies the Dense layer to skip quantizing the activation. The rest of the model continues to use API defaults.
Step14: Applying the configuration is the same across the "Experiment with quantization" use cases.
Apply tfmot.quantization.keras.quantize_annotate_layer to the Dense layer and pass in the QuantizeConfig.
Use tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults.
Step16: Use a custom quantization algorithm
The tfmot.quantization.keras.quantizers.Quantizer class is a callable that can apply any algorithm to its inputs.
In this example, the inputs are the weights, and the math in the FixedRangeQuantizer __call__ function is applied to the weights. Instead of the original weight values, the output of the FixedRangeQuantizer is now passed to whatever would have used the weights.
Step17: Applying the configuration is the same across the "Experiment with quantization" use cases.
Apply tfmot.quantization.keras.quantize_annotate_layer to the Dense layer and pass in the QuantizeConfig.
Use tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
! pip uninstall -y tensorflow
! pip install -q tf-nightly
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tensorflow_model_optimization as tfmot
import tempfile
input_shape = [20]
x_train = np.random.randn(1, 20).astype(np.float32)
y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20)
def setup_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(20, input_shape=input_shape),
tf.keras.layers.Flatten()
])
return model
def setup_pretrained_weights():
model= setup_model()
model.compile(
loss=tf.keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy']
)
model.fit(x_train, y_train)
_, pretrained_weights = tempfile.mkstemp('.tf')
model.save_weights(pretrained_weights)
return pretrained_weights
def setup_pretrained_model():
model = setup_model()
pretrained_weights = setup_pretrained_weights()
model.load_weights(pretrained_weights)
return model
setup_model()
pretrained_weights = setup_pretrained_weights()
Explanation: Quantization aware training comprehensive guide
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/model_optimization/guide/quantization/training_comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행하기</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/model_optimization/guide/quantization/training_comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/model_optimization/guide/quantization/training_comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드하기</a></td>
</table>
Welcome to the comprehensive guide for Keras quantization aware training.
This page documents various use cases and shows how to use the API for each one. Once you know which APIs you need, find the parameters and the low-level details in the API docs.
If you want to see the benefits of quantization aware training and what is supported, see the overview.
For a single end-to-end example, see the quantization aware training example.
The following use cases are covered.
Deploy a model with 8-bit quantization with these steps.
Define a quantization aware model.
Only for Keras HDF5 models, use special checkpointing and deserialization logic. Training is otherwise standard.
Create a quantized model from the quantization aware one.
Experiment with quantization.
Anything for experimentation has no supported path to deployment.
Custom Keras layers fall under experimental.
Setup
You can run this section to find the APIs you need and to understand their purpose, but reading it can be skipped.
End of explanation
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
quant_aware_model.summary()
Explanation: Define quantization aware model.
Defining the model in the following way gives you paths to deployment on the backends listed in the overview page. By default, 8-bit quantization is used.
Note: the quantization aware model is not actually quantized. Creating a quantized model is a separate step.
Quantize the whole model
Use cases:
Subclassed models are not supported.
Tips for better model accuracy:
Try "Quantize some layers" to skip quantizing the layers that reduce accuracy the most.
It is generally better to fine-tune with quantization aware training as opposed to training from scratch.
To make the whole model aware of quantization, apply tfmot.quantization.keras.quantize_model to the model.
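Since fine-tuning on top of pre-trained weights is the recommended route, a minimal fine-tuning sketch (reusing the toy x_train and y_train from the setup; the single epoch is an arbitrary choice) could look like:
# Sketch: brief fine-tuning of the quantization aware model on the toy data above.
quant_aware_model.compile(
    optimizer='adam',
    loss=tf.keras.losses.categorical_crossentropy,
    metrics=['accuracy']
)
quant_aware_model.fit(x_train, y_train, epochs=1)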
End of explanation
# Create a base model
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
# Helper function uses `quantize_annotate_layer` to annotate that only the
# Dense layers should be quantized.
def apply_quantization_to_dense(layer):
if isinstance(layer, tf.keras.layers.Dense):
return tfmot.quantization.keras.quantize_annotate_layer(layer)
return layer
# Use `tf.keras.models.clone_model` to apply `apply_quantization_to_dense`
# to the layers of the model.
annotated_model = tf.keras.models.clone_model(
base_model,
clone_function=apply_quantization_to_dense,
)
# Now that the Dense layers are annotated,
# `quantize_apply` actually makes the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
Explanation: Quantize some layers
Quantizing a model can have a negative effect on accuracy. You can selectively quantize layers of a model to explore the trade-off between accuracy, speed, and model size.
Use cases:
To deploy to a backend that only works well with fully quantized models (e.g. EdgeTPU v1, most DSPs), try "Quantize the whole model".
Tips for better model accuracy:
It is generally better to fine-tune with quantization aware training as opposed to training from scratch.
Try quantizing the later layers instead of the first layers.
Avoid quantizing critical layers (e.g. the attention mechanism).
In the example below, only the Dense layers are quantized.
End of explanation
print(base_model.layers[0].name)
Explanation: While this example used the type of the layer to decide what to quantize, the easiest way to quantize a particular layer is to set its name property, and look for that name in the clone_function.
End of explanation
# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
i = tf.keras.Input(shape=(20,))
x = tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(10))(i)
o = tf.keras.layers.Flatten()(x)
annotated_model = tf.keras.Model(inputs=i, outputs=o)
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
# For deployment purposes, the tool adds `QuantizeLayer` after `InputLayer` so that the
# quantized model can take in float inputs instead of only uint8.
quant_aware_model.summary()
Explanation: More readable, but potentially lower model accuracy
It is not compatible with fine-tuning with quantization aware training, which is why it may be less accurate than the examples above.
Functional example
End of explanation
# Use `quantize_annotate_layer` to annotate that the `Dense` layer
# should be quantized.
annotated_model = tf.keras.Sequential([
tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=input_shape)),
tf.keras.layers.Flatten()
])
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)
quant_aware_model.summary()
Explanation: Sequential example
End of explanation
# Define the model.
base_model = setup_model()
base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
# Save or checkpoint the model.
_, keras_model_file = tempfile.mkstemp('.h5')
quant_aware_model.save(keras_model_file)
# `quantize_scope` is needed for deserializing HDF5 models.
with tfmot.quantization.keras.quantize_scope():
loaded_model = tf.keras.models.load_model(keras_model_file)
loaded_model.summary()
Explanation: Checkpoint and deserialize
Use case: this code is only needed for the HDF5 model format (not HDF5 weights or other formats).
End of explanation
base_model = setup_pretrained_model()
quant_aware_model = tfmot.quantization.keras.quantize_model(base_model)
# Typically you train the model here.
converter = tf.lite.TFLiteConverter.from_keras_model(quant_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
Explanation: Create and deploy quantized model
In general, reference the documentation for the deployment backend that you will use.
This is an example for the TFLite backend.
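A short sketch of what running the converted flatbuffer could look like with the TFLite interpreter (it reuses the toy x_train from the setup; with the default optimizations the model still accepts float inputs):
# Sketch: load the converted model and run one inference with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=quantized_tflite_model)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index, x_train.astype(np.float32))
interpreter.invoke()
print(interpreter.get_tensor(output_index))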
End of explanation
LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer
MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer
class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
# Configure how to quantize weights.
def get_weights_and_quantizers(self, layer):
return [(layer.kernel, LastValueQuantizer(num_bits=8, symmetric=True, narrow_range=False, per_axis=False))]
# Configure how to quantize activations.
def get_activations_and_quantizers(self, layer):
return [(layer.activation, MovingAverageQuantizer(num_bits=8, symmetric=False, narrow_range=False, per_axis=False))]
def set_quantize_weights(self, layer, quantize_weights):
# Add this line for each item returned in `get_weights_and_quantizers`
# , in the same order
layer.kernel = quantize_weights[0]
def set_quantize_activations(self, layer, quantize_activations):
# Add this line for each item returned in `get_activations_and_quantizers`
# , in the same order.
layer.activation = quantize_activations[0]
# Configure how to quantize outputs (may be equivalent to activations).
def get_output_quantizers(self, layer):
return []
def get_config(self):
return {}
Explanation: Experiment with quantization
Use case: using the following APIs means that there is no supported path to deployment. The features are also experimental and not subject to backward compatibility.
tfmot.quantization.keras.QuantizeConfig
tfmot.quantization.keras.quantizers.Quantizer
tfmot.quantization.keras.quantizers.LastValueQuantizer
tfmot.quantization.keras.quantizers.MovingAverageQuantizer
Setup: DefaultDenseQuantizeConfig
Experimenting requires using tfmot.quantization.keras.QuantizeConfig, which describes how to quantize the weights, activations, and outputs of a layer.
Below is an example that defines the same QuantizeConfig used for the Dense layer in the API defaults.
During the forward pass in this example, the LastValueQuantizer returned in get_weights_and_quantizers is called with layer.kernel as the input, producing an output. The output replaces layer.kernel in the Dense layer's original forward pass, via the logic defined in set_quantize_weights. The same idea applies to the activations and outputs.
End of explanation
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class CustomLayer(tf.keras.layers.Dense):
pass
model = quantize_annotate_model(tf.keras.Sequential([
quantize_annotate_layer(CustomLayer(20, input_shape=(20,)), DefaultDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `DefaultDenseQuantizeConfig` with `quantize_scope`
# as well as the custom Keras layer.
with quantize_scope(
{'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig,
'CustomLayer': CustomLayer}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
Explanation: Quantize custom Keras layer
This example uses the DefaultDenseQuantizeConfig to quantize the CustomLayer.
Applying the configuration is the same across the "Experiment with quantization" use cases.
Apply tfmot.quantization.keras.quantize_annotate_layer to the CustomLayer and pass in the QuantizeConfig.
Use tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults.
End of explanation
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
# Configure weights to quantize with 4-bit instead of 8-bits.
def get_weights_and_quantizers(self, layer):
return [(layer.kernel, LastValueQuantizer(num_bits=4, symmetric=True, narrow_range=False, per_axis=False))]
Explanation: Modify quantization parameters
Common mistake: quantizing the bias to fewer than 32 bits usually harms model accuracy too much.
This example modifies the Dense layer to use 4 bits for its weights instead of the default 8 bits. The rest of the model continues to use API defaults.
End of explanation
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this Dense layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
Explanation: Applying the configuration is the same across the "Experiment with quantization" use cases.
Apply tfmot.quantization.keras.quantize_annotate_layer to the Dense layer and pass in the QuantizeConfig.
Use tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults.
End of explanation
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
def get_activations_and_quantizers(self, layer):
# Skip quantizing activations.
return []
def set_quantize_activations(self, layer, quantize_activations):
        # Empty since `get_activations_and_quantizers` returns
# an empty list.
return
Explanation: Modify parts of layers to quantize
This example modifies the Dense layer to skip quantizing the activation. The rest of the model continues to use API defaults.
End of explanation
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this Dense layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
Explanation: Applying the configuration is the same across the "Experiment with quantization" use cases.
Apply tfmot.quantization.keras.quantize_annotate_layer to the Dense layer and pass in the QuantizeConfig.
Use tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults.
End of explanation
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope
class FixedRangeQuantizer(tfmot.quantization.keras.quantizers.Quantizer):
    """Quantizer which forces outputs to be between -1 and 1."""
def build(self, tensor_shape, name, layer):
# Not needed. No new TensorFlow variables needed.
return {}
def __call__(self, inputs, training, weights, **kwargs):
return tf.keras.backend.clip(inputs, -1.0, 1.0)
def get_config(self):
# Not needed. No __init__ parameters to serialize.
return {}
class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
# Configure weights to quantize with 4-bit instead of 8-bits.
def get_weights_and_quantizers(self, layer):
# Use custom algorithm defined in `FixedRangeQuantizer` instead of default Quantizer.
return [(layer.kernel, FixedRangeQuantizer())]
Explanation: Use a custom quantization algorithm
The tfmot.quantization.keras.quantizers.Quantizer class is a callable that can apply any algorithm to its inputs.
In this example, the inputs are the weights, and the math in the FixedRangeQuantizer __call__ function is applied to the weights. Instead of the original weight values, the output of the FixedRangeQuantizer is now passed to whatever would have used the weights.
End of explanation
model = quantize_annotate_model(tf.keras.Sequential([
# Pass in modified `QuantizeConfig` to modify this `Dense` layer.
quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
tf.keras.layers.Flatten()
]))
# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
{'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
# Use `quantize_apply` to actually make the model quantization aware.
quant_aware_model = tfmot.quantization.keras.quantize_apply(model)
quant_aware_model.summary()
Explanation: Applying the configuration is the same across the "Experiment with quantization" use cases.
Apply tfmot.quantization.keras.quantize_annotate_layer to the Dense layer and pass in the QuantizeConfig.
Use tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults.
End of explanation |
582 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integration Exercise 2
Imports
Step1: Definite integrals
Here is a table of definite integrals. Many of these integrals have a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps
Step2: Integral 1
$$ I_1 = \int_0^a {\sqrt{a^2-x^2} dx} = \frac{\pi a^2}{4} $$
Step3: Integral 2
$$ I_2 = \int_0^{\frac{\pi}{2}} {\sin^2{x}}{ } {dx} = \frac{\pi}{4} $$
Step4: Integral 3
$$ I_3 = \int_0^{2\pi} \frac{dx}{a+b\sin{x}} = {\frac{2\pi}{\sqrt{a^2-b^2}}} $$
Step5: Integral 4
$$ I_4 = \int_0^{\infty} \frac{x}{e^{x}+1} = {\frac{\pi^2}{12}} $$
Step6: Integral 5
$$ I_5 = \int_0^{\infty} \frac{x}{e^{x}-1} = {\frac{\pi^2}{6}} $$ | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import integrate
Explanation: Integration Exercise 2
Imports
End of explanation
def integrand(x, a):
return 1.0/(x**2 + a**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.pi/a
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Definite integrals
Here is a table of definite integrals. Many of these integrals have a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps:
Typeset the integral using LateX in a Markdown cell.
Define an integrand function that computes the value of the integrand.
Define an integral_approx funciton that uses scipy.integrate.quad to peform the integral.
Define an integral_exact function that computes the exact value of the integral.
Call and print the return value of integral_approx and integral_exact for one set of parameters.
Here is an example to show what your solutions should look like:
Example
Here is the integral I am performing:
$$ I_1 = \int_0^\infty \frac{dx}{x^2 + a^2} = \frac{\pi}{2a} $$
End of explanation
def integrand(x, a):
return np.sqrt(a**2 - x**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, a, args=(a,))
return I
def integral_exact(a):
    return 0.25*np.pi*a**2
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 1
$$ I_1 = \int_0^a {\sqrt{a^2-x^2} dx} = \frac{\pi a^2}{4} $$
End of explanation
def integrand(x):
return np.sin(x)**2
def integral_approx():
I, e = integrate.quad(integrand, 0, np.pi/2)
return I
def integral_exact():
return 0.25*np.pi
print("Numerical: ", integral_approx())
print("Exact : ", integral_exact())
assert True # leave this cell to grade the above integral
Explanation: Integral 2
$$ I_2 = \int_0^{\frac{\pi}{2}} {\sin^2{x}}{ } {dx} = \frac{\pi}{4} $$
End of explanation
def integrand(x,a,b):
return 1/(a+ b*np.sin(x))
def integral_approx(a,b):
I, e = integrate.quad(integrand, 0, 2*np.pi,args=(a,b))
return I
def integral_exact(a,b):
return 2*np.pi/np.sqrt(a**2-b**2)
print("Numerical: ", integral_approx(10,0))
print("Exact : ", integral_exact(10,0))
assert True # leave this cell to grade the above integral
Explanation: Integral 3
$$ I_3 = \int_0^{2\pi} \frac{dx}{a+b\sin{x}} = {\frac{2\pi}{\sqrt{a^2-b^2}}} $$
End of explanation
def integrand(x):
return x/(np.exp(x)+1)
def integral_approx():
I, e = integrate.quad(integrand, 0, np.inf)
return I
def integral_exact():
return (1/12)*np.pi**2
print("Numerical: ", integral_approx())
print("Exact : ", integral_exact())
assert True # leave this cell to grade the above integral
Explanation: Integral 4
$$ I_4 = \int_0^{\infty} \frac{x}{e^{x}+1} = {\frac{\pi^2}{12}} $$
End of explanation
def integrand(x):
return x/(np.exp(x)-1)
def integral_approx():
I, e = integrate.quad(integrand, 0, np.inf)
return I
def integral_exact():
return (1/6)*np.pi**2
print("Numerical: ", integral_approx())
print("Exact : ", integral_exact())
assert True # leave this cell to grade the above integral
Explanation: Integral 5
$$ I_5 = \int_0^{\infty} \frac{x}{e^{x}-1} = {\frac{\pi^2}{6}} $$
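As a symbolic sanity check on the closed forms quoted in this exercise, the exact values can also be reproduced with a computer algebra system (a sketch using sympy, which is not among the imports above):
import sympy as sp

# Sketch: verify the closed form of I_5 symbolically; the expected result is pi**2/6.
x = sp.symbols("x", positive=True)
print(sp.integrate(x / (sp.exp(x) - 1), (x, 0, sp.oo)))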
End of explanation |
583 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transforms and Resampling <a href="https
Step1: Creating and Manipulating Transforms
A number of different spatial transforms are available in SimpleITK.
The simplest is the Identity Transform. This transform simply returns input points unaltered.
Step2: Transforms are defined by two sets of parameters, the Parameters and FixedParameters. FixedParameters are not changed during the optimization process when performing registration. For the TranslationTransform, the Parameters are the values of the translation Offset.
Step3: The affine transform is capable of representing translations, rotations, shearing, and scaling.
Step4: A number of other transforms exist to represent non-affine deformations, well-behaved rotation in 3D, etc. See the Transforms tutorial for more information.
Applying Transforms to Images
Create a function to display the images that is aware of image spacing.
Step5: Create a grid image.
Step6: To apply the transform, a resampling operation is required.
Step7: What happened? The translation is positive in both directions. Why does the output image move down and to the left? It is important to keep in mind that a transform in a resampling operation defines the transform from the output space to the input space.
Step8: An affine (line preserving) transformation can perform translation
Step9: or scaling
Step10: or rotation
Step11: or shearing
Step12: Composite Transform
It is possible to compose multiple transforms together into a single transform object. With a composite transform, multiple resampling operations are prevented, so interpolation errors are not accumulated. For example, an affine transformation that consists of a translation and rotation,
Step13: can also be represented with two Transform objects applied in sequence with a Composite Transform,
Step14: Beware, transforms are non-commutative -- order matters!
Step15: Resampling
<img src="resampling.svg"/><br><br>
Resampling, as the verb implies, is the action of sampling an image, which itself is a sampling of an original continuous signal.
Generally speaking, resampling in SimpleITK involves four components
Step16: Common Errors
It is not uncommon to end up with an empty (all black) image after resampling. This is due to
Step17: Are you puzzled by the result? Is the output just a copy of the input? Add a rotation to the code above and see what happens (euler2d.SetAngle(0.79)).
Resampling at a set of locations
In some cases you may be interested in obtaining the intensity values at a set of points (e.g. coloring the vertices of a mesh model segmented from an image).
The code below generates a random point set in the image and resamples the intensity values at these locations. It is written so that it works for all image-dimensions and types (scalar or vector pixels).
Step18: <font color="red">Homework
Step20: <font color="red">Homework | Python Code:
import SimpleITK as sitk
import numpy as np
%matplotlib inline
import gui
from matplotlib import pyplot as plt
from ipywidgets import interact, fixed
# Utility method that either downloads data from the Girder repository or
# if already downloaded returns the file name for reading from disk (cached data).
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
Explanation: Transforms and Resampling <a href="https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F21_Transforms_and_Resampling.ipynb"><img style="float: right;" src="https://mybinder.org/badge_logo.svg"></a>
This notebook explains how to apply transforms to images, and how to perform image resampling.
End of explanation
dimension = 2
print("*Identity Transform*")
identity = sitk.Transform(dimension, sitk.sitkIdentity)
print("Dimension: " + str(identity.GetDimension()))
# Points are always defined in physical space
point = (1.0, 1.0)
def transform_point(transform, point):
transformed_point = transform.TransformPoint(point)
print("Point " + str(point) + " transformed is " + str(transformed_point))
transform_point(identity, point)
Explanation: Creating and Manipulating Transforms
A number of different spatial transforms are available in SimpleITK.
The simplest is the Identity Transform. This transform simply returns input points unaltered.
End of explanation
print("*Translation Transform*")
translation = sitk.TranslationTransform(dimension)
print("Parameters: " + str(translation.GetParameters()))
print("Offset: " + str(translation.GetOffset()))
print("FixedParameters: " + str(translation.GetFixedParameters()))
transform_point(translation, point)
print("")
translation.SetParameters((3.1, 4.4))
print("Parameters: " + str(translation.GetParameters()))
transform_point(translation, point)
Explanation: Transforms are defined by two sets of parameters, the Parameters and FixedParameters. FixedParameters are not changed during the optimization process when performing registration. For the TranslationTransform, the Parameters are the values of the translation Offset.
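A small check of that relationship (a sketch reusing the translation transform defined above): for a TranslationTransform, the offset and the parameters are two views of the same values.
# Sketch: setting the offset changes the parameters, and vice versa.
translation.SetOffset((3.1, 4.4))
print(translation.GetOffset())      # (3.1, 4.4)
print(translation.GetParameters())  # (3.1, 4.4)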
End of explanation
print("*Affine Transform*")
affine = sitk.AffineTransform(dimension)
print("Parameters: " + str(affine.GetParameters()))
print("FixedParameters: " + str(affine.GetFixedParameters()))
transform_point(affine, point)
print("")
affine.SetTranslation((3.1, 4.4))
print("Parameters: " + str(affine.GetParameters()))
transform_point(affine, point)
Explanation: The affine transform is capable of representing translations, rotations, shearing, and scaling.
End of explanation
def myshow(img, title=None, margin=0.05, dpi=80):
nda = sitk.GetArrayViewFromImage(img)
spacing = img.GetSpacing()
ysize = nda.shape[0]
xsize = nda.shape[1]
figsize = (1 + margin) * ysize / dpi, (1 + margin) * xsize / dpi
fig = plt.figure(title, figsize=figsize, dpi=dpi)
ax = fig.add_axes([margin, margin, 1 - 2 * margin, 1 - 2 * margin])
extent = (0, xsize * spacing[1], 0, ysize * spacing[0])
t = ax.imshow(
nda, extent=extent, interpolation="hamming", cmap="gray", origin="lower"
)
if title:
plt.title(title)
Explanation: A number of other transforms exist to represent non-affine deformations, well-behaved rotation in 3D, etc. See the Transforms tutorial for more information.
Applying Transforms to Images
Create a function to display the images that is aware of image spacing.
End of explanation
grid = sitk.GridSource(
outputPixelType=sitk.sitkUInt16,
size=(250, 250),
sigma=(0.5, 0.5),
gridSpacing=(5.0, 5.0),
gridOffset=(0.0, 0.0),
spacing=(0.2, 0.2),
)
myshow(grid, "Grid Input")
Explanation: Create a grid image.
End of explanation
def resample(image, transform):
# Output image Origin, Spacing, Size, Direction are taken from the reference
# image in this call to Resample
reference_image = image
interpolator = sitk.sitkCosineWindowedSinc
default_value = 100.0
return sitk.Resample(image, reference_image, transform, interpolator, default_value)
translation.SetOffset((3.1, 4.6))
transform_point(translation, point)
resampled = resample(grid, translation)
myshow(resampled, "Resampled Translation")
Explanation: To apply the transform, a resampling operation is required.
End of explanation
translation.SetOffset(-1 * np.array(translation.GetParameters()))
transform_point(translation, point)
resampled = resample(grid, translation)
myshow(resampled, "Inverse Resampled")
Explanation: What happened? The translation is positive in both directions. Why does the output image move down and to the left? It is important to keep in mind that a transform in a resampling operation defines the transform from the output space to the input space.
End of explanation
def affine_translate(transform, x_translation=3.1, y_translation=4.6):
new_transform = sitk.AffineTransform(transform)
new_transform.SetTranslation((x_translation, y_translation))
resampled = resample(grid, new_transform)
myshow(resampled, "Translated")
return new_transform
affine = sitk.AffineTransform(dimension)
interact(
affine_translate,
transform=fixed(affine),
x_translation=(-5.0, 5.0),
y_translation=(-5.0, 5.0),
);
Explanation: An affine (line preserving) transformation can perform translation:
End of explanation
def affine_scale(transform, x_scale=3.0, y_scale=0.7):
new_transform = sitk.AffineTransform(transform)
matrix = np.array(transform.GetMatrix()).reshape((dimension, dimension))
matrix[0, 0] = x_scale
matrix[1, 1] = y_scale
new_transform.SetMatrix(matrix.ravel())
resampled = resample(grid, new_transform)
myshow(resampled, "Scaled")
print(matrix)
return new_transform
affine = sitk.AffineTransform(dimension)
interact(affine_scale, transform=fixed(affine), x_scale=(0.2, 5.0), y_scale=(0.2, 5.0));
Explanation: or scaling:
End of explanation
def affine_rotate(transform, degrees=15.0):
parameters = np.array(transform.GetParameters())
new_transform = sitk.AffineTransform(transform)
matrix = np.array(transform.GetMatrix()).reshape((dimension, dimension))
radians = -np.pi * degrees / 180.0
rotation = np.array(
[[np.cos(radians), -np.sin(radians)], [np.sin(radians), np.cos(radians)]]
)
new_matrix = np.dot(rotation, matrix)
new_transform.SetMatrix(new_matrix.ravel())
resampled = resample(grid, new_transform)
print(new_matrix)
myshow(resampled, "Rotated")
return new_transform
affine = sitk.AffineTransform(dimension)
interact(affine_rotate, transform=fixed(affine), degrees=(-90.0, 90.0));
Explanation: or rotation:
End of explanation
def affine_shear(transform, x_shear=0.3, y_shear=0.1):
new_transform = sitk.AffineTransform(transform)
matrix = np.array(transform.GetMatrix()).reshape((dimension, dimension))
matrix[0, 1] = -x_shear
matrix[1, 0] = -y_shear
new_transform.SetMatrix(matrix.ravel())
resampled = resample(grid, new_transform)
myshow(resampled, "Sheared")
print(matrix)
return new_transform
affine = sitk.AffineTransform(dimension)
interact(affine_shear, transform=fixed(affine), x_shear=(0.1, 2.0), y_shear=(0.1, 2.0));
Explanation: or shearing:
End of explanation
translate = (8.0, 16.0)
rotate = 20.0
affine = sitk.AffineTransform(dimension)
affine = affine_translate(affine, translate[0], translate[1])
affine = affine_rotate(affine, rotate)
resampled = resample(grid, affine)
myshow(resampled, "Single Transform")
Explanation: Composite Transform
It is possible to compose multiple transforms together into a single transform object. With a composite transform, multiple resampling operations are prevented, so interpolation errors are not accumulated. For example, an affine transformation that consists of a translation and rotation,
End of explanation
composite = sitk.CompositeTransform(dimension)
translation = sitk.TranslationTransform(dimension)
translation.SetOffset(-1 * np.array(translate))
composite.AddTransform(translation)
affine = sitk.AffineTransform(dimension)
affine = affine_rotate(affine, rotate)
composite.AddTransform(translation)
composite = sitk.CompositeTransform(dimension)
composite.AddTransform(affine)
resampled = resample(grid, composite)
myshow(resampled, "Two Transforms")
Explanation: can also be represented with two Transform objects applied in sequence with a Composite Transform,
End of explanation
composite = sitk.CompositeTransform(dimension)
composite.AddTransform(affine)
composite.AddTransform(translation)
resampled = resample(grid, composite)
myshow(resampled, "Composite transform in reverse order")
Explanation: Beware, transforms are non-commutative -- order matters!
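One way to see the non-commutativity directly is to compare where a single point ends up under the two orderings (a sketch reusing the affine and translation transforms defined above):
# Sketch: the same two transforms composed in opposite orders map a point differently.
t_a = sitk.CompositeTransform(dimension)
t_a.AddTransform(affine)
t_a.AddTransform(translation)
t_b = sitk.CompositeTransform(dimension)
t_b.AddTransform(translation)
t_b.AddTransform(affine)
print(t_a.TransformPoint((10.0, 10.0)))
print(t_b.TransformPoint((10.0, 10.0)))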
End of explanation
def resample_display(image, euler2d_transform, tx, ty, theta):
euler2d_transform.SetTranslation((tx, ty))
euler2d_transform.SetAngle(theta)
resampled_image = sitk.Resample(image, euler2d_transform)
plt.imshow(sitk.GetArrayFromImage(resampled_image))
plt.axis("off")
plt.show()
logo = sitk.ReadImage(fdata("SimpleITK.jpg"))
euler2d = sitk.Euler2DTransform()
# Why do we set the center?
euler2d.SetCenter(
logo.TransformContinuousIndexToPhysicalPoint(np.array(logo.GetSize()) / 2.0)
)
interact(
resample_display,
image=fixed(logo),
euler2d_transform=fixed(euler2d),
tx=(-128.0, 128.0, 2.5),
ty=(-64.0, 64.0),
theta=(-np.pi / 4.0, np.pi / 4.0),
);
Explanation: Resampling
<img src="resampling.svg"/><br><br>
Resampling, as the verb implies, is the action of sampling an image, which itself is a sampling of an original continuous signal.
Generally speaking, resampling in SimpleITK involves four components:
1. Image - the image we resample, given in coordinate system $m$.
2. Resampling grid - a regular grid of points given in coordinate system $f$ which will be mapped to coordinate system $m$.
3. Transformation $T_f^m$ - maps points from coordinate system $f$ to coordinate system $m$, $^mp = T_f^m(^fp)$.
4. Interpolator - method for obtaining the intensity values at arbitrary points in coordinate system $m$ from the values of the points defined by the Image.
While SimpleITK provides a large number of interpolation methods, the two most commonly used are sitkLinear and sitkNearestNeighbor. The former is used for most interpolation tasks, a compromise between accuracy and computational efficiency. The latter is used to interpolate labeled images representing a segmentation, as it is the only interpolation approach which will not introduce new labels into the result.
SimpleITK's procedural API provides three methods for performing resampling, with the difference being the way you specify the resampling grid:
Resample(const Image &image1, Transform transform, InterpolatorEnum interpolator, double defaultPixelValue, PixelIDValueEnum outputPixelType)
Resample(const Image &image1, const Image &referenceImage, Transform transform, InterpolatorEnum interpolator, double defaultPixelValue, PixelIDValueEnum outputPixelType)
Resample(const Image &image1, std::vector< uint32_t > size, Transform transform, InterpolatorEnum interpolator, std::vector< double > outputOrigin, std::vector< double > outputSpacing, std::vector< double > outputDirection, double defaultPixelValue, PixelIDValueEnum outputPixelType)
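For instance, the second overload can be exercised with the objects defined above (a sketch; it simply reuses logo as its own reference grid, so only the transform and interpolator differ from the earlier call):
# Sketch: resample logo onto the grid of a reference image (here, itself),
# using nearest neighbor interpolation and a default pixel value of 0.
resampled_ref = sitk.Resample(
    logo, logo, euler2d, sitk.sitkNearestNeighbor, 0.0, logo.GetPixelID()
)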
End of explanation
euler2d = sitk.Euler2DTransform()
# Why do we set the center?
euler2d.SetCenter(
logo.TransformContinuousIndexToPhysicalPoint(np.array(logo.GetSize()) / 2.0)
)
tx = 64
ty = 32
euler2d.SetTranslation((tx, ty))
extreme_points = [
logo.TransformIndexToPhysicalPoint((0, 0)),
logo.TransformIndexToPhysicalPoint((logo.GetWidth(), 0)),
logo.TransformIndexToPhysicalPoint((logo.GetWidth(), logo.GetHeight())),
logo.TransformIndexToPhysicalPoint((0, logo.GetHeight())),
]
inv_euler2d = euler2d.GetInverse()
extreme_points_transformed = [inv_euler2d.TransformPoint(pnt) for pnt in extreme_points]
min_x = min(extreme_points_transformed)[0]
min_y = min(extreme_points_transformed, key=lambda p: p[1])[1]
max_x = max(extreme_points_transformed)[0]
max_y = max(extreme_points_transformed, key=lambda p: p[1])[1]
# Use the original spacing (arbitrary decision).
output_spacing = logo.GetSpacing()
# Identity cosine matrix (arbitrary decision).
output_direction = [1.0, 0.0, 0.0, 1.0]
# Minimal x,y coordinates are the new origin.
output_origin = [min_x, min_y]
# Compute grid size based on the physical size and spacing.
output_size = [
int((max_x - min_x) / output_spacing[0]),
int((max_y - min_y) / output_spacing[1]),
]
resampled_image = sitk.Resample(
logo,
output_size,
euler2d,
sitk.sitkLinear,
output_origin,
output_spacing,
output_direction,
)
plt.imshow(sitk.GetArrayViewFromImage(resampled_image))
plt.axis("off")
plt.show()
Explanation: Common Errors
It is not uncommon to end up with an empty (all black) image after resampling. This is due to:
1. Using wrong settings for the resampling grid, not too common, but does happen.
2. Using the inverse of the transformation $T_f^m$. This is a relatively common error, which is readily addressed by invoking the transformation's GetInverse method.
Defining the Resampling Grid
In the example above we arbitrarily used the original image grid as the resampling grid. As a result, for many of the transformations the resulting image contained black pixels, pixels which were mapped outside the spatial domain of the original image and a partial view of the original image.
If we want the resulting image to contain all of the original image no matter the transformation, we will need to define the resampling grid using our knowledge of the original image's spatial domain and the inverse of the given transformation.
Computing the bounds of the resampling grid when dealing with an affine transformation is straightforward. An affine transformation preserves convexity with extreme points mapped to extreme points. Thus we only need to apply the inverse transformation to the corners of the original image to obtain the bounds of the resampling grid.
Computing the bounds of the resampling grid when dealing with a BSplineTransform or DisplacementFieldTransform is more involved as we are not guaranteed that extreme points are mapped to extreme points. This requires that we apply the inverse transformation to all points in the original image to obtain the bounds of the resampling grid.
End of explanation
img = logo
# Generate random samples inside the image, we will obtain the intensity/color values at these points.
num_samples = 10
physical_points = []
for pnt in zip(*[list(np.random.random(num_samples) * sz) for sz in img.GetSize()]):
physical_points.append(img.TransformContinuousIndexToPhysicalPoint(pnt))
# Create an image of size [num_samples,1...1], actual size is dependent on the image dimensionality. The pixel
# type is irrelevant, as the image is just defining the interpolation grid (sitkUInt8 has minimal memory footprint).
interp_grid_img = sitk.Image(
[num_samples] + [1] * (img.GetDimension() - 1), sitk.sitkUInt8
)
# Define the displacement field transformation, maps the points in the interp_grid_img to the points in the actual
# image.
displacement_img = sitk.Image(
[num_samples] + [1] * (img.GetDimension() - 1),
sitk.sitkVectorFloat64,
img.GetDimension(),
)
for i, pnt in enumerate(physical_points):
displacement_img[[i] + [0] * (img.GetDimension() - 1)] = np.array(pnt) - np.array(
interp_grid_img.TransformIndexToPhysicalPoint(
[i] + [0] * (img.GetDimension() - 1)
)
)
# Actually perform the resampling. The only relevant choice here is the interpolator. The default_output_pixel_value
# is set to 0.0, but the resampling should never use it because we expect all points to be inside the image and this
# value is only used if the point is outside the image extent.
interpolator_enum = sitk.sitkLinear
default_output_pixel_value = 0.0
output_pixel_type = (
sitk.sitkFloat32
if img.GetNumberOfComponentsPerPixel() == 1
else sitk.sitkVectorFloat32
)
resampled_points = sitk.Resample(
img,
interp_grid_img,
sitk.DisplacementFieldTransform(displacement_img),
interpolator_enum,
default_output_pixel_value,
output_pixel_type,
)
# Print the interpolated values per point
for i in range(resampled_points.GetWidth()):
print(
str(physical_points[i])
+ ": "
+ str(resampled_points[[i] + [0] * (img.GetDimension() - 1)])
+ "\n"
)
Explanation: Are you puzzled by the result? Is the output just a copy of the input? Add a rotation to the code above and see what happens (euler2d.SetAngle(0.79)).
Resampling at a set of locations
In some cases you may be interested in obtaining the intensity values at a set of points (e.g. coloring the vertices of a mesh model segmented from an image).
The code below generates a random point set in the image and resamples the intensity values at these locations. It is written so that it works for all image-dimensions and types (scalar or vector pixels).
End of explanation
file_names = ["cxr.dcm", "photo.dcm", "POPI/meta/00-P.mhd", "training_001_ct.mha"]
images = []
image_file_reader = sitk.ImageFileReader()
for fname in file_names:
image_file_reader.SetFileName(fdata(fname))
image_file_reader.ReadImageInformation()
image_size = list(image_file_reader.GetSize())
# 2D image posing as a 3D one
if len(image_size) == 3 and image_size[2] == 1:
image_size[2] = 0
image_file_reader.SetExtractSize(image_size)
images.append(image_file_reader.Execute())
# 2D image
elif len(image_size) == 2:
images.append(image_file_reader.Execute())
# 3D image grab middle x-z slice
elif len(image_size) == 3:
start_index = [0, image_size[1] // 2, 0]
image_size[1] = 0
image_file_reader.SetExtractSize(image_size)
image_file_reader.SetExtractIndex(start_index)
images.append(image_file_reader.Execute())
# 4/5D image
else:
raise ValueError(f"{len(image_size)}D image not supported.")
# Notice that in the display the coronal slices are flipped. As we are
# using matplotlib for display, it is not aware of radiological conventions
# and treats the image as an isotropic array of pixels.
gui.multi_image_display2D(images);
Explanation: <font color="red">Homework:</font> creating a color mesh
You will now use the code for resampling at arbitrary locations to create a colored mesh.
Using the color image of the visible human head [img = sitk.ReadImage(fdata('vm_head_rgb.mha'))]:
1. Implement the marching cubes algorithm to obtain the set of triangles corresponding to the iso-surface of structures of interest (skin, white matter,...).
2. Find the color associated with each of the triangle vertices using the code above.
3. Save the data using the ASCII version of the PLY, Polygon File Format (a.k.a. Stanford Triangle Format); a small sketch of this file layout follows after this list.
4. Use meshlab to view your creation.
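A small added sketch of the ASCII PLY layout for step 3 (the inputs are assumptions: vertices as (x, y, z) tuples, colors as 0-255 (r, g, b) tuples, faces as vertex-index triples):
def write_colored_ply(file_name, vertices, colors, faces):
    # Minimal ASCII PLY writer for a colored triangle mesh.
    with open(file_name, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex {0}\n".format(len(vertices)))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("element face {0}\n".format(len(faces)))
        f.write("property list uchar int vertex_indices\n")
        f.write("end_header\n")
        for (x, y, z), (r, g, b) in zip(vertices, colors):
            f.write("{0} {1} {2} {3} {4} {5}\n".format(x, y, z, int(r), int(g), int(b)))
        for (i, j, k) in faces:
            f.write("3 {0} {1} {2}\n".format(i, j, k))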
Creating thumbnails - changing image size, spacing and intensity range
As bio-medical images are most often anisotropic, have a non-uniform size (number of pixels) and a high dynamic range of intensities, some caution is required when converting them to an arbitrary desired size with isotropic spacing and the more common low dynamic intensity range.
The code in the following cells illustrates how to take an arbitrary set of images with various sizes, spacings and intensities and resize all of them to a common arbitrary size, isotropic spacing, and low dynamic intensity range.
End of explanation
def resize_and_scale_uint8(image, new_size, outside_pixel_value=0):
Resize the given image to the given size, with isotropic pixel spacing
and scale the intensities to [0,255].
Resizing retains the original aspect ratio, with the original image centered
in the new image. Padding is added outside the original image extent using the
provided value.
:param image: A SimpleITK image.
:param new_size: List of ints specifying the new image size.
:param outside_pixel_value: Value in [0,255] used for padding.
:return: a 2D SimpleITK image with desired size and a pixel type of sitkUInt8
# Rescale intensities if scalar image with pixel type that isn't sitkUInt8.
# We rescale first, so that the zero padding makes sense for all original image
# ranges. If we resized first, a value of zero in a high dynamic range image may
# be somewhere in the middle of the intensity range and the outer border has a
# constant but arbitrary value.
if (
image.GetNumberOfComponentsPerPixel() == 1
and image.GetPixelID() != sitk.sitkUInt8
):
final_image = sitk.Cast(sitk.RescaleIntensity(image), sitk.sitkUInt8)
else:
final_image = image
new_spacing = [
((osz - 1) * ospc) / (nsz - 1)
for ospc, osz, nsz in zip(
final_image.GetSpacing(), final_image.GetSize(), new_size
)
]
new_spacing = [max(new_spacing)] * final_image.GetDimension()
center = final_image.TransformContinuousIndexToPhysicalPoint(
[sz / 2.0 for sz in final_image.GetSize()]
)
new_origin = [
c - c_index * nspc
for c, c_index, nspc in zip(center, [sz / 2.0 for sz in new_size], new_spacing)
]
final_image = sitk.Resample(
final_image,
size=new_size,
outputOrigin=new_origin,
outputSpacing=new_spacing,
defaultPixelValue=outside_pixel_value,
)
return final_image
# Select the arbitrary new size
new_size = [128, 128]
resized_images = [resize_and_scale_uint8(image, new_size, 50) for image in images]
gui.multi_image_display2D(resized_images);
Explanation: <font color="red">Homework:</font> Why do some of the images displayed above look different from others?
What are the differences between the various images in the images list? Write code to query them and check their intensity ranges, sizes and spacings.
The next cell illustrates how to resize all images to an arbitrary size, using isotropic spacing while maintaining the original aspect ratio.
End of explanation |
584 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Acquire the Data
"Data is the new oil"
Ways to acquire data (typical data source)
Download from an internal system
Obtained from client, or other 3rd party
Extracted from a web-based API
Scraped from a website
Extracted from a PDF file
Gathered manually and recorded
Data Formats
- Flat files (e.g. csv)
- Excel files
- Database (e.g. MySQL)
- JSON
- HDFS (Hadoop)
Two Datasets
- Price of Weed in US
- Demographic data by US State
1.1 - Crowdsource the Price of Weed dataset
The Price of Weed website - http
Step1: 1.6 Viewing the Data
Step2: 1.7 Slicing columns using pandas
Step3: Exercise
1) Load the Demographics_State.csv dataset
2) Show the five first rows of the dataset
3) Select the column with the State name in the data frame
4) Get help
5) Change index to date
6) Get all the data for 2nd January 2014
Thinking in Vectors
Difference between loops and vectors
Step4: Exercise | Python Code:
# Load the libraries
import pandas as pd
import numpy as np
# Load the dataset
df = pd.read_csv("data/Weed_Price.csv")
# Shape of the dataset - rows & columns
df.shape
# Check for type of each variable
df.dtypes
# Let's load this again with the date parsed as a date type
df = pd.read_csv("data/Weed_Price.csv", parse_dates=[-1])
# Now check the type of each column again
df.dtypes
# Get the names of all columns
df.columns
# Get the index of all rows
df.index
Explanation: 1. Acquire the Data
"Data is the new oil"
Ways to acquire data (typical data source)
Download from an internal system
Obtained from client, or other 3rd party
Extracted from a web-based API
Scraped from a website
Extracted from a PDF file
Gathered manually and recorded
Data Formats
- Flat files (e.g. csv)
- Excel files
- Database (e.g. MySQL)
- JSON
- HDFS (Hadoop)
Two Datasets
- Price of Weed in US
- Demographic data by US State
1.1 - Crowdsource the Price of Weed dataset
The Price of Weed website - http://www.priceofweed.com/
Crowdsources the price paid by people on the street to get weed. Self Reported.
- Location is auto detected or can be choosen
- Quality is classified in three categories
- High
- Medium
- Low
- Price by weight
- an ounce
- a half ounce
- a quarter
- an eighth
- 10 grams
- 5 grams
- 1 gram
- Strain (though not showed in the dataset)
Reported at individual transaction level
Here is a sample data set from United States - http://www.priceofweed.com/prices/United-States.html
See note - Averages are corrected for outliers based on standard deviation from the mean.
1.2 Scrape the data
Frank Bi from The Verge wrote a script to scrape the data daily. The daily prices are available on github at https://github.com/frankbi/price-of-weed
Here is sample data from one day - 23rd July 2015 - https://github.com/frankbi/price-of-weed/blob/master/data/weedprices23072015.csv
1.3 Combine the data
All the csv files for each day were combined into one large csv. Done by YHAT.
http://blog.yhathq.com/posts/7-funny-datasets.html
1.4 Key Questions / Assumptions
Data is an abstraction of the reality.
What assumptions have been in this entire data collections process?
Are we aware of the assumptions in this process?
How to ensure that the data is accurate or representative for the question we are trying to answer?
1.5 Loading the Data
End of explanation
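For reference, pandas ships a reader for most of the formats listed above (an added note; the file names below are hypothetical, not part of this repo):
# Added note with hypothetical file names - one pandas reader per format:
# pd.read_excel("data/example.xlsx")   # Excel files
# pd.read_json("data/example.json")    # JSON
# pd.read_sql(query, db_connection)    # databases such as MySQL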
# Can we see some sample rows - the top 5 rows
df.head()
# Can we see some sample rows - the bottom 5 rows
df.tail()
# Get specific rows
df[20:25]
# Can we access a specific column
df["State"]
# Using the dot notation
df.State
# Selecting specific column and rows
df[0:5]["State"]
# Works both ways
df["State"][0:5]
#Getting unique values of State
pd.unique(df['State'])
Explanation: 1.6 Viewing the Data
End of explanation
df.index
df.loc[0]
df.iloc[0,0]
df.ix[0,0]
Explanation: 1.7 Slicing columns using pandas
End of explanation
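A short added note on the accessors used above: .loc selects by index label, .iloc by integer position, and .ix (mixed) is deprecated in later pandas versions.
# label-based selection: rows with index labels 0-2, column by name
df.loc[0:2, 'State']
# position-based selection: first three rows, first column
df.iloc[0:3, 0]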
#Find weighted average price with respective weights of 0.6, 0.4 for HighQ and MedQ
#Python approach. Loop over all rows.
#For each row, multiply the respective columns by those weights.
#Add the output to an array
#It is easy to convert pandas series to numpy array.
highq_np = np.array(df.HighQ)
medq_np = np.array(df.MedQ)
#Standard pythonic code
def find_weighted_price():
global weighted_price
weighted_price = []
for i in range(df.shape[0]):
weighted_price.append(0.6*highq_np[i] + 0.4*medq_np[i])
#print the weighted price
find_weighted_price()
print weighted_price
Explanation: Exercise
1) Load the Demographics_State.csv dataset
2) Show the five first rows of the dataset
3) Select the column with the State name in the data frame
4) Get help
5) Change index to date
6) Get all the data for 2nd January 2014
Thinking in Vectors
Difference between loops and vectors
End of explanation
#Vectorized Code
weighted_price_vec = 0.6*highq_np + 0.4*medq_np
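As a quick way to compare the two approaches (an added sketch using the standard time module; absolute numbers will vary by machine and dataset size):
import time
start = time.time()
find_weighted_price()                               # loop version defined above
loop_seconds = time.time() - start
start = time.time()
weighted_price_vec = 0.6*highq_np + 0.4*medq_np     # vectorized version
vec_seconds = time.time() - start
print loop_seconds, vec_seconds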
Explanation: Exercise: Find the running time of the above program
End of explanation |
585 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Point Charge Dynamics
Akiva Lipshitz, February 2, 2017
Particles and their dynamics are incredibly fascinating, even wondrous. Give me some particles and some simple equations describing their interactions – some very interesting things can start happening.
Currently studying electrostatics in my physics class, I am interested in not only the static force and field distributions but also in the dynamics of particles in such fields. To study the dynamics of electric particles is not an easy endeavor – in fact the differential equations governing their dynamics are quite complex and not easily solved manually, especially by someone who lacks a background in differential equations.
Instead of relying on our analytical abilities, we may rely on our computational abilities and numerically solve the differential equations. Herein I will develop a scheme for computing the dynamics of $n$ electric particles en masse. It will not be computationally easy – the number of operations grows proportionally to $n^2$. For less than $10^4$ particles you should be able to simulate the particle dynamics for long enough time intervals to be useful. But for something like $10^6$ particles the problem is intractable. You'll need to do more than $10^{12}$ operations per iteration and a degree in numerical analysis.
Governing Equations
Given $n$ charges $q_1, q_2, ..., q_n$, with masses $m_1, m_2, ..., m_n$ located at positions $\vec{r}_1, \vec{r_2}, ..., \vec{r}_n$, the force induced on $q_i$ by $q_j$ is given by
$$\vec{F}_{j \to i} = k\frac{q_iq_j}{\left|\vec{r}_i-\vec{r}_j\right|^2}\hat{r}_{ij}$$
where
$$\hat{r}_{ij} = \vec{r}_i-\vec{r}_j$$
Now, the net marginal force on particle $q_i$ is given as the sum of the pairwise forces
$$\vec{F}_{N, i} = \sum_{j \ne i}{\vec{F}_{j \to i}}$$
And then the net acceleration of particle $q_i$ just normalizes the force by the mass of the particle
Step2: Let's define our time intervals, so that odeint knows which time stamps to iterate over.
Step3: Some other constants
Step4: We get to choose the initial positions and velocities of our particles. For our initial tests, we'll set up 3 particles to collide with each other.
Step5: And pack them into an initial state variable we can pass to odeint.
Step6: The Fun Part – Doing the Integration
Now, we'll actually do the integration | Python Code:
import numpy as np
import numpy.ma as ma
from scipy.integrate import odeint
mag = lambda r: np.sqrt(np.sum(np.power(r, 2)))
def g(y, t, q, m, n,d, k):
n: the number of particles
d: the number of dimensions
(for fun's sake I want this
to work for k-dimensional systems)
y: an (n*2,d) dimensional matrix
where y[:n]_i is the position
of the ith particle and
y[n:]_i is the velocity of
the ith particle
qs: the particle charges
ms: the particle masses
k: the electric constant
t: the current timestamp
# r1, r2, dr1dt, dr2dt = np.copy(y.reshape((n*2,d)))
# F = -1./mag(r2-r1)**2
# dy = [
# dr1dt,
# dr2dt,
# (F)*(r1-r2),
# (F)*(r2-r1),
# ]
y = np.copy(y.reshape((n*2,d)))
# rj across, ri down
rs_from = np.tile(y[:n], (n,1,1))
# ri across, rj down
rs_to = np.transpose(rs_from, axes=(1,0,2))
# directional distance between each r_i and r_j
# dr_ij is the force from j onto i, i.e. r_i - r_j
dr = rs_to - rs_from
# Used as a mask
nd_identity = np.eye(n).reshape((n,n,1))
# Pairwise distances |r_i - r_j|, masked on the diagonal
drmag = ma.array(
np.sqrt(
np.sum(
np.power(dr, 2), 2)),
mask=nd_identity)
# Pairwise q_i*q_j for force equation
qsa = np.tile(q, (n,1))
qsb = np.tile(q, (n,1)).T
qs = qsa*qsb
# Directional forces
Fs = (-qs/np.power(drmag,2)).reshape((n,n,1))
# Summing the pairwise contributions gives the net force on each particle;
# with unit masses (m == 1) this is also the acceleration, otherwise divide by m
a = np.sum(Fs*dr, 1)
# Sliding integrated acceleration
# (i.e. velocity from previous iteration)
# to the position derivative slot
y[:n] = np.copy(y[n:])
# Entering the acceleration into the velocity slot
y[n:] = np.copy(a)
# Flattening it out for scipy.odeint to work
return np.array(y).reshape(n*2*d)
Explanation: Point Charge Dynamics
Akiva Lipshitz, February 2, 2017
Particles and their dynamics are incredibly fascinating, even wondrous. Give me some particles and some simple equations describing their interactions – some very interesting things can start happening.
Currently studying electrostatics in my physics class, I am interested in not only the static force and field distributions but also in the dynamics of particles in such fields. To study the dynamics of electric particles is not an easy endeavor – in fact the differential equations governing their dynamics are quite complex and not easily solved manually, especially by someone who lacks a background in differential equations.
Instead of relying on our analytical abilities, we may rely on our computational abilities and numerically solve the differential equations. Herein I will develop a scheme for computing the dynamics of $n$ electric particles en masse. It will not be computationally easy – the number of operations grows proportionally to $n^2$. For less than $10^4$ particles you should be able to simulate the particle dynamics for long enough time intervals to be useful. But for something like $10^6$ particles the problem is intractable. You'll need to do more than $10^{12}$ operations per iteration and a degree in numerical analysis.
Governing Equations
Given $n$ charges $q_1, q_2, ..., q_n$, with masses $m_1, m_2, ..., m_n$ located at positions $\vec{r}_1, \vec{r_2}, ..., \vec{r}_n$, the force induced on $q_i$ by $q_j$ is given by
$$\vec{F}_{j \to i} = k\frac{q_iq_j}{\left|\vec{r}_i-\vec{r}_j\right|^2}\hat{r}_{ij}$$
where
$$\hat{r}_{ij} = \vec{r}_i-\vec{r}_j$$
Now, the net marginal force on particle $q_i$ is given as the sum of the pairwise forces
$$\vec{F}_{N, i} = \sum_{j \ne i}{\vec{F}_{j \to i}}$$
And then the net acceleration of particle $q_i$ just normalizes the force by the mass of the particle:
$$\vec{a}_i = \frac{\vec{F}_{N, i}}{m_i}$$
To implement this at scale, we're going to need to figure out a scheme for vectorizing all these operations, demonstrated below.
We'll be using scipy.integrate.odeint for our numerical integration. Below, the function g(y, t, q, m, n, d, k) is a function that returns the derivatives for all our variables at each iteration. We pass it to odeint and then do the integration.
End of explanation
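As a quick sanity check of g's interface (an added sketch with made-up values): the flattened state has length n*2*d, and so should the returned derivative vector.
# Added sketch: two unit charges at rest, one unit apart, in 2D.
_n, _d = 2, 2
_q, _m = np.ones(_n), np.ones(_n)
_y0 = np.array([[0., 0.], [1., 0.], [0., 0.], [0., 0.]]).reshape(_n*2*_d)
print(g(_y0, 0.0, _q, _m, _n, _d, 1.0).shape)   # expect (8,)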
t_f = 10000
t = np.linspace(0, 10, num=t_f)
Explanation: Let's define our time intervals, so that odeint knows which time stamps to iterate over.
End of explanation
# Number of dimensions
d = 2
# Number of point charges
n = 3
# charge magnitudes (a mix of signs and sizes for this test)
q = np.array([-10,0.2,-5])
# masses
m = np.ones(n)
# The electric constant
# k=1/(4*pi*epsilon_naught)
# Right now we will set it to 1:
# with the true value of k the force terms for our
# test charges become very large and cause
# roundoff errors in the integrator.
# In truth:
# k = 8.99*10**9
# But for now:
k=1.
Explanation: Some other constants
End of explanation
r1i = np.array([-2., 0.0])
dr1dti = np.array([3.,0.])
r2i = np.array([20.,0.5])
dr2dti = np.array([-3., 0.])
r3i = np.array([11.,20])
dr3dti = np.array([0, -3.])
Explanation: We get to choose the initial positions and velocities of our particles. For our initial tests, we'll set up 3 particles to collide with each other.
End of explanation
y0 = np.array([r1i, r2i, r3i, dr1dti, dr2dti, dr3dti]).reshape(n*2*d)
Explanation: And pack them into an initial state variable we can pass to odeint.
End of explanation
# Doing the integration
yf = odeint(g, y0, t, args=(q,m,n,d,k)).reshape(t_f,n*2,d)
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
fig = plt.figure(figsize=(20,20))
#ax = fig.add_subplot(111, projection='3d')
ax = fig.add_subplot(111)
ys1 = yf[:,0,1]
xs1 = yf[:,0,0]
xs2 = yf[:,1,0]
ys2 = yf[:,1,1]
xs3 = yf[:,2,0]
ys3 = yf[:,2,1]
ax.plot(xs1[1], ys1[1],'bv')
ax.plot(xs1[-1], ys1[-1], 'rv')
ax.plot(xs2[:1], ys2[:1], 'bv')
ax.plot(xs2[-1:], ys2[-1:], 'rv')
ax.plot(xs3[:1], ys3[:1], 'bv')
ax.plot(xs3[-1:], ys3[-1:], 'rv')
#
# minx = np.min(y[:,[0,2],0])
# maxx = np.max(y[:,[0,2],0])
# miny = np.min(y[:,[0,2],1])
# maxy = np.max(y[:,[0,2],1])
ax.plot(xs1, ys1)
ax.plot(xs2, ys2)
ax.plot(xs3, ys3)
# plt.xlim(xmin=minx, xmax=maxx)
# plt.ylim(ymin=miny, ymax=maxy)
plt.title("Paths of 3 Colliding Electric Particles")
plt.ion()
plt.show()
Explanation: The Fun Part – Doing the Integration
Now, we'll actually do the integration
End of explanation |
586 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing the trained weight matrices (not in an ensemble)
Step1: Load the weight matrices from the training
Step2: Visualize the digit from one hot representation through the activity weight matrix to the image representation
- Image is average digit from mnist dataset
Step3: Visualize the rotation of the image using the weight matrix from activity to activity
- does not use the weight matrix used on the recurrent connection | Python Code:
import nengo
import numpy as np
import cPickle
import matplotlib.pyplot as plt
from matplotlib import pylab
import matplotlib.animation as animation
Explanation: Testing the trained weight matrices (not in an ensemble)
End of explanation
#Weight matrices generated by the neural network after training
#Maps the label vectors to the neuron activity of the ensemble
label_weights = cPickle.load(open("label_weights1000.p", "rb"))
#Maps the activity of the neurons to the visual representation of the image
activity_to_img_weights = cPickle.load(open("activity_to_img_weights_scale1000.p", "rb"))
#Maps the neuron activity for an image to the neuron activity for a scaled version of that image
scale_up_weights = cPickle.load(open("scale_up_weights1000.p", "rb"))
scale_down_weights = cPickle.load(open("scale_down_weights1000.p", "rb"))
#Create the pointers for the numbers
temp = np.diag([1]*10)
ZERO = temp[0]
ONE = temp[1]
TWO = temp[2]
THREE= temp[3]
FOUR = temp[4]
FIVE = temp[5]
SIX = temp[6]
SEVEN =temp[7]
EIGHT= temp[8]
NINE = temp[9]
labels =[ZERO,ONE,TWO,THREE,FOUR,FIVE,SIX,SEVEN,EIGHT,NINE]
#Visualize the one hot representation
print(ZERO)
print(ONE)
Explanation: Load the weight matrices from the training
End of explanation
#Change this to imagine different digits
imagine = ZERO
#Can also imagine combinations of numbers (ZERO + ONE)
#Label to activity
test_activity = np.dot(imagine,label_weights)
#Image decoded
test_output_img = np.dot(test_activity, activity_to_img_weights)
plt.imshow(test_output_img.reshape(28,28),cmap='gray')
plt.show()
Explanation: Visualize the digit from one hot representation through the activity weight matrix to the image representation
- Image is average digit from mnist dataset
End of explanation
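Since both maps are linear, they can also be collapsed into a single label-to-image matrix (an added sketch, equivalent to the two np.dot calls above):
#Added sketch: pre-multiply the two weight matrices into one label-to-image map
label_to_img_weights = np.dot(label_weights, activity_to_img_weights)
test_output_img = np.dot(ZERO, label_to_img_weights)
plt.imshow(test_output_img.reshape(28,28),cmap='gray')
plt.show()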
#Change this to visualize different digits
imagine = ZERO
#How long the animation should go for
frames=5
#Make a list of the activation of rotated images and add first frame
rot_seq = []
rot_seq.append(np.dot(imagine,label_weights)) #Map the label vector to the activity vector
test_output_img = np.dot(rot_seq[0], activity_to_img_weights) #Map the activity to the visual representation
#add the rest of the frames, using the previous frame to calculate the current frame
for i in range(1,frames):
rot_seq.append(np.dot(rot_seq[i-1],scale_down_weights)) #add the activity of the current image to the list
test_output_img = np.dot(rot_seq[i], activity_to_img_weights) #map the new activity to the visual image
for i in range(1,frames*2):
rot_seq.append(np.dot(rot_seq[frames+i-2],scale_up_weights)) #add the activity of the current image to the list
test_output_img = np.dot(rot_seq[i], activity_to_img_weights) #map the new activity to the visual image
#Animation of rotation
fig = plt.figure()
def updatefig(i):
image_vector = np.dot(rot_seq[i], activity_to_img_weights) #map the activity to the image representation
im = pylab.imshow(np.reshape(image_vector,(28,28), 'F').T, cmap=plt.get_cmap('Greys_r'),animated=True)
return im,
ani = animation.FuncAnimation(fig, updatefig, interval=100, blit=True)
plt.show()
Explanation: Visualize the rotation of the image using the weight matrix from activity to activity
- does not use the weight matrix used on the recurrent connection
End of explanation |
587 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Archimedes and Pi
by Paulo Marques, 2014/03/09 (Adapted in 2018/10/15 to Python from Julia)
Since high school I've been fascinated with $\pi$ -- this infinite non-repeating irrational transcendent number. In fact, not only was I fascinated with $\pi$ but I was fascinated about everything related to it. In 11th grade I asked my math teacher about how to deduce the area of a circle. Her answer
Step1: Our function aprox_pi() will compute the approximation of $\pi$ for a $2^n$ polygon
Step2: And the result is
Step3: So, in 10 iterations we got a very good approximation of $\pi$ using a 1024-sided polygon. (Note that since $P_\infty \rightarrow 2 \pi$ we need to divide the final result by two. aprox_pi() is automatically doing so.)
The error of the result is
Step4: That's Interesting. Let's see how good are the approximations generated by aprox_pi().
Step5: So, what's going on? We should expect better approximations as the number of sides increases, right? But, the best result we get is with 14 iterations and a polygon of 16384 sides. After that the approximations of $\pi$ get worse.
The problem is that our algorithm is not very good in terms of producing the end result. If you look at the expression $P_n = 2^n \times s_n$ what we are doing is multiplying a very large number (the number of sides) by a very small number (the length of a side). After a certain point, because we are using floating point, this is a recipe for disaster. In particular, for a 16384-sided polygon, the length of a side is approximatly | Python Code:
from math import sqrt, pi
def side_next(side):
return sqrt(2. - sqrt(4. - side**2.0))
Explanation: Archimedes and Pi
by Paulo Marques, 2014/03/09 (Adapted in 2018/10/15 to Python from Julia)
Since high school I've been fascinated with $\pi$ -- this infinite non-repeating irrational transcendent number. In fact, not only was I fascinated with $\pi$ but I was fascinated about everything related to it. In 11th grade I asked my math teacher about how to deduce the area of a circle. Her answer: for that you need to learn how to integrate – wait until university. But I couldn't wait and head to the library where I found a single book that talked about the subject – an obscure calculus book by an author named Piskunov. And, to integrate I've learned – just because of $\pi$. But I digress. ..
This story is not about calculus or "symbolic" integration. It's about how Archimedes calculated $\pi$ circa 200 B.C. In the "Measurement of a Circle" Archimedes states that:
"The ratio of the circumference of any circle to its diameter is greater than $3\tfrac{10}{71}$ but less than $3\tfrac{1}{7}$"
This is the first really accurate estimation of $\pi$. I.e., he calculated $3.140845070422535 < \pi < 3.142857142857143$. A good approximation of $\pi$ is 3.141592653589793. So, this is two decimal places correct. That's pretty impressive.
According to the story, Archimedes did this by inscribing and circumscribing a circle with regular polygons and measuring their perimeter. As the number of sides increases, the better these polygons approximate a circle. In the end Archimedes was using a 96-sided polygon. The next image illustrates the idea.
One of the annoying things when books talk about this is that they always show this nice picture but never ever do the actual calculation. So, using Python how can we do this?
Let's start by assuming that we are going to use a circle with a radius of 1 and we inscribe a square in it. (The square's side is going to be $\sqrt{2}$.)
Now, assume that the side of this polygon is $s_n$ and you are going to double the number of sides where the length of each new side is $s_{n+1}$. We can draw several triangles in the figure that will help us out:
If we take the side $\overline{AB}$, which measures $s_n$, and break it in two, we get the triangle $\overline{ACD}$. This triangle has a hypotenuse of $s_{n+1}$, an adjacent side of $s_n/2$ and a height of $h$. Note that the new polygon that we are forming is going to have eight sides (i.e., double the number of sides we had), each one measuring $s_{n+1}$. From this we can write:
$$ h^2 + (\frac{s_n}{2})^2 = s_{n+1}^2 $$
Looking at the triangle $\overline{BCO}$, which is a right triangle, we note that: its hypotenuse is 1, one side measures $1-h$ and the other measures $s_n/2$. Thus, we can write:
$$ (1-h)^2 + (\frac{s_n}{2})^2 = 1^2 $$
These two relations will always apply as we constantly break the polygons into smaller polygons. As we progress, the perimeter of the polygon $P_n$, obtained after $n$ iterations, will approximate the perimeter of the circle, measuring $2 \pi$. What this means is that $\lim_{n \to \infty} P_n = 2 \pi $.
Also note that every time we create a new polygon the number of sides doubles. Thus, after n steps we have a $2^n$ sided polygon and $P_n$ is:
$ P_n = 2^n \times s_n $
Manipulating the two expressions above we get:
$$ s_{n+1} = \sqrt{ 2 - \sqrt{4 - s_n^2} } $$
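(Filling in the algebra: the second relation gives $h = 1 - \sqrt{1 - s_n^2/4}$, and substituting into the first, $s_{n+1}^2 = \left(1 - \sqrt{1 - s_n^2/4}\right)^2 + \frac{s_n^2}{4} = 2 - 2\sqrt{1 - \frac{s_n^2}{4}} = 2 - \sqrt{4 - s_n^2}$.)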
Since we started with a square we have: $s_2 = \sqrt 2$. We can also consider $s_1 = 2$ representing a diameter line.
So, with this we have all equations needed to iteratively approximate $\pi$. Let's start by coding a function that gives us $s_{n+1}$:
End of explanation
def aprox_pi(n = 10):
s = 2.0
for i in range(1, n):
s = side_next(s)
return 2.0**(n-1) * s
Explanation: Our function aprox_pi() will compute the approximation of $\pi$ for a $2^n$ polygon:
End of explanation
a_pi = aprox_pi()
a_pi
Explanation: And the result is:
End of explanation
abs(pi - a_pi)
Explanation: So, in 10 iterations we got a very good approximation of $\pi$ using a 1024-sided polygon. (Note that since $P_\infty \rightarrow 2 \pi$ we need to divide the final result by two. aprox_pi() is automatically doing so.)
The error of the result is:
End of explanation
print("%10s \t %10s \t %20s \t %10s" % ("i", "Sides", "Pi", "Error"))
print("===================================================================")
for i in range(1, 31):
sides = 2.**i
a_pi = aprox_pi(i)
err = abs(pi - a_pi)
print("%10d \t %10d \t %20.10f \t %10.2e" % (i, sides, a_pi, err))
Explanation: That's interesting. Let's see how good the approximations generated by aprox_pi() really are.
End of explanation
2*pi/16384
Explanation: So, what's going on? We should expect better approximations as the number of sides increases, right? But, the best result we get is with 14 iterations and a polygon of 16384 sides. After that the approximations of $\pi$ get worse.
The problem is that our algorithm is not very good in terms of producing the end result. If you look at the expression $P_n = 2^n \times s_n$ what we are doing is multiplying a very large number (the number of sides) by a very small number (the length of a side). After a certain point, because we are using floating point, this is a recipe for disaster. In particular, for a 16384-sided polygon, the length of a side is approximately:
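As an added aside (not in the original notebook): the subtraction $2 - \sqrt{4 - s_n^2}$ is where the precision is lost, and rationalizing it gives an algebraically equivalent but numerically friendlier update:
def side_next_stable(side):
    # algebraically identical to side_next, but avoids subtracting nearly equal numbers
    return side / sqrt(2. + sqrt(4. - side**2.0))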
End of explanation |
588 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning how to move a human arm
In this tutorial we will show how to train a basic biomechanical model using keras-rl.
Installation
To make it work, follow the instructions in
https
Step1: Creating the actor and the critic
The actor serves as a brain for controlling muscles. The critic is our approximation of how well the brain is performing at achieving the goal
Step2: Train the actor and the critic
We will now run the keras-rl implementation of the DDPG algorithm which trains both networks.
Step3: Evaluate the results
Check how our trained 'brain' performs. Below we will also load a pretrained model (on the larger number of episodes), which should perform better. It was trained exactly the same way, just with a larger number of steps (parameter nb_steps in agent.fit. | Python Code:
# Derived from keras-rl
import opensim as osim
import numpy as np
import sys
from keras.models import Sequential, Model
from keras.layers import Dense, Activation, Flatten, Input, concatenate
from keras.optimizers import Adam
import numpy as np
from rl.agents import DDPGAgent
from rl.memory import SequentialMemory
from rl.random import OrnsteinUhlenbeckProcess
from osim.env.arm import ArmEnv
from keras.optimizers import RMSprop
import argparse
import math
# Load walking environment
env = ArmEnv(True)
env.reset()
# Total number of steps in training
nallsteps = 10000
nb_actions = env.action_space.shape[0]
Explanation: Learning how to move a human arm
In this tutorial we will show how to train a basic biomechanical model using keras-rl.
Installation
To make it work, follow the instructions in
https://github.com/stanfordnmbl/osim-rl#getting-started
i.e. run
conda create -n opensim-rl -c kidzik opensim git python=2.7
source activate opensim-rl
pip install git+https://github.com/stanfordnmbl/osim-rl.git
Then run
git clone https://github.com/stanfordnmbl/osim-rl.git
conda install keras -c conda-forge
pip install git+https://github.com/matthiasplappert/keras-rl.git
cd osim-rl
conda install jupyter
follow the instructions and, once jupyter is installed, type
jupyter notebook
This should open the browser with jupyter. Navigate to this notebook, i.e. to the file scripts/train.arm.ipynb.
Preparing the environment
The following two blocks load necessary libraries and create a simulator environment.
End of explanation
# Create networks for DDPG
# Next, we build a very simple model.
actor = Sequential()
actor.add(Flatten(input_shape=(1,) + env.observation_space.shape))
actor.add(Dense(32))
actor.add(Activation('relu'))
actor.add(Dense(32))
actor.add(Activation('relu'))
actor.add(Dense(32))
actor.add(Activation('relu'))
actor.add(Dense(nb_actions))
actor.add(Activation('sigmoid'))
print(actor.summary())
action_input = Input(shape=(nb_actions,), name='action_input')
observation_input = Input(shape=(1,) + env.observation_space.shape, name='observation_input')
flattened_observation = Flatten()(observation_input)
x = concatenate([action_input, flattened_observation])
x = Dense(64)(x)
x = Activation('relu')(x)
x = Dense(64)(x)
x = Activation('relu')(x)
x = Dense(64)(x)
x = Activation('relu')(x)
x = Dense(1)(x)
x = Activation('linear')(x)
critic = Model(inputs=[action_input, observation_input], outputs=x)
print(critic.summary())
Explanation: Creating the actor and the critic
The actor serves as a brain for controlling muscles. The critic is our approximation of how well the brain is performing at achieving the goal
End of explanation
# Set up the agent for training
memory = SequentialMemory(limit=100000, window_length=1)
random_process = OrnsteinUhlenbeckProcess(theta=.15, mu=0., sigma=.2, size=env.noutput)
agent = DDPGAgent(nb_actions=nb_actions, actor=actor, critic=critic, critic_action_input=action_input,
memory=memory, nb_steps_warmup_critic=100, nb_steps_warmup_actor=100,
random_process=random_process, gamma=.99, target_model_update=1e-3,
delta_clip=1.)
agent.compile(Adam(lr=.001, clipnorm=1.), metrics=['mae'])
# Okay, now it's time to learn something! We visualize the training here for show, but this
# slows down training quite a lot. You can always safely abort the training prematurely using
# Ctrl + C.
agent.fit(env, nb_steps=2000, visualize=False, verbose=0, nb_max_episode_steps=200, log_interval=10000)
# After training is done, we save the final weights.
# agent.save_weights(args.model, overwrite=True)
Explanation: Train the actor and the critic
We will now run the keras-rl implementation of the DDPG algorithm which trains both networks.
End of explanation
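If you want to keep the result of the training run above, the weights can be written to an explicit file (an added sketch; the filename is just an example, not from the original script):
# Hypothetical path; keras-rl stores the weights as HDF5.
agent.save_weights('ddpg_arm_weights.h5f', overwrite=True)
# Later, a fresh agent with the same actor/critic architecture could reload them:
# agent.load_weights('ddpg_arm_weights.h5f')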
# agent.load_weights(args.model)
# Finally, evaluate our algorithm for 1 episode.
agent.test(env, nb_episodes=2, visualize=False, nb_max_episode_steps=1000)
agent.load_weights("../models/example.h5f")
# Finally, evaluate our algorithm for 1 episode.
agent.test(env, nb_episodes=5, visualize=False, nb_max_episode_steps=1000)
Explanation: Evaluate the results
Check how our trained 'brain' performs. Below we will also load a pretrained model (trained on a larger number of episodes), which should perform better. It was trained exactly the same way, just with a larger number of steps (parameter nb_steps in agent.fit).
End of explanation |
589 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data scraping:
Writing a web crawler with Python
王成军 (Cheng-Jun Wang)
[email protected]
Computational Communication http
Step1: For ordinary data scraping, urllib2 together with beautifulsoup is enough.
This is especially true for pages whose url changes in a regular way as you page through results; you only need to handle the regular url pattern.
A simple example is scraping the posts about a given keyword on the Tianya forum.
On the Tianya forum, the first page of posts about smog is:
http
Step2: http
Step3: Scraping the list of PX posts on the Tianya forum
The structure of a thread (reply) network:
- the list page
- the main post
- the replies
Step4: Scraping author information
Step5: http
Step6: http
Step7: Author: 柠檬在追逐 Time: 2012-10-28 21
Step8: How to page through a thread
http
Step9: Testing
Step10: Run the full crawl!
Step11: Reading the data
Step12: What is the total number of posts?
http | Python Code:
import urllib2
from bs4 import BeautifulSoup
Explanation: Data scraping:
Writing a web crawler with Python
王成军 (Cheng-Jun Wang)
[email protected]
Computational Communication (http://computational-communication.com)
Problems we need to solve:
Parsing the pages
Getting source data hidden behind Javascript
Paging through results automatically
Logging in automatically
Connecting to API endpoints
End of explanation
from IPython.display import display_html, HTML
HTML('<iframe src=http://bbs.tianya.cn/list.jsp?item=free&nextid=%d&order=8&k=PX width=1000 height=500></iframe>')
# the webpage we would like to crawl
page_num = 0
url = "http://bbs.tianya.cn/list.jsp?item=free&nextid=%d&order=8&k=PX" % page_num
content = urllib2.urlopen(url).read() # get the html text of the webpage
soup = BeautifulSoup(content, "lxml")
articles = soup.find_all('tr')
print articles[0]
print articles[1]
len(articles[1:])
Explanation: For ordinary data scraping, urllib2 together with beautifulsoup is enough.
This is especially true for pages whose url changes in a regular way as you page through results; you only need to handle the regular url pattern.
A simple example is scraping the posts about a given keyword on the Tianya forum.
On the Tianya forum, the first page of posts about smog is:
http://bbs.tianya.cn/list.jsp?item=free&nextid=0&order=8&k=雾霾
The second page is:
http://bbs.tianya.cn/list.jsp?item=free&nextid=1&order=8&k=雾霾
Data scraping:
Scraping the Tianya reply network
王成军 (Cheng-Jun Wang)
[email protected]
Computational Communication (http://computational-communication.com)
End of explanation
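Since the list url only changes in the keyword and the nextid parameter, it can be generated for any page (an added sketch; urllib.quote is only needed if the keyword must be url-encoded):
import urllib
def list_url(keyword, page_num):
    # build the Tianya list-page url for a given keyword and page number
    return 'http://bbs.tianya.cn/list.jsp?item=free&nextid=%d&order=8&k=%s' % (page_num, urllib.quote(keyword))
print list_url('PX', 0)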
for t in articles[1].find_all('td'): print t
td = articles[1].find_all('td')
print td[0]
print td[0]
print td[0].text
print td[0].text.strip()
print td[0].a['href']
print td[1]
print td[2]
print td[3]
print td[4]
records = []
for i in articles[1:]:
td = i.find_all('td')
title = td[0].text.strip()
title_url = td[0].a['href']
author = td[1].text
author_url = td[1].a['href']
views = td[2].text
replies = td[3].text
date = td[4]['title']
record = title + '\t' + title_url+ '\t' + author + '\t'+ author_url + '\t' + views+ '\t' + replies+ '\t'+ date
records.append(record)
print records[2]
Explanation: http://bbs.tianya.cn/list.jsp?item=free&nextid=0&order=8&k=PX
By analyzing the source code of the post-list page with the browser's inspect tool, we find that everything we need to parse sits under the 'td' elements.
End of explanation
def crawler(page_num, file_name):
try:
# open the browser
url = "http://bbs.tianya.cn/list.jsp?item=free&nextid=%d&order=8&k=PX" % page_num
content = urllib2.urlopen(url).read() #获取网页的html文本
soup = BeautifulSoup(content, "lxml")
articles = soup.find_all('tr')
# write down info
for i in articles[1:]:
td = i.find_all('td')
title = td[0].text.strip()
title_url = td[0].a['href']
author = td[1].text
author_url = td[1].a['href']
views = td[2].text
replies = td[3].text
date = td[4]['title']
record = title + '\t' + title_url+ '\t' + author + '\t'+ \
author_url + '\t' + views+ '\t' + replies+ '\t'+ date
with open(file_name,'a') as p: # '''Note''':Append mode, run only once!
p.write(record.encode('utf-8')+"\n") ##!!encode here to utf-8 to avoid encoding
except Exception, e:
print e
pass
# crawl all pages
for page_num in range(10):
print (page_num)
crawler(page_num, '/Users/chengjun/bigdata/tianya_bbs_threads_list.txt')
import pandas as pd
df = pd.read_csv('/Users/chengjun/github/cjc2016/data/tianya_bbs_threads_list.txt', sep = "\t", header=None)
df[:2]
len(df)
df=df.rename(columns = {0:'title', 1:'link', 2:'author',3:'author_page', 4:'click', 5:'reply', 6:'time'})
df[:2]
len(df.link)
Explanation: Scraping the list of PX posts on the Tianya forum
The structure of a thread (reply) network:
- the list page
- the main post
- the replies
End of explanation
df.author_page[:5]
Explanation: Scraping author information
End of explanation
# user_info
url = df.author_page[1]
content = urllib2.urlopen(url).read() # get the html text of the webpage
soup1 = BeautifulSoup(content, "lxml")
user_info = soup1.find('div', {'class', 'userinfo'})('p')
area, nid, freq_use, last_login_time, reg_time = [i.get_text()[4:] for i in user_info]
print area, nid, freq_use, last_login_time, reg_time
link_info = soup1.find_all('div', {'class', 'link-box'})
followed_num, fans_num = [i.a.text for i in link_info]
print followed_num, fans_num
activity = soup1.find_all('span', {'class', 'subtitle'})
post_num, reply_num = [j.text[2:] for i in activity[:1] for j in i('a')]
print post_num, reply_num
print activity[2]
link_info = soup.find_all('div', {'class', 'link-box'})
followed_num, fans_num = [i.a.text for i in link_info]
print followed_num, fans_num
link_info[0].a.text
# user_info = soup.find('div', {'class', 'userinfo'})('p')
# user_infos = [i.get_text()[4:] for i in user_info]
def author_crawler(url, file_name):
try:
content = urllib2.urlopen(url).read() #获取网页的html文本
soup = BeautifulSoup(content, "lxml")
link_info = soup.find_all('div', {'class', 'link-box'})
followed_num, fans_num = [i.a.text for i in link_info]
try:
activity = soup.find_all('span', {'class', 'subtitle'})
post_num, reply_num = [j.text[2:] for i in activity[:1] for j in i('a')]
except:
post_num, reply_num = 1, 0
record = '\t'.join([url, followed_num, fans_num, post_num, reply_num])
with open(file_name,'a') as p: # '''Note''':Append mode, run only once!
p.write(record.encode('utf-8')+"\n") ##!!encode here to utf-8 to avoid encoding
except Exception, e:
print e, url
record = '\t'.join([url, 'na', 'na', 'na', 'na'])
with open(file_name,'a') as p: # '''Note''':Append mode, run only once!
p.write(record.encode('utf-8')+"\n") ##!!encode here to utf-8 to avoid encoding
pass
for k, url in enumerate(df.author_page):
if k % 10==0:
print k
author_crawler(url, '/Users/chengjun/github/cjc2016/data/tianya_bbs_threads_author_info2.txt')
Explanation: http://www.tianya.cn/62237033
http://www.tianya.cn/67896263
End of explanation
df.link[0]
url = 'http://bbs.tianya.cn' + df.link[2]
url
from IPython.display import display_html, HTML
HTML('<iframe src=http://bbs.tianya.cn/post-free-2848797-1.shtml width=1000 height=500></iframe>')
# the webpage we would like to crawl
post = urllib2.urlopen(url).read() # get the html text of the webpage
post_soup = BeautifulSoup(post, "lxml")
#articles = soup.find_all('tr')
print (post_soup.prettify())[:5000]
pa = post_soup.find_all('div', {'class', 'atl-item'})
len(pa)
print pa[0]
print pa[1]
print pa[89]
Explanation: http://www.tianya.cn/50499450/follow
We could also scrape their following and follower lists.
Data scraping:
Scraping replies with Python
王成军 (Cheng-Jun Wang)
[email protected]
Computational Communication (http://computational-communication.com)
End of explanation
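A small added sketch of how the follow list mentioned above could be fetched (parsing that page is left out here, since its layout is not shown in this notebook):
follow_url = df.author_page[1] + '/follow'
follow_soup = BeautifulSoup(urllib2.urlopen(follow_url).read(), "lxml")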
print pa[0].find('div', {'class', 'bbs-content'}).text.strip()
print pa[87].find('div', {'class', 'bbs-content'}).text.strip()
pa[1].a
print pa[0].find('a', class_ = 'reportme a-link')
print pa[0].find('a', class_ = 'reportme a-link')['replytime']
print pa[0].find('a', class_ = 'reportme a-link')['author']
for i in pa[:10]:
p_info = i.find('a', class_ = 'reportme a-link')
p_time = p_info['replytime']
p_author_id = p_info['authorid']
p_author_name = p_info['author']
p_content = i.find('div', {'class', 'bbs-content'}).text.strip()
p_content = p_content.replace('\t', '')
print p_time, '--->', p_author_id, '--->', p_author_name,'--->', p_content, '\n'
Explanation: 作者:柠檬在追逐 时间:2012-10-28 21:33:55
@lice5 2012-10-28 20:37:17
作为宁波人 还是说一句:革命尚未成功 同志仍需努力
-----------------------------
对 现在说成功还太乐观,就怕说一套做一套
作者:lice5 时间:2012-10-28 20:37:17
作为宁波人 还是说一句:革命尚未成功 同志仍需努力
4 /post-free-4242156-1.shtml 2014-04-09 15:55:35 61943225 野渡自渡人 @Y雷政府34楼2014-04-0422:30:34 野渡君雄文!支持是必须的。 ----------------------------- @清坪过客16楼2014-04-0804:09:48 绝对的权力导致绝对的腐败! ----------------------------- @T大漠鱼T35楼2014-04-0810:17:27 @周丕东@普欣@拾月霜寒2012@小摸包@姚文嚼字@四號@凌宸@乔志峰@野渡自渡人@曾兵2010@缠绕夜色@曾颖@风青扬请关注
End of explanation
post_soup.find('div', {'class', 'atl-pages'})#.['onsubmit']
post_pages = post_soup.find('div', {'class', 'atl-pages'})
post_pages = post_pages.form['onsubmit'].split(',')[-1].split(')')[0]
post_pages
#post_soup.select('.atl-pages')[0].select('form')[0].select('onsubmit')
url = 'http://bbs.tianya.cn' + df.link[2]
url_base = ''.join(url.split('-')[:-1]) + '-%d.shtml'
url_base
def parsePage(pa):
records = []
for i in pa:
p_info = i.find('a', class_ = 'reportme a-link')
p_time = p_info['replytime']
p_author_id = p_info['authorid']
p_author_name = p_info['author']
p_content = i.find('div', {'class', 'bbs-content'}).text.strip()
p_content = p_content.replace('\t', '').replace('\n', '')#.replace(' ', '')
record = p_time + '\t' + p_author_id+ '\t' + p_author_name + '\t'+ p_content
records.append(record)
return records
import sys
def flushPrint(s):
sys.stdout.write('\r')
sys.stdout.write('%s' % s)
sys.stdout.flush()
url_1 = 'http://bbs.tianya.cn' + df.link[10]
content = urllib2.urlopen(url_1).read() # get the html text of the webpage
post_soup = BeautifulSoup(content, "lxml")
pa = post_soup.find_all('div', {'class', 'atl-item'})
b = post_soup.find('div', class_= 'atl-pages')
b
url_1 = 'http://bbs.tianya.cn' + df.link[0]
content = urllib2.urlopen(url_1).read() # get the html text of the webpage
post_soup = BeautifulSoup(content, "lxml")
pa = post_soup.find_all('div', {'class', 'atl-item'})
a = post_soup.find('div', {'class', 'atl-pages'})
a
a.form
if b.form:
print 'true'
else:
print 'false'
import random
import time
def crawler(url, file_name):
try:
# open the browser
url_1 = 'http://bbs.tianya.cn' + url
content = urllib2.urlopen(url_1).read() #获取网页的html文本
post_soup = BeautifulSoup(content, "lxml")
# how many pages in a post
post_form = post_soup.find('div', {'class', 'atl-pages'})
if post_form.form:
post_pages = post_form.form['onsubmit'].split(',')[-1].split(')')[0]
post_pages = int(post_pages)
url_base = '-'.join(url_1.split('-')[:-1]) + '-%d.shtml'
else:
post_pages = 1
# for the first page
pa = post_soup.find_all('div', {'class', 'atl-item'})
records = parsePage(pa)
with open(file_name,'a') as p: # '''Note''':Append mode, run only once!
for record in records:
p.write('1'+ '\t' + url + '\t' + record.encode('utf-8')+"\n")
# for the 2nd+ pages
if post_pages > 1:
for page_num in range(2, post_pages+1):
time.sleep(random.random())
flushPrint(page_num)
url2 =url_base % page_num
content = urllib2.urlopen(url2).read() #获取网页的html文本
post_soup = BeautifulSoup(content, "lxml")
pa = post_soup.find_all('div', {'class', 'atl-item'})
records = parsePage(pa)
with open(file_name,'a') as p: # '''Note''':Append mode, run only once!
for record in records:
p.write(str(page_num) + '\t' +url + '\t' + record.encode('utf-8')+"\n")
else:
pass
except Exception, e:
print e
pass
Explanation: How to page through a thread
http://bbs.tianya.cn/post-free-2848797-1.shtml
http://bbs.tianya.cn/post-free-2848797-2.shtml
http://bbs.tianya.cn/post-free-2848797-3.shtml
End of explanation
url = df.link[2]
file_name = '/Users/chengjun/github/cjc2016/data/tianya_bbs_threads_2test.txt'
crawler(url, file_name)
Explanation: Testing
End of explanation
for k, link in enumerate(df.link):
flushPrint(link)
if k % 10== 0:
print 'This is post number: ' + str(k)
file_name = '/Users/chengjun/github/cjc2016/data/tianya_bbs_threads_network.txt'
crawler(link, file_name)
Explanation: Run the full crawl!
End of explanation
dtt = []
with open('/Users/chengjun/github/cjc2016/data/tianya_bbs_threads_network.txt', 'r') as f:
for line in f:
pnum, link, time, author_id, author, content = line.replace('\n', '').split('\t')
dtt.append([pnum, link, time, author_id, author, content])
len(dtt)
dt = pd.DataFrame(dtt)
dt[:5]
dt=dt.rename(columns = {0:'page_num', 1:'link', 2:'time', 3:'author',4:'author_name', 5:'reply'})
dt[:5]
dt.reply[:100]
Explanation: Reading the data
End of explanation
18459/50
Explanation: What is the total number of posts?
http://search.tianya.cn/bbs?q=PX reports a total of 18459 items
End of explanation |
590 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Power of IPython Notebook + Pandas + and Scikit-learn
IPython Notebook, Numpy, Pandas, MongoDB, R — for the better part of a year now, I have been trying out these technologies as part of Udacity's Data Analyst Nanodegree. My undergrad education barely touched on data visualization or more broadly data science, and so I figured being exposed to the aforementioned technologies would be fun. And fun it has been, with R's powerful IDE-powered data munging and visualization techniques having been particularly revelatory. I learned enough of R to create some complex visualizations, and was impressed by how easy it is to import data into its Dataframe representations and then transform and visualize that data. I also thought RStudio's paradigm of continuously intermixed code editing and execution was superior to my habitual workflow of just endlessly cycling between tweaking and executing of Python scripts.
Still, R is a not-quite-general-purpose-language and I hit upon multiple instances in which simple things were hard to do. In such times, I could not help but miss the powers of Python, a language I have tons of experience with and which is about as general purpose as it gets. Luckily, the courses also covered the equivalent of an R implementation for Python
Step1: But working with this set of dictionaries would not be nearly as fast or easy as a Pandas dataframe, so I soon converted it to that and went ahead and summarized all the features with a single method call
Step2: Looking through these, I found one instance of a valid outlier - Mark A. Frevert (CEO of Enron), and removed him from the dataset.
I should emphasize the benefits of doing all this in IPython Notebook. Being able to tweak parts of the code without reexecuting all of it and reloading all the data made iterating on ideas much faster, and iterating on ideas fast is essential for exploratory data analysis and development of machine learned models. It's no accident that the Matlab IDE and RStudio, both tools commonly used in the sciences for data processing and analysis, have essentially the same structure. I did not understand the benefits of IPython Notebook when I was first made to use it for class assignments in College, but now it has finally dawned on me that it fills the same role as those IDEs and became popular because it is similarly well suited for working with data.
Step3: This result suggested that most features have large outliers (larger than 3 standard deviations). In order to be careful not to remove any useful data, I manually inspected all rows with large outliers to see any values that seem appropriate for removal
Step4: Looking through these, I found one instance of a valid outlier - Mark A. Frevert (CEO of Enron), and removed him from the dataset.
I should emphasize the benefits of doing all this in IPython Notebook. Being able to tweak parts of the code without reexecuting all of it and reloading all the data made iterating on ideas much faster, and iterating on ideas fast is essential for exploratory data analysis and development of machine learned models. It's no accident that the Matlab IDE and RStudio, both tools commonly used in the sciences for data processing and analysis, have essentially the same structure. I did not understand the benefits of IPython Notebook when I was first made to use it for class assignments in College, but now it has finally dawned on me that it fills the same role as those IDEs and became popular because it is similarly well suited for working with data.
Feature Visualization, Engineering and Selection
The project also instructed me to choose a set of features, and to engineer some of my own. In order to get an initial idea of possible promising features and how I could use them to create new features, I computed the correlation of each feature to the Person of Interest classification
Step5: The results indicated that 'exercised_stock_options', 'total_stock_value', and 'bonus' are the most promising features. Just for fun, I went ahead and plotted these features to see if I could visually verify their significance
Step6: As well as one that is not strongly correlated
Step7: The data and plots above indicated that the exercised_stock_options, total_stock_value, and restricted_stock, and to a lesser extent the payment related information (total_payments, salary, bonus, and expenses), are all correlated to Persons of Interest. Therefore, I created new features as sums and ratios of these ones. Working with Pandas made this incredibly easy due to vectorized operations, and though Numpy could similarly make this easy I think Pandas' Dataframe construct makes it especially easy.
It was also easy to fix any problems with the data before starting to train machine learning models. In order to use the data for evaluation and training, I replaced null values with the mean of each feature so as to be able to use the dataset with Scikit-learn. I also scaled all features to a range of 1-0, to better work with Support Vector Machines
Step8: Then, I scored features using Scikit-learn's SelectKBest to get an ordering of them to test with multiple algorithms afterward. Pandas Dataframes can be used directly with Scikit-learn, which is another great benefit of it
Step9: It appeared that several of my features are among the most useful, as 'poi_email_ratio_to', 'stock_sum', and 'money_total' are all ranked highly. But, since the data is so small I had no need to get rid of any of the features and went ahead with testing several classifiers with several sets of features.
Training and Evaluating Models
Proceeding with the project, I selected three algorithms to test and compare
Step10: Then, I could go right back to Pandas to plot the results. Sure, I could do this with matplotlib just as well, but the flexibility and simplicity of the 'plot' function call on a DataFrame makes it much less annoying to use in my opinion. | Python Code:
import matplotlib.pyplot as plt
import matplotlib
import pickle
import pandas as pd
import numpy as np
from IPython.display import display
%matplotlib notebook
enron_data = pickle.load(open("./ud120-projects/final_project/final_project_dataset.pkl", "rb"))
print("Number of people: %d"%len(enron_data.keys()))
print("Number of features per person: %d"%len(list(enron_data.values())[0]))
print("Number of POI: %d"%sum([1 if x['poi'] else 0 for x in enron_data.values()]))
Explanation: The Power of IPython Notebook + Pandas + and Scikit-learn
IPython Notebook, Numpy, Pandas, MongoDB, R — for the better part of a year now, I have been trying out these technologies as part of Udacity's Data Analyst Nanodegree. My undergrad education barely touched on data visualization or more broadly data science, and so I figured being exposed to the aforementioned technologies would be fun. And fun it has been, with R's powerful IDE-powered data munging and visualization techniques having been particularly revelatory. I learned enough of R to create some complex visualizations, and was impressed by how easy it is to import data into its Dataframe representations and then transform and visualize that data. I also thought RStudio's paradigm of continuously intermixed code editing and execution was superior to my habitual workflow of just endlessly cycling between tweaking and executing Python scripts.
Still, R is a not-quite-general-purpose-language and I hit upon multiple instances in which simple things were hard to do. In such times, I could not help but miss the powers of Python, a language I have tons of experience with and which is about as general purpose as it gets. Luckily, the courses also covered the equivalent of an R implementation for Python: the Python Data Analysis Library, Pandas. This let me use the features of R I now liked — dataframes, powerful plotting methods, elegant methods for transforming data — with Python's lovely syntax and libraries I already knew and loved. And soon I got to do just that, using both Pandas and the supremely good Machine Learning package Scikit-learn for the final project of Udacity's Intro to Machine Learning Course. Not only that, but I also used IPython Notebook for RStudio-esque intermixed code editing and execution and nice PDF output.
I had such a nice experience with this combination of tools that I decided to dedicate a post to it, and what follows is mostly a summation of that experience. Reading it should be sufficient to get a general idea for why these tools are useful, whereas a much more detailed introduction and tutorial for Pandas can be found elsewhere (for instance here). Incidentally, this whole post was written in IPython Notebook and the source of that can be found here with the produced HTML here.
Data Summarization
First, a bit about the project. The task was to first explore and clean a given dataset, and then train classification models using it. The dataset contained dozens of features about roughly 150 important employees from the notoriously corrupt company Enron, witch were classified as either a "Person of Interest" or not based on the outcome of investigations into Enron's corruption. It's a tiny dataset and not what I would have chosen, but such were the instructions. The data was provided in a bunch of Python dictionaries, and at first I just used a Python script to change it into a CSV and started exploring it in RStudio. But, it soon dawned on me that I would be much better off just working entirely in Python, and the following code is taken verbatim from my final project submission.
And so, the code. Following some imports and a '%matplotlib notebook' comment to allow plotting within IPython, I loaded the data using pickle and printed out some basic things about it (not yet using Pandas):
End of explanation
df = pd.DataFrame.from_dict(enron_data)
del df['TOTAL']
df = df.transpose()
numeric_df = df.apply(pd.to_numeric, errors='coerce')
del numeric_df['email_address']
numeric_df.describe()
Explanation: But working with this set of dictionaries would not be nearly as fast or easy as a Pandas dataframe, so I soon converted it to that and went ahead and summarized all the features with a single method call:
End of explanation
del numeric_df['loan_advances']
del numeric_df['restricted_stock_deferred']
del numeric_df['director_fees']
std = numeric_df.apply(lambda x: np.abs(x - x.mean()) / x.std())
std = std.fillna(std.mean())
std.describe()
Explanation: Looking through these, I found one instance of a valid outlier - Mark A. Frevert (CEO of Enron), and removed him from the dataset.
I should emphasize the benefits of doing all this in IPython Notebook. Being able to tweak parts of the code without reexecuting all of it and reloading all the data made iterating on ideas much faster, and iterating on ideas fast is essential for exploratory data analysis and development of machine learned models. It's no accident that the Matlab IDE and RStudio, both tools commonly used in the sciences for data processing and analysis, have essentially the same structure. I did not understand the benefits of IPython Notebook when I was first made to use it for class assignments in College, but now it has finally dawned on me that it fills the same role as those IDEs and became popular because it is similarly well suited for working with data.
End of explanation
outliers = std.apply(lambda x: x > 5).any(axis=1)
outlier_df = pd.DataFrame(index=numeric_df[outliers].index)
for col in numeric_df.columns:
outlier_df[str((col,col+'_std'))] = list(zip(numeric_df[outliers][col],std[outliers][col]))
display(outlier_df)
numeric_df.drop('FREVERT MARK A',inplace=True)
df.drop('FREVERT MARK A',inplace=True)
Explanation: This result suggested that most features have large outliers (larger than 3 standard deviations). In order to be careful not to remove any useful data, I manually inspected all rows with large outliers to see whether any values seemed appropriate for removal:
End of explanation
corr = numeric_df.corr()
print('\nCorrelations between features to POI:\n ' +str(corr['poi']))
Explanation: Looking through these, I found one instance of a valid outlier - Mark A. Frevert (CEO of Enron), and removed him from the dataset.
I should emphasize the benefits of doing all this in IPython Notebook. Being able to tweak parts of the code without reexecuting all of it and reloading all the data made iterating on ideas much faster, and iterating on ideas fast is essential for exploratory data analysis and development of machine learned models. It's no accident that the Matlab IDE and RStudio, both tools commonly used in the sciences for data processing and analysis, have essentially the same structure. I did not understand the benefits of IPython Notebook when I was first made to use it for class assignments in College, but now it has finally dawned on me that it fills the same role as those IDEs and became popular because it is similarly well suited for working with data.
Feature Visualization, Engineering and Selection
The project also instructed me to choose a set of features, and to engineer some of my own. In order to get an initial idea of possible promising features and how I could use them to create new features, I computed the correlation of each feature to the Person of Interest classification:
End of explanation
numeric_df.hist(column='exercised_stock_options',by='poi',bins=25,sharex=True,sharey=True)
plt.suptitle("exercised_stock_options by POI")
numeric_df.hist(column='total_stock_value',by='poi',bins=25,sharex=True,sharey=True)
plt.suptitle("total_stock_value by POI")
numeric_df.hist(column='bonus',by='poi',bins=25,sharex=True,sharey=True)
plt.suptitle("bonus by POI")
Explanation: The results indicated that 'exercised_stock_options', 'total_stock_value', and 'bonus' are the most promising features. Just for fun, I went ahead and plotted these features to see if I could visually verify their significance:
End of explanation
numeric_df.hist(column='to_messages',by='poi',bins=25,sharex=True,sharey=True)
plt.suptitle("to_messages by POI")
Explanation: As well as one that is not strongly correlated:
End of explanation
#Get rid of label
del numeric_df['poi']
poi = df['poi']
#Create new features
numeric_df['stock_sum'] = numeric_df['exercised_stock_options'] +\
numeric_df['total_stock_value'] +\
numeric_df['restricted_stock']
numeric_df['stock_ratio'] = numeric_df['exercised_stock_options']/numeric_df['total_stock_value']
numeric_df['money_total'] = numeric_df['salary'] +\
numeric_df['bonus'] -\
numeric_df['expenses']
numeric_df['money_ratio'] = numeric_df['bonus']/numeric_df['salary']
numeric_df['email_ratio'] = numeric_df['from_messages']/(numeric_df['to_messages']+numeric_df['from_messages'])
numeric_df['poi_email_ratio_from'] = numeric_df['from_poi_to_this_person']/numeric_df['to_messages']
numeric_df['poi_email_ratio_to'] = numeric_df['from_this_person_to_poi']/numeric_df['from_messages']
#Fill in NA values with the mean of each feature (scikit-learn cannot handle NaNs)
numeric_df = numeric_df.fillna(numeric_df.mean())
#Scale all features to the 0-1 range
numeric_df = (numeric_df-numeric_df.min())/(numeric_df.max()-numeric_df.min())
Explanation: The data and plots above indicated that the exercised_stock_options, total_stock_value, and restricted_stock, and to a lesser extent the payment-related information (total_payments, salary, bonus, and expenses), are all correlated with Persons of Interest. Therefore, I created new features as sums and ratios of these. Working with Pandas made this incredibly easy due to vectorized operations, and though Numpy could similarly make this easy I think Pandas' Dataframe construct makes it especially easy.
It was also easy to fix any problems with the data before starting to train machine learning models. In order to use the data for evaluation and training, I replaced null values with the mean of each feature so as to be able to use the dataset with Scikit-learn. I also scaled all features to a range of 0-1, to better work with Support Vector Machines:
End of explanation
from sklearn.feature_selection import SelectKBest
selector = SelectKBest()
selector.fit(numeric_df,poi.tolist())
scores = {numeric_df.columns[i]:selector.scores_[i] for i in range(len(numeric_df.columns))}
sorted_features = sorted(scores,key=scores.get, reverse=True)
for feature in sorted_features:
print('Feature %s has value %f'%(feature,scores[feature]))
Explanation: Then, I scored features using Scikit-learn's SelectKBest to get an ordering of them to test with multiple algorithms afterward. Pandas Dataframes can be used directly with Scikit-learn, which is another great benefit of it:
End of explanation
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.grid_search import RandomizedSearchCV, GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score, accuracy_score
from sklearn.cross_validation import StratifiedShuffleSplit
import scipy
import warnings
warnings.filterwarnings('ignore')
gnb_clf = GridSearchCV(GaussianNB(),{})
#No params to tune for for linear bayes, use for convenience
svc_clf = SVC()
svc_search_params = {'C': scipy.stats.expon(scale=1),
'gamma': scipy.stats.expon(scale=.1),
'kernel': ['linear','poly','rbf'],
'class_weight':['balanced',None]}
svc_search = RandomizedSearchCV(svc_clf,
param_distributions=svc_search_params,
n_iter=25)
tree_clf = DecisionTreeClassifier()
tree_search_params = {'criterion':['gini','entropy'],
'max_leaf_nodes':[None,25,50,100,1000],
'min_samples_split':[2,3,4],
'max_features':[0.25,0.5,0.75,1.0]}
tree_search = GridSearchCV(tree_clf,
tree_search_params,
scoring='recall')
search_methods = [gnb_clf,svc_search,tree_search]
average_accuracies = [[0],[0],[0]]
average_precision = [[0],[0],[0]]
average_recall = [[0],[0],[0]]
num_splits = 10
train_split = 0.9
indices = list(StratifiedShuffleSplit(poi.tolist(),
num_splits,
test_size=1-train_split,
random_state=0))
best_features = None
max_score = 0
best_classifier = None
num_features = 0
for num_features in range(1,len(sorted_features)+1):
features = sorted_features[:num_features]
feature_df = numeric_df[features]
for classifier_idx in range(3):
sum_values = [0,0,0]
#Only do parameter search once, too wasteful to do a ton
search_methods[classifier_idx].fit(feature_df.iloc[indices[0][0],:],
poi[indices[0][0]].tolist())
classifier = search_methods[classifier_idx].best_estimator_
for split_idx in range(num_splits):
train_indices, test_indices = indices[split_idx]
train_data = (feature_df.iloc[train_indices,:],poi[train_indices].tolist())
test_data = (feature_df.iloc[test_indices,:],poi[test_indices].tolist())
classifier.fit(train_data[0],train_data[1])
predicted = classifier.predict(test_data[0])
sum_values[0]+=accuracy_score(predicted,test_data[1])
sum_values[1]+=precision_score(predicted,test_data[1])
sum_values[2]+=recall_score(predicted,test_data[1])
avg_acc,avg_prs,avg_recall = [val/num_splits for val in sum_values]
average_accuracies[classifier_idx].append(avg_acc)
average_precision[classifier_idx].append(avg_prs)
average_recall[classifier_idx].append(avg_recall)
score = (avg_prs+avg_recall)/2
if score>max_score and avg_prs>0.3 and avg_recall>0.3:
max_score = score
best_features = features
best_classifier = search_methods[classifier_idx].best_estimator_
print('Best classifier found is %s \n\
with score (recall+precision)/2 of %f\n\
and feature set %s'%(str(best_classifier),max_score,best_features))
Explanation: It appeared that several of my features are among the most useful, as 'poi_email_ratio_to', 'stock_sum', and 'money_total' are all ranked highly. But, since the data is so small I had no need to get rid of any of the features and went ahead with testing several classifiers with several sets of features.
Training and Evaluating Models
Proceeding with the project, I selected three algorithms to test and compare: Naive Bayes, Decision Trees, and Support Vector Machines. Naive Bayes is a good baseline for any ML task, and the other two fit well into the task of binary classification with many features and can both be automatically tuned using sklearn classes. A word on SkLearn: it is simply a very well designed Machine Learning toolkit, with great compatibility with Numpy (and therefore also Pandas) and an elegant and smart API structure that makes it easy to try out different models, evaluate features, and do just about anything one might want short of Deep Learning.
I think the code that follows will attest to that. I tested those three algorithms with a variable number of features, from one to all of them ordered by the SelectKBest scoring. Because the data is so small, I could afford an extensive validation scheme and did multiple random splits of the data into training and testing to get an average that best indicated the strength of each algorithm. I also went ahead and evaluated precision and recall besides accuracy, since those were to be the metrics of performance. And all it took to do all that was maybe 50 lines of code:
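As a quick reminder (standard definitions, not specific to this project): precision = TP / (TP + FP) and recall = TP / (TP + FN). With so few Persons of Interest in the data, these say much more about a classifier than raw accuracy, which a model could inflate simply by predicting "not a POI" for everyone.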
End of explanation
results = pd.DataFrame.from_dict({'Naive Bayes': average_accuracies[0],
'SVC':average_accuracies[1],
'Decision Tree':average_accuracies[2]})
results.plot(xlim=(1,len(sorted_features)-1),ylim=(0,1))
plt.suptitle("Classifier accuracy by # of features")
results = pd.DataFrame.from_dict({'Naive Bayes': average_precision[0],
'SVC':average_precision[1],
'Decision Tree':average_precision[2]})
results.plot(xlim=(1,len(sorted_features)-1),ylim=(0,1))
plt.suptitle("Classifier precision by # of features")
results = pd.DataFrame.from_dict({'Naive Bayes': average_recall[0],
'SVC':average_recall[1],
'Decision Tree':average_recall[2]})
results.plot(xlim=(1,len(sorted_features)-1),ylim=(0,1))
plt.suptitle("Classifier recall by # of features")
Explanation: Then, I could go right back to Pandas to plot the results. Sure, I could do this with matplotlib just as well, but the flexibility and simplicity of the 'plot' function call on a DataFrame makes it much less annoying to use in my opinion.
End of explanation |
591 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
you should use GPU but if it is busy then you always can fall back to your CPU
Step1: Use indexing of tokens from vocabulary-embedding this does not clip the indexes of the words to vocab_size.
Use the index of outside words to replace them with several oov words (oov , oov0, oov1, ...) that appear in the same description and headline. This will allow headline generator to replace the oov with the same word in the description
Step2: implement the "simple" model from http
Step3: input data (X) is made from maxlend description words followed by eos
followed by headline words followed by eos
if description is shorter than maxlend it will be left padded with empty
if entire data is longer than maxlen it will be clipped and if it is shorter it will be right padded with empty.
labels (Y) are the headline words followed by eos and clipped or padded to maxlenh
In other words the input is made from a maxlend half in which the description is padded from the left
and a maxlenh half in which eos is followed by a headline followed by another eos if there is enough space.
The labels match only the second half and
the first label matches the eos at the start of the second half (following the description in the first half)
Step4: the out of the first activation_rnn_size nodes from the top LSTM layer will be used for activation and the rest will be used to select predicted word
Step5: read word embedding
Step6: when printing mark words outside vocabulary with ^ at their end
Step7: Model
Step8: start with a standaed stacked LSTM
Step9: A special layer that reduces the input just to its headline part (second half).
For each word in this part it concatenate the output of the previous layer (RNN)
with a weighted average of the outputs of the description part.
In this only the last rnn_size - activation_rnn_size are used from each output.
The first activation_rnn_size output is used to computer the weights for the averaging.
Step10: Load
Step12: Test
Step17: Sample generation
this section is only used to generate examples. you can skip it if you just want to understand how the training works
Step21: Data generator
Data generator generates batches of inputs and outputs/labels for training. The inputs are each made from two parts. The first maxlend words are the original description, followed by eos followed by the headline which we want to predict, except for the last word in the headline which is always eos and then empty padding until maxlen words.
For each, input, the output is the headline words (without the start eos but with the ending eos) padded with empty words up to maxlenh words. The output is also expanded to be y-hot encoding of each word.
To be more realistic, the second part of the input should be the result of generation and not the original headline.
Instead we will flip just nflips words to be from the generator, but even this is too hard and instead
implement flipping in a naive way (which consumes less time.) Using the full input (description + eos + headline) generate predictions for outputs. For nflips random words from the output, replace the original word with the word with highest probability from the prediction.
Step22: test fliping
Step23: check that valgen repeats itself after nb_batches
Step24: Train | Python Code:
import os
# os.environ['THEANO_FLAGS'] = 'device=cpu,floatX=float32'
import keras
keras.__version__
Explanation: You should use a GPU, but if it is busy then you can always fall back to your CPU
End of explanation
FN0 = 'vocabulary-embedding'
Explanation: Use indexing of tokens from vocabulary-embedding this does not clip the indexes of the words to vocab_size.
Use the index of outside words to replace them with several oov words (oov , oov0, oov1, ...) that appear in the same description and headline. This will allow headline generator to replace the oov with the same word in the description
End of explanation
FN1 = 'train'
Explanation: implement the "simple" model from http://arxiv.org/pdf/1512.01712v1.pdf
you can start training from a pre-existing model. This allows you to run this notebook many times, each time using different parameters and passing the end result of one run to be the input of the next.
I've started with maxlend=0 (see below), in which the description was ignored. I then moved to starting with a high LR and then manually lowering it. I also started with nflips=0, in which the original headline is used as-is, and slowly moved to 12, in which half the input headline is flipped with the predictions made by the model (the paper used a fixed 10%)
End of explanation
maxlend=25 # 0 - if we dont want to use description at all
maxlenh=25
maxlen = maxlend + maxlenh
rnn_size = 512 # must be same as 160330-word-gen
rnn_layers = 3 # match FN1
batch_norm=False
Explanation: input data (X) is made from maxlend description words followed by eos
followed by headline words followed by eos
if description is shorter than maxlend it will be left padded with empty
if entire data is longer than maxlen it will be clipped and if it is shorter it will be right padded with empty.
labels (Y) are the headline words followed by eos and clipped or padded to maxlenh
In other words the input is made from a maxlend half in which the description is padded from the left
and a maxlenh half in which eos is followed by a headline followed by another eos if there is enough space.
The labels match only the second half and
the first label matches the eos at the start of the second half (following the description in the first half)
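As a toy illustration (not from the original notebook; it assumes the empty=0 and eos=1 markers defined further down), a 2-word description and 2-word headline with maxlend=4 and maxlenh=3 would be laid out like this:
desc = [7, 8]              # two description word indexes
head = [9, 5]              # two headline word indexes
x = [0, 0, 7, 8, 1, 9, 5]  # left-pad desc to maxlend, then eos, then the headline (clipped/padded to maxlen)
y = [9, 5, 1]              # labels: the headline words followed by eos, padded/clipped to maxlenh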
End of explanation
activation_rnn_size = 40 if maxlend else 0
# training parameters
seed=42
p_W, p_U, p_dense, p_emb, weight_decay = 0, 0, 0, 0, 0
optimizer = 'adam'
LR = 1e-4
batch_size=64
nflips=10
nb_train_samples = 30000
nb_val_samples = 3000
Explanation: the output of the first activation_rnn_size nodes from the top LSTM layer will be used for the activation (attention weights) and the rest will be used to select the predicted word
End of explanation
import cPickle as pickle
with open('data/%s.pkl'%FN0, 'rb') as fp:
embedding, idx2word, word2idx, glove_idx2idx = pickle.load(fp)
vocab_size, embedding_size = embedding.shape
with open('data/%s.data.pkl'%FN0, 'rb') as fp:
X, Y = pickle.load(fp)
nb_unknown_words = 10
print 'number of examples',len(X),len(Y)
print 'dimension of embedding space for words',embedding_size
print 'vocabulary size', vocab_size, 'the last %d words can be used as place holders for unknown/oov words'%nb_unknown_words
print 'total number of different words',len(idx2word), len(word2idx)
print 'number of words outside vocabulary which we can substitue using glove similarity', len(glove_idx2idx)
print 'number of words that will be regarded as unknonw(unk)/out-of-vocabulary(oov)',len(idx2word)-vocab_size-len(glove_idx2idx)
for i in range(nb_unknown_words):
idx2word[vocab_size-1-i] = '<%d>'%i
Explanation: read word embedding
End of explanation
oov0 = vocab_size-nb_unknown_words
for i in range(oov0, len(idx2word)):
idx2word[i] = idx2word[i]+'^'
from sklearn.cross_validation import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=nb_val_samples, random_state=seed)
len(X_train), len(Y_train), len(X_test), len(Y_test)
del X
del Y
empty = 0
eos = 1
idx2word[empty] = '_'
idx2word[eos] = '~'
import numpy as np
from keras.preprocessing import sequence
from keras.utils import np_utils
import random, sys
def prt(label, x):
print label+':',
for w in x:
print idx2word[w],
print
i = 334
prt('H',Y_train[i])
prt('D',X_train[i])
i = 334
prt('H',Y_test[i])
prt('D',X_test[i])
Explanation: when printing, mark words outside the vocabulary with ^ at their end
End of explanation
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout, RepeatVector, Merge
from keras.layers.wrappers import TimeDistributed
from keras.layers.recurrent import LSTM
from keras.layers.embeddings import Embedding
from keras.regularizers import l2
# seed weight initialization
random.seed(seed)
np.random.seed(seed)
regularizer = l2(weight_decay) if weight_decay else None
Explanation: Model
End of explanation
model = Sequential()
model.add(Embedding(vocab_size, embedding_size,
input_length=maxlen,
W_regularizer=regularizer, dropout=p_emb, weights=[embedding], mask_zero=True,
name='embedding_1'))
for i in range(rnn_layers):
lstm = LSTM(rnn_size, return_sequences=True, # batch_norm=batch_norm,
W_regularizer=regularizer, U_regularizer=regularizer,
b_regularizer=regularizer, dropout_W=p_W, dropout_U=p_U,
name='lstm_%d'%(i+1)
)
model.add(lstm)
model.add(Dropout(p_dense,name='dropout_%d'%(i+1)))
Explanation: start with a standard stacked LSTM
End of explanation
from keras.layers.core import Lambda
import keras.backend as K
def simple_context(X, mask, n=activation_rnn_size, maxlend=maxlend, maxlenh=maxlenh):
desc, head = X[:,:maxlend,:], X[:,maxlend:,:]
head_activations, head_words = head[:,:,:n], head[:,:,n:]
desc_activations, desc_words = desc[:,:,:n], desc[:,:,n:]
# RTFM http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor.batched_tensordot
# activation for every head word and every desc word
activation_energies = K.batch_dot(head_activations, desc_activations, axes=(2,2))
# make sure we dont use description words that are masked out
activation_energies = activation_energies + -1e20*K.expand_dims(1.-K.cast(mask[:, :maxlend],'float32'),1)
# for every head word compute weights for every desc word
activation_energies = K.reshape(activation_energies,(-1,maxlend))
activation_weights = K.softmax(activation_energies)
activation_weights = K.reshape(activation_weights,(-1,maxlenh,maxlend))
# for every head word compute weighted average of desc words
desc_avg_word = K.batch_dot(activation_weights, desc_words, axes=(2,1))
return K.concatenate((desc_avg_word, head_words))
class SimpleContext(Lambda):
def __init__(self,**kwargs):
super(SimpleContext, self).__init__(simple_context,**kwargs)
self.supports_masking = True
def compute_mask(self, input, input_mask=None):
return input_mask[:, maxlend:]
def get_output_shape_for(self, input_shape):
nb_samples = input_shape[0]
n = 2*(rnn_size - activation_rnn_size)
return (nb_samples, maxlenh, n)
if activation_rnn_size:
model.add(SimpleContext(name='simplecontext_1'))
model.add(TimeDistributed(Dense(vocab_size,
W_regularizer=regularizer, b_regularizer=regularizer,
name = 'timedistributed_1')))
model.add(Activation('softmax', name='activation_1'))
from keras.optimizers import Adam, RMSprop # usually I prefer Adam but article used rmsprop
# opt = Adam(lr=LR) # keep calm and reduce learning rate
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
%%javascript
// new Audio("http://www.soundjay.com/button/beep-09.wav").play ()
K.set_value(model.optimizer.lr,np.float32(LR))
def str_shape(x):
return 'x'.join(map(str,x.shape))
def inspect_model(model):
for i,l in enumerate(model.layers):
print i, 'cls=%s name=%s'%(type(l).__name__, l.name)
weights = l.get_weights()
for weight in weights:
print str_shape(weight),
print
inspect_model(model)
Explanation: A special layer that reduces the input just to its headline part (second half).
For each word in this part it concatenates the output of the previous layer (RNN)
with a weighted average of the outputs of the description part.
In this, only the last rnn_size - activation_rnn_size units are used from each output.
The first activation_rnn_size outputs are used to compute the weights for the averaging.
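Spelled out (this mirrors the simple_context code above rather than adding anything new): for headline position i and description position j, the energy is e[i,j] = dot(head_activations[i], desc_activations[j]) using only the first activation_rnn_size units; the weights are w[i,j] = softmax over j of e[i,j], with masked description positions forced to zero weight; and the layer output at position i is the concatenation of sum_j w[i,j] * desc_words[j] with head_words[i], built from the remaining rnn_size - activation_rnn_size units.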
End of explanation
if FN1:
model.load_weights('data/%s.hdf5'%FN1)
Explanation: Load
End of explanation
def lpadd(x, maxlend=maxlend, eos=eos):
left (pre) pad a description to maxlend and then add eos.
The eos is the input to predicting the first word in the headline
assert maxlend >= 0
if maxlend == 0:
return [eos]
n = len(x)
if n > maxlend:
x = x[-maxlend:]
n = maxlend
return [empty]*(maxlend-n) + x + [eos]
samples = [lpadd([3]*26)]
# pad from right (post) so the first maxlend will be description followed by headline
data = sequence.pad_sequences(samples, maxlen=maxlen, value=empty, padding='post', truncating='post')
np.all(data[:,maxlend] == eos)
data.shape,map(len, samples)
probs = model.predict(data, verbose=0, batch_size=1)
probs.shape
Explanation: Test
End of explanation
# variation to https://github.com/ryankiros/skip-thoughts/blob/master/decoding/search.py
def beamsearch(predict, start=[empty]*maxlend + [eos],
k=1, maxsample=maxlen, use_unk=True, empty=empty, eos=eos, temperature=1.0):
return k samples (beams) and their NLL scores, each sample is a sequence of labels,
all samples starts with an `empty` label and end with `eos` or truncated to length of `maxsample`.
You need to supply `predict` which returns the label probability of each sample.
`use_unk` allow usage of `oov` (out-of-vocabulary) label in samples
def sample(energy, n, temperature=temperature):
sample at most n elements according to their energy
n = min(n,len(energy))
prb = np.exp(-np.array(energy) / temperature )
res = []
for i in xrange(n):
z = np.sum(prb)
r = np.argmax(np.random.multinomial(1, prb/z, 1))
res.append(r)
prb[r] = 0. # make sure we select each element only once
return res
dead_k = 0 # samples that reached eos
dead_samples = []
dead_scores = []
live_k = 1 # samples that did not yet reached eos
live_samples = [list(start)]
live_scores = [0]
while live_k:
# for every possible live sample calc prob for every possible label
probs = predict(live_samples, empty=empty)
# total score for every sample is sum of -log of word prb
cand_scores = np.array(live_scores)[:,None] - np.log(probs)
cand_scores[:,empty] = 1e20
if not use_unk:
for i in range(nb_unknown_words):
cand_scores[:,vocab_size - 1 - i] = 1e20
live_scores = list(cand_scores.flatten())
# find the best (lowest) scores we have from all possible dead samples and
# all live samples and all possible new words added
scores = dead_scores + live_scores
ranks = sample(scores, k)
n = len(dead_scores)
ranks_dead = [r for r in ranks if r < n]
ranks_live = [r - n for r in ranks if r >= n]
dead_scores = [dead_scores[r] for r in ranks_dead]
dead_samples = [dead_samples[r] for r in ranks_dead]
live_scores = [live_scores[r] for r in ranks_live]
# append the new words to their appropriate live sample
voc_size = probs.shape[1]
live_samples = [live_samples[r//voc_size]+[r%voc_size] for r in ranks_live]
# live samples that should be dead are...
# even if len(live_samples) == maxsample we dont want it dead because we want one
# last prediction out of it to reach a headline of maxlenh
zombie = [s[-1] == eos or len(s) > maxsample for s in live_samples]
# add zombies to the dead
dead_samples += [s for s,z in zip(live_samples,zombie) if z]
dead_scores += [s for s,z in zip(live_scores,zombie) if z]
dead_k = len(dead_samples)
# remove zombies from the living
live_samples = [s for s,z in zip(live_samples,zombie) if not z]
live_scores = [s for s,z in zip(live_scores,zombie) if not z]
live_k = len(live_samples)
return dead_samples + live_samples, dead_scores + live_scores
# !pip install python-Levenshtein
def keras_rnn_predict(samples, empty=empty, model=model, maxlen=maxlen):
for every sample, calculate probability for every possible label
you need to supply your RNN model and maxlen - the length of sequences it can handle
sample_lengths = map(len, samples)
assert all(l > maxlend for l in sample_lengths)
assert all(l[maxlend] == eos for l in samples)
# pad from right (post) so the first maxlend will be description followed by headline
data = sequence.pad_sequences(samples, maxlen=maxlen, value=empty, padding='post', truncating='post')
probs = model.predict(data, verbose=0, batch_size=batch_size)
return np.array([prob[sample_length-maxlend-1] for prob, sample_length in zip(probs, sample_lengths)])
def vocab_fold(xs):
convert list of word indexes that may contain words outside vocab_size to words inside.
If a word is outside, try first to use glove_idx2idx to find a similar word inside.
If none exist then replace all accurancies of the same unknown word with <0>, <1>, ...
xs = [x if x < oov0 else glove_idx2idx.get(x,x) for x in xs]
# the more popular word is <0> and so on
outside = sorted([x for x in xs if x >= oov0])
# if there are more than nb_unknown_words oov words then put them all in nb_unknown_words-1
outside = dict((x,vocab_size-1-min(i, nb_unknown_words-1)) for i, x in enumerate(outside))
xs = [outside.get(x,x) for x in xs]
return xs
def vocab_unfold(desc,xs):
# assume desc is the unfolded version of the start of xs
unfold = {}
for i, unfold_idx in enumerate(desc):
fold_idx = xs[i]
if fold_idx >= oov0:
unfold[fold_idx] = unfold_idx
return [unfold.get(x,x) for x in xs]
import sys
import Levenshtein
def gensamples(skips=2, k=10, batch_size=batch_size, short=True, temperature=1., use_unk=True):
i = random.randint(0,len(X_test)-1)
print 'HEAD:',' '.join(idx2word[w] for w in Y_test[i][:maxlenh])
print 'DESC:',' '.join(idx2word[w] for w in X_test[i][:maxlend])
sys.stdout.flush()
print 'HEADS:'
x = X_test[i]
samples = []
if maxlend == 0:
skips = [0]
else:
skips = range(min(maxlend,len(x)), max(maxlend,len(x)), abs(maxlend - len(x)) // skips + 1)
for s in skips:
start = lpadd(x[:s])
fold_start = vocab_fold(start)
sample, score = beamsearch(predict=keras_rnn_predict, start=fold_start, k=k, temperature=temperature, use_unk=use_unk)
assert all(s[maxlend] == eos for s in sample)
samples += [(s,start,scr) for s,scr in zip(sample,score)]
samples.sort(key=lambda x: x[-1])
codes = []
for sample, start, score in samples:
code = ''
words = []
sample = vocab_unfold(start, sample)[len(start):]
for w in sample:
if w == eos:
break
words.append(idx2word[w])
code += chr(w//(256*256)) + chr((w//256)%256) + chr(w%256)
if short:
distance = min([100] + [-Levenshtein.jaro(code,c) for c in codes])
if distance > -0.6:
print score, ' '.join(words)
# print '%s (%.2f) %f'%(' '.join(words), score, distance)
else:
print score, ' '.join(words)
codes.append(code)
gensamples(skips=2, batch_size=batch_size, k=10, temperature=1.)
Explanation: Sample generation
This section is only used to generate examples. You can skip it if you just want to understand how the training works
End of explanation
def flip_headline(x, nflips=None, model=None, debug=False):
given a vectorized input (after `pad_sequences`) flip some of the words in the second half (headline)
with words predicted by the model
if nflips is None or model is None or nflips <= 0:
return x
batch_size = len(x)
assert np.all(x[:,maxlend] == eos)
probs = model.predict(x, verbose=0, batch_size=batch_size)
x_out = x.copy()
for b in range(batch_size):
# pick locations we want to flip
# 0...maxlend-1 are descriptions and should be fixed
# maxlend is eos and should be fixed
flips = sorted(random.sample(xrange(maxlend+1,maxlen), nflips))
if debug and b < debug:
print b,
for input_idx in flips:
if x[b,input_idx] == empty or x[b,input_idx] == eos:
continue
# convert from input location to label location
# the output at maxlend (when input is eos) is feed as input at maxlend+1
label_idx = input_idx - (maxlend+1)
prob = probs[b, label_idx]
w = prob.argmax()
if w == empty: # replace accidental empty with oov
w = oov0
if debug and b < debug:
print '%s => %s'%(idx2word[x_out[b,input_idx]],idx2word[w]),
x_out[b,input_idx] = w
if debug and b < debug:
print
return x_out
def conv_seq_labels(xds, xhs, nflips=None, model=None, debug=False):
description and hedlines are converted to padded input vectors. headlines are one-hot to label
batch_size = len(xhs)
assert len(xds) == batch_size
x = [vocab_fold(lpadd(xd)+xh) for xd,xh in zip(xds,xhs)] # the input does not have 2nd eos
x = sequence.pad_sequences(x, maxlen=maxlen, value=empty, padding='post', truncating='post')
x = flip_headline(x, nflips=nflips, model=model, debug=debug)
y = np.zeros((batch_size, maxlenh, vocab_size))
for i, xh in enumerate(xhs):
xh = vocab_fold(xh) + [eos] + [empty]*maxlenh # output does have a eos at end
xh = xh[:maxlenh]
y[i,:,:] = np_utils.to_categorical(xh, vocab_size)
return x, y
def gen(Xd, Xh, batch_size=batch_size, nb_batches=None, nflips=None, model=None, debug=False, seed=seed):
yield batches. for training use nb_batches=None
for validation generate deterministic results repeating every nb_batches
while training it is good idea to flip once in a while the values of the headlines from the
value taken from Xh to value generated by the model.
c = nb_batches if nb_batches else 0
while True:
xds = []
xhs = []
if nb_batches and c >= nb_batches:
c = 0
new_seed = random.randint(0, sys.maxint)
random.seed(c+123456789+seed)
for b in range(batch_size):
t = random.randint(0,len(Xd)-1)
xd = Xd[t]
s = random.randint(min(maxlend,len(xd)), max(maxlend,len(xd)))
xds.append(xd[:s])
xh = Xh[t]
s = random.randint(min(maxlenh,len(xh)), max(maxlenh,len(xh)))
xhs.append(xh[:s])
# undo the seeding before we yield inorder not to affect the caller
c+= 1
random.seed(new_seed)
yield conv_seq_labels(xds, xhs, nflips=nflips, model=model, debug=debug)
r = next(gen(X_train, Y_train, batch_size=batch_size))
r[0].shape, r[1].shape, len(r)
def test_gen(gen, n=5):
Xtr,Ytr = next(gen)
for i in range(n):
assert Xtr[i,maxlend] == eos
x = Xtr[i,:maxlend]
y = Xtr[i,maxlend:]
yy = Ytr[i,:]
yy = np.where(yy)[1]
prt('L',yy)
prt('H',y)
if maxlend:
prt('D',x)
test_gen(gen(X_train, Y_train, batch_size=batch_size))
Explanation: Data generator
Data generator generates batches of inputs and outputs/labels for training. The inputs are each made from two parts. The first maxlend words are the original description, followed by eos followed by the headline which we want to predict, except for the last word in the headline which is always eos and then empty padding until maxlen words.
For each input, the output is the headline words (without the start eos but with the ending eos) padded with empty words up to maxlenh words. The output is also expanded to a one-hot encoding of each word.
To be more realistic, the second part of the input should be the result of generation and not the original headline.
Instead we will flip just nflips words to be from the generator, but even this is too hard, so instead
we implement flipping in a naive way (which consumes less time): using the full input (description + eos + headline), generate predictions for the outputs, and for nflips random words from the output, replace the original word with the word with the highest probability from the prediction.
End of explanation
test_gen(gen(X_train, Y_train, nflips=6, model=model, debug=False, batch_size=batch_size))
valgen = gen(X_test, Y_test,nb_batches=3, batch_size=batch_size)
Explanation: test flipping
End of explanation
for i in range(4):
test_gen(valgen, n=1)
Explanation: check that valgen repeats itself after nb_batches
End of explanation
history = {}
traingen = gen(X_train, Y_train, batch_size=batch_size, nflips=nflips, model=model)
valgen = gen(X_test, Y_test, nb_batches=nb_val_samples//batch_size, batch_size=batch_size)
r = next(traingen)
r[0].shape, r[1].shape, len(r)
for iteration in range(500):
print 'Iteration', iteration
h = model.fit_generator(traingen, samples_per_epoch=nb_train_samples,
nb_epoch=1, validation_data=valgen, nb_val_samples=nb_val_samples
)
for k,v in h.history.iteritems():
history[k] = history.get(k,[]) + v
with open('data/%s.history.pkl'%FN,'wb') as fp:
pickle.dump(history,fp,-1)
model.save_weights('data/%s.hdf5'%FN, overwrite=True)
gensamples(batch_size=batch_size)
Explanation: Train
End of explanation |
592 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<a href="https
Step1: Then, loading the dataset is a one-liner
Step2: The structure of the boston object is identical to the iris object. We can get more information about the dataset by looking at the fields of the boston object
Step3: The dataset contains a total of 506 data points, each of which has 13 features
Step4: Of course, we have only a single target value, which is the housing price
Step5: Training the model
Believe it or not, OpenCV does not offer any good implementation of linear regression.
Some people online say that you can use cv2.fitLine, but that is different. This is a
perfect opportunity to get familiar with scikit-learn's API
Step6: In the preceding command, we want to split the data into training and test sets. We are free
to make the split as we see fit, but usually it is a good idea to reserve between 10 percent
and 30 percent for testing. Here, we choose 10 percent, using the test_size argument
Step7: In scikit-learn, the train function is called fit, but otherwise behaves exactly the same as
in OpenCV
Step8: We can look at the mean squared error of our predictions by comparing the true housing
prices, y_train, to our predictions, linreg.predict(X_train)
Step9: The score method of the linreg object returns the coefficient of determination (R
squared)
Step10: Testing the model
In order to test the generalization performance of the model, we calculate the mean
squared error on the test data
Step11: We note that the mean squared error is a little lower on the test set than the training set.
This is good news, as we care mostly about the test error. However, from these numbers, it
is really hard to understand how good the model really is. Perhaps it's better to plot the
data
Step12: This makes more sense! Here we see the ground truth housing prices for all test samples
in blue and our predicted housing prices in red. Pretty close, if you ask me. It is interesting
to note though that the model tends to be off the most for really high or really low housing
prices, such as the peak values of data point 12, 18, and 42. We can formalize the amount of
variance in the data that we were able to explain by calculating $R^2$ | Python Code:
import numpy as np
import cv2
from sklearn import datasets
from sklearn import metrics
from sklearn import model_selection
from sklearn import linear_model
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.rcParams.update({'font.size': 16})
Explanation: <!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.
The code is released under the MIT license,
and is available on GitHub.
Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
buying the book!
<!--NAVIGATION-->
< Understanding the k-NN Classifier | Contents | Applying Lasso and Ridge Regression >
Using Regression Models to Predict Continuous Outcomes
Now let's turn our attention to a regression problem. Regression is all about predicting continuous outcomes rather than predicting
discrete class labels.
Using linear regression to predict Boston housing prices
The easiest regression model is called linear regression. The idea behind linear regression is
to describe a target variable (such as Boston house pricing) with a linear combination of
features.
If you want to understand how the math behind linear regression works, please refer to the book (p.65ff.).
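As a quick reminder (paraphrased, not an excerpt from the book): linear regression predicts the target as a weighted sum of the features, $\hat{y} = w_0 + w_1 x_1 + \dots + w_D x_D$, and the weights are chosen to minimize the sum of squared residuals $\sum_i (y_i - \hat{y}_i)^2$ over the training data.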
To get a better understanding of linear regression, we want to build a simple model that can
be applied to one of the most famous machine learning datasets known as the Boston
housing prices dataset. Here, the goal is to predict the value of homes in several Boston
neighborhoods in the 1970s, using information such as crime rate, property tax rate,
distance to employment centers, and highway accessibility.
Loading the dataset
We can again thank scikit-learn for easy access to the dataset. We first import all the
necessary modules, as we did earlier:
End of explanation
boston = datasets.load_boston()
Explanation: Then, loading the dataset is a one-liner:
End of explanation
dir(boston)
Explanation: The structure of the boston object is identical to the iris object. We can get more information about the dataset by looking at the fields of the boston object:
- DESCR: Get a description of the data
- data: The actual data, <num_samples x num_features>
- feature_names: The names of the features
- target: The class labels, <num_samples x 1>
- target_names: The names of the class labels
End of explanation
boston.data.shape
Explanation: The dataset contains a total of 506 data points, each of which has 13 features:
End of explanation
boston.target.shape
Explanation: Of course, we have only a single target value, which is the housing price:
End of explanation
linreg = linear_model.LinearRegression()
Explanation: Training the model
Believe it or not, OpenCV does not offer any good implementation of linear regression.
Some people online say that you can use cv2.fitLine, but that is different. This is a
perfect opportunity to get familiar with scikit-learn's API:
End of explanation
X_train, X_test, y_train, y_test = model_selection.train_test_split(
boston.data, boston.target, test_size=0.1, random_state=42
)
Explanation: In the preceding command, we want to split the data into training and test sets. We are free
to make the split as we see fit, but usually it is a good idea to reserve between 10 percent
and 30 percent for testing. Here, we choose 10 percent, using the test_size argument:
End of explanation
linreg.fit(X_train, y_train)
Explanation: In scikit-learn, the train function is called fit, but otherwise behaves exactly the same as
in OpenCV:
End of explanation
metrics.mean_squared_error(y_train, linreg.predict(X_train))
Explanation: We can look at the mean squared error of our predictions by comparing the true housing
prices, y_train, to our predictions, linreg.predict(X_train):
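(For reference, the mean squared error is $MSE = \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2$, the average squared difference between the true and predicted prices.)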
End of explanation
linreg.score(X_train, y_train)
Explanation: The score method of the linreg object returns the coefficient of determination (R
squared):
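For reference, $R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$: the fraction of the variance in the target that the model explains, with 1 corresponding to a perfect fit.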
End of explanation
y_pred = linreg.predict(X_test)
metrics.mean_squared_error(y_test, y_pred)
Explanation: Testing the model
In order to test the generalization performance of the model, we calculate the mean
squared error on the test data:
End of explanation
plt.figure(figsize=(10, 6))
plt.plot(y_test, linewidth=3, label='ground truth')
plt.plot(y_pred, linewidth=3, label='predicted')
plt.legend(loc='best')
plt.xlabel('test data points')
plt.ylabel('target value')
Explanation: We note that the mean squared error is a little lower on the test set than the training set.
This is good news, as we care mostly about the test error. However, from these numbers, it
is really hard to understand how good the model really is. Perhaps it's better to plot the
data:
End of explanation
plt.figure(figsize=(10, 6))
plt.plot(y_test, y_pred, 'o')
plt.plot([-10, 60], [-10, 60], 'k--')
plt.axis([-10, 60, -10, 60])
plt.xlabel('ground truth')
plt.ylabel('predicted')
scorestr = r'R$^2$ = %.3f' % linreg.score(X_test, y_test)
errstr = 'MSE = %.3f' % metrics.mean_squared_error(y_test, y_pred)
plt.text(-5, 50, scorestr, fontsize=12)
plt.text(-5, 45, errstr, fontsize=12);
Explanation: This makes more sense! Here we see the ground truth housing prices for all test samples
in blue and our predicted housing prices in red. Pretty close, if you ask me. It is interesting
to note though that the model tends to be off the most for really high or really low housing
prices, such as the peak values of data points 12, 18, and 42. We can formalize the amount of
variance in the data that we were able to explain by calculating $R^2$:
End of explanation |
593 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TensorFlow Addons Losses
Step2: Prepare the Data
Step3: Build the Model
Step4: Train and Evaluate | Python Code:
#@title Licensed under the Apache License, Version 2.0
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -U tensorflow-addons
import io
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa
import tensorflow_datasets as tfds
Explanation: TensorFlow Addons Losses: TripletSemiHardLoss
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/addons/tutorials/losses_triplet"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/losses_triplet.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/addons/blob/master/docs/tutorials/losses_triplet.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/losses_triplet.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
This notebook will demonstrate how to use the TripletSemiHardLoss function in TensorFlow Addons.
Resources:
FaceNet: A Unified Embedding for Face Recognition and Clustering
Oliver Moindrot's blog does an excellent job of describing the algorithm in detail
TripletLoss
As first introduced in the FaceNet paper, TripletLoss is a loss function that trains a neural network to closely embed features of the same class while maximizing the distance between embeddings of different classes. To do this an anchor is chosen along with one negative and one positive sample.
The loss function is described as a Euclidean distance function:
Where A is our anchor input, P is the positive sample input, N is the negative sample input, and alpha is some margin you use to specify when a triplet has become too "easy" and you no longer want to adjust the weights from it.
SemiHard Online Learning
As shown in the paper, the best results are from triplets known as "Semi-Hard". These are defined as triplets where the negative is farther from the anchor than the positive, but still produces a positive loss. To efficiently find these triplets you utilize online learning and only train from the Semi-Hard examples in each batch.
Setup
End of explanation
def _normalize_img(img, label):
img = tf.cast(img, tf.float32) / 255.
return (img, label)
train_dataset, test_dataset = tfds.load(name="mnist", split=['train', 'test'], as_supervised=True)
# Build your input pipelines
train_dataset = train_dataset.shuffle(1024).batch(32)
train_dataset = train_dataset.map(_normalize_img)
test_dataset = test_dataset.batch(32)
test_dataset = test_dataset.map(_normalize_img)
Explanation: Prepare the Data
End of explanation
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(filters=64, kernel_size=2, padding='same', activation='relu', input_shape=(28,28,1)),
tf.keras.layers.MaxPooling2D(pool_size=2),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=2),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256, activation=None), # No activation on final dense layer
tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1)) # L2 normalize embeddings
])
Explanation: Build the Model
End of explanation
# Compile the model
model.compile(
optimizer=tf.keras.optimizers.Adam(0.001),
loss=tfa.losses.TripletSemiHardLoss())
# Train the network
history = model.fit(
train_dataset,
epochs=5)
# Evaluate the network
results = model.predict(test_dataset)
# Save test embeddings for visualization in projector
np.savetxt("vecs.tsv", results, delimiter='\t')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')
for img, labels in tfds.as_numpy(test_dataset):
[out_m.write(str(x) + "\n") for x in labels]
out_m.close()
try:
from google.colab import files
files.download('vecs.tsv')
files.download('meta.tsv')
except:
pass
Explanation: Train and Evaluate
End of explanation |
594 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 1
Step1: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
Step3: Useful SFrame summary functions
In order to make use of the closed form solution as well as take advantage of graphlab's built in functions we will review some important ones. In particular
Step4: As we see we get the same answer both ways
Step5: Aside
Step6: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line
Step7: Now that we know it works let's build a regression model for predicting price based on sqft_living. Rembember that we train on train_data!
Step8: Predicting Values
Now that we have the model parameters
Step9: Now that we can calculate a prediction given the slop and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estiamted above.
Quiz Question
Step10: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals and the residuals is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope
Step11: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
Step12: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question
Step13: Predict the squarefeet given price
What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
Step14: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that coses $800,000 to be.
Quiz Question
Step15: New Model
Step16: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question | Python Code:
import graphlab
Explanation: Regression Week 1: Simple Linear Regression
In this notebook we will use data on house sales in King County to predict house prices using simple (one input) linear regression. You will:
* Use graphlab SArray and SFrame functions to compute important summary statistics
* Write a function to compute the Simple Linear Regression weights using the closed form solution
* Write a function to make predictions of the output given the input feature
* Turn the regression around to predict the input given the output
* Compare two different models for predicting house prices
In this notebook you will be provided with some already complete code as well as some code that you should complete yourself in order to answer quiz questions. The code we provide to complte is optional and is there to assist you with solving the problems but feel free to ignore the helper code and write your own.
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, the .mean() function will do
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
Explanation: Useful SFrame summary functions
In order to make use of the closed form solution as well as take advantage of graphlab's built-in functions we will review some important ones. In particular:
* Computing the sum of an SArray
* Computing the arithmetic average (mean) of an SArray
* multiplying SArrays by constants
* multiplying SArrays by other SArrays
End of explanation
# if we want to multiply every price by 0.5 it's as simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # prices_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
Explanation: As we see we get the same answer both ways
End of explanation
def simple_linear_regression(input_feature, output):
# compute the mean of input_feature and output
x = input_feature
y = output
avg_x = x.mean()
avg_y = y.mean()
n = x.size()
    # use the closed-form formula for the slope:
    #   slope = sum((y - mean(y)) * (x - mean(x))) / sum((x - mean(x)) ** 2)
    # computed below via x_err = x - mean(x); the mean(y) term drops out of the numerator
x_err = x-avg_x
slope = (y*x_err).sum()/(x*x_err).sum()
# use the formula for the intercept
intercept = y.mean() - x.mean()*slope
return (intercept, slope)
Explanation: Aside: The Python notation x.xxe+yy means x.xx * 10^(yy), e.g. 100 = 10^2 = 1*10^2 = 1e2
Build a generic simple linear regression function
Armed with these SArray functions we can use the closed form solution found from lecture to compute the slope and intercept for a simple linear regression on observations stored as SArrays: input_feature, output.
Complete the following function (or write your own) to compute the simple linear regression slope and intercept:
End of explanation
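As an added aside (not part of the original assignment): the closed-form estimates implemented above are slope = sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x)) ** 2) and intercept = mean(y) - slope * mean(x). A minimal NumPy sketch of the same computation can be handy for sanity-checking the SArray version on small inputs:
import numpy as np
def simple_linear_regression_np(x, y):
    # Same closed-form fit as above, written with plain NumPy arrays.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x_err = x - x.mean()
    slope = np.sum(y * x_err) / np.sum(x * x_err)
    intercept = y.mean() - slope * x.mean()
    return (intercept, slope)
# On the synthetic line y = 1 + 1*x both parameters should come out as 1.
print(simple_linear_regression_np(np.arange(5), 1 + np.arange(5)))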
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(1 + 1*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
Explanation: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line: output = 1 + 1*input_feature then we know both our slope and intercept should be 1
End of explanation
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
Explanation: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
End of explanation
def get_regression_predictions(input_feature, intercept, slope):
# calculate the predicted values:
predicted_values = intercept + slope * input_feature
return predicted_values
Explanation: Predicting Values
Now that we have the model parameters: intercept & slope we can make predictions. Using SArrays it's easy to multiply an SArray by a constant and add a constant value. Complete the following function to return the predicted output given the input_feature, slope and intercept:
End of explanation
my_house_sqft = 2650
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)
Explanation: Now that we can calculate a prediction given the slope and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Quiz Question: Using your Slope and Intercept from (4), What is the predicted price for a house with 2650 sqft?
End of explanation
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
# First get the predictions
predictions = get_regression_predictions(input_feature, intercept, slope)
# then compute the residuals (since we are squaring it doesn't matter which order you subtract)
resd = predictions-output
# square the residuals and add them up
RSS = (resd*resd).sum()
return(RSS)
Explanation: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals, and the residual is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope:
End of explanation
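As an added cross-check (an aside): RSS is simply sum((predicted_i - true_i) ** 2), so the same quantity can be computed for plain Python lists without SArrays:
def rss_plain(xs, ys, intercept, slope):
    # Residual sum of squares over plain Python sequences.
    return sum((intercept + slope * x - y) ** 2 for x, y in zip(xs, ys))
# On the exact line y = 1 + 1*x the residuals are all zero.
print(rss_plain([0, 1, 2, 3, 4], [1, 2, 3, 4, 5], 1, 1))  # 0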
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
Explanation: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
End of explanation
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
Explanation: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question: According to this function and the slope and intercept from the squarefeet model What is the RSS for the simple linear regression using squarefeet to predict prices on TRAINING data?
End of explanation
def inverse_regression_predictions(output, intercept, slope):
# solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions:
estimated_feature = (output - intercept)/slope
return estimated_feature
Explanation: Predict the squarefeet given price
What if we want to predict the square feet given the price? Since we have an equation y = a + b*x we can solve the function for x. So if we have the intercept (a), the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
End of explanation
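A quick worked check of the inversion (added as an aside): solving y = intercept + slope * x for x gives x = (y - intercept) / slope, so predicting an output and then inverting it should return the original input. This reuses the two functions defined above with made-up parameters:
# Round-trip check with made-up parameters a = 10, b = 2:
# the forward prediction of x = 5 gives y = 20, and inverting recovers 5.
a, b = 10.0, 2.0
y_hat = get_regression_predictions(5.0, a, b)
print(inverse_regression_predictions(y_hat, a, b))  # 5.0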
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
Explanation: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Quiz Question: According to this function and the regression slope and intercept from (3) what is the estimated square-feet for a house costing $800,000?
End of explanation
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
bedrooms_intercept, bedrooms_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
Explanation: New Model: estimate prices from bedrooms
We have made one model for predicting house prices using squarefeet, but there are many other features in the sales SFrame.
Use your simple linear regression function to estimate the regression parameters from predicting Prices based on number of bedrooms. Use the training data!
End of explanation
# Compute RSS when using bedrooms on TEST data:
rss_bedrooms_test = get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], bedrooms_intercept, bedrooms_slope)
print 'RSS on TEST data using bedrooms: ' + str(rss_bedrooms_test)
# Compute RSS when using squarefeet on TEST data:
rss_sqft_test = get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope)
print 'RSS on TEST data using square feet: ' + str(rss_sqft_test)
Explanation: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question: Which model (square feet or bedrooms) has lowest RSS on TEST data? Think about why this might be the case.
End of explanation |
595 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Weighted-residual method
Let us consider the equation
$$A u = f\quad \text{in } \Omega$$
For an approximation $u_N$ of $u$, the residual, $R_N$, is defined by
$$R_N \equiv Au_N - f$$
When the residual is made orthogonal to the subspace spanned by a base ${\psi_k}$, we have a weighted-residual method, i.e.,
$$\langle R_N, \psi_k\rangle =0 \quad k=1, 2, \cdots, N$$
Step2: Ritz method
Step4: Bubnov-Galerkin method
The Bubnov-Galerkin method is a generalization of the Ritz method. As in the Ritz method, it seeks an approximate solution as a linear combination of basis functions
$$u_N = \sum_{i=1}^{N} c_i \phi_i\, .$$
In this case, the coefficients $c_i$ are determined from the condition that the residual $R_N$ is orthogonal to the basis functions $\phi_1, \phi_2, \cdots, \phi_N$
Step5: The following cell changes the style of the notebook. | Python Code:
from __future__ import division, print_function
import numpy as np
from sympy import *
from sympy.plotting import plot3d
from scipy.linalg import eigh
from scipy.special import jn_zeros as Jn_zeros, jn as Jn
import matplotlib.pyplot as plt
init_session()
%matplotlib inline
plt.style.use("seaborn-notebook")
Explanation: Weighted-residual method
Let us consider the equation
$$A u = f\quad \text{in } \Omega$$
For an approximation $u_N$ of $u$, the residual, $R_N$, is defined by
$$R_N \equiv Au_N - f$$
When the residual is made orthogonal to the subspace spanned by a base ${\psi_k}$, we have a weighted-residual method, i.e.,
$$\langle R_N, \psi_k\rangle =0 \quad k=1, 2, \cdots, N$$
End of explanation
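Before the Ritz code below, it helps to state the problem it targets (an added note; the original leaves the PDE implicit). The axisymmetric modes of a unit circular membrane, clamped at its edge, satisfy
$$\frac{1}{r}\frac{\mathrm{d}}{\mathrm{d}r}\left(r\,\frac{\mathrm{d}u}{\mathrm{d}r}\right) + \lambda u = 0, \quad 0 < r < 1, \qquad u(1) = 0,$$
with exact eigenvalues $\lambda_m = k_m^2$, where $k_m$ are the zeros of the Bessel function $J_0$. The Ritz method approximates $\lambda$ through the Rayleigh quotient
$$\lambda \approx \frac{\displaystyle\int_0^1 r\,(u_N')^2\,\mathrm{d}r}{\displaystyle\int_0^1 r\,u_N^2\,\mathrm{d}r}, \qquad u_N = (1 - r^2)\sum_{k=0}^{m-1} c_k\, r^{2k},$$
and the stationarity conditions with respect to the coefficients $c_k$ yield the generalized eigenvalue problem $K\mathbf{c} = \lambda M\mathbf{c}$, which the code solves with scipy.linalg.eigh.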
def u_fun(r, m):
    """Trial function."""
c = symbols('c0:%i' % m)
w = (1 - r**2) *sum(c[k]*r**(2*k) for k in range (0, m))
return w, c
r = symbols('r')
m = 7
u, coef = u_fun(r, m)
T_inte = u**2
U_inte = diff(u, r)**2
display(U_inte)
display(T_inte)
U = integrate(expand(r*U_inte), (r, 0, 1))
T = integrate(expand(r*T_inte), (r, 0, 1))
K = Matrix(m, m, lambda ii, jj: diff(U, coef[ii], coef[jj]))
K
M = Matrix(m, m, lambda ii, jj: diff(T, coef[ii], coef[jj]))
M
Kn = np.array(K).astype(np.float64)
Mn = np.array(M).astype(np.float64)
vals, vecs = eigh(Kn, Mn, eigvals=(0, m-1))
np.sqrt(vals)
lam = Jn_zeros(0, m)
r_vec = np.linspace(0, 1, 60)
plt.figure(figsize=(14, 5))
ax1 = plt.subplot(1, 2, 1)
ax2 = plt.subplot(1, 2, 2)
for num in range(5):
u_num = lambdify((r), u.subs({coef[kk]: vecs[kk, num] for kk in range(m)}),
"numpy")
ax1.plot(r_vec, u_num(r_vec)/u_num(0))
ax1.set_title("Approximated solution")
ax2.plot(r_vec, Jn(0, lam[num]*r_vec), label=r"$m=%i$"%num)
ax2.set_title("Exact solution")
plt.legend(loc="best", framealpha=0);
Explanation: Ritz method: Axisymmetric modes in a circular membrane
End of explanation
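A quick numerical cross-check (added as an aside): since the Rayleigh-Ritz estimates bound the exact eigenvalues from above, the differences between sqrt(vals) and the zeros of $J_0$ should be small and non-negative for the lowest modes. This reuses vals, m and Jn_zeros from the cells above.
# Compare the Ritz estimates against the exact values (zeros of J0).
exact = Jn_zeros(0, m)
print(np.sqrt(vals) - exact)  # small, non-negative errors for the lowest modes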
def u_fun(r, m):
    """Trial function."""
c = symbols('c0:%i' % m)
w = (1 - r**2) *sum(c[k]*r**(2*k) for k in range (0, m))
return w, c
r = symbols('r')
m = 7
u, coef = u_fun(r, m)
u
U = -integrate(diff(r*diff(u, r), r)*u, (r, 0, 1))
T = integrate(r*u**2, (r, 0, 1))
K = Matrix(m, m, lambda ii, jj: diff(U, coef[ii], coef[jj]))
K
M = Matrix(m, m, lambda ii, jj: diff(T, coef[ii], coef[jj]))
M
Kn = np.array(K).astype(np.float64)
Mn = np.array(M).astype(np.float64)
vals, vecs = eigh(Kn, Mn, eigvals=(0, m-1))
np.sqrt(vals)
Explanation: Bubnov-Galerkin method
The Bubnov-Galerkin method is a generalization of the Ritz method. As in the Ritz method, it seeks an approximate solution as a linear combination of basis functions
$$u_N = \sum_{i=1}^{N} c_i \phi_i\, .$$
In this case, the coefficients $c_i$ are determined from the condition that the residual $R_N$ is orthogonal to the basis functions $\phi_1, \phi_2, \cdots, \phi_N$:
$$\langle R_N, \phi_k\rangle =0\quad k=1, 2, \cdots, N \, .$$
If the operator is positive-definite then $A=T^*T$ and the problem can be rewritten identically to the Ritz method.
End of explanation
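To make that last statement concrete for the membrane problem (an added note): with trial functions that vanish at $r=1$, integration by parts gives
$$-\int_0^1 \phi_k\,\frac{\mathrm{d}}{\mathrm{d}r}\!\left(r\,\frac{\mathrm{d}\phi_i}{\mathrm{d}r}\right)\mathrm{d}r = \int_0^1 r\,\phi_k'\,\phi_i'\,\mathrm{d}r,$$
so the Bubnov-Galerkin conditions lead to the same generalized eigenvalue problem $K\mathbf{c} = \lambda M\mathbf{c}$ as the Ritz approach (the Hessians computed in both cells carry a common factor of 2 that cancels between $K$ and $M$), which is why both cells report the same approximations to the zeros of $J_0$.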
from IPython.core.display import HTML
def css_styling():
styles = open('../styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
Explanation: The following cell changes the style of the notebook.
End of explanation |
596 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Most of the public Colour API is available from the colour namespace.
Step1: For example, computing the CIE 2017 Colour Fidelity Index of a light source can be done as follows
Step2: The correlated colour temperature of a CIE Standard Illuminant can be calculated easily
Step3: Colour also implements various plotting functions via Matplotlib | Python Code:
import colour
Explanation: Most of the public Colour API is available from the colour namespace.
End of explanation
sd = colour.SDS_ILLUMINANTS.get('FL2')
colour.colour_fidelity_index(sd)
Explanation: For example, computing the CIE 2017 Colour Fidelity Index of a light source can be done as follows:
End of explanation
il = colour.CCS_ILLUMINANTS['CIE 1931 2 Degree Standard Observer']['D50']
colour.xy_to_CCT(il, method='Hernandez 1999')
Explanation: The correlated colour temperature of a CIE Standard Illuminant can be calculated easily:
End of explanation
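The same two calls work for other illuminants as well; for instance (an added example that reuses only the functions already shown above), the D65 daylight illuminant:
# CIE 2017 Colour Fidelity Index and correlated colour temperature for D65.
sd_d65 = colour.SDS_ILLUMINANTS['D65']
print(colour.colour_fidelity_index(sd_d65))
xy_d65 = colour.CCS_ILLUMINANTS['CIE 1931 2 Degree Standard Observer']['D65']
print(colour.xy_to_CCT(xy_d65, method='Hernandez 1999'))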
colour.plotting.colour_style()
colour.plotting.plot_visible_spectrum();
Explanation: Colour also implements various plotting functions via Matplotlib:
End of explanation |
597 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
San Diego Burrito Analytics
Step1: Load data
Step2: Process
Step3: Process Cali burrito data | Python Code:
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
sns.set_style("white")
Explanation: San Diego Burrito Analytics: Coordinates
Determine the longitude and latitude for each restaurant based on its address
Default imports
End of explanation
import util2
df, dfRestaurants, dfIngredients = util2.load_burritos()
N = df.shape[0]
Explanation: Load data
End of explanation
dfRestaurants=dfRestaurants.reset_index().drop('index',axis=1)
dfRestaurants
Explanation: Process
End of explanation
dfAvg = df.groupby('Location').agg({'Cost': np.mean,'Volume': np.mean,'Hunger': np.mean,
'Tortilla': np.mean,'Temp': np.mean,'Meat': np.mean,
'Fillings': np.mean,'Meat:filling': np.mean,'Uniformity': np.mean,
'Salsa': np.mean,'Synergy': np.mean,'Wrap': np.mean,
'overall': np.mean, 'Location':np.size})
dfAvg.rename(columns={'Location': 'N'}, inplace=True)
dfAvg['Location'] = list(dfAvg.index)
# Calculate latitude and longitude for each restaurant address
import geocoder
addresses = dfRestaurants['Address'] + ', San Diego, CA'
lats = np.zeros(len(addresses))
longs = np.zeros(len(addresses))
for i, address in enumerate(addresses):
g = geocoder.google(address)
Ntries = 1
while g.latlng ==[]:
g = geocoder.google(address)
Ntries +=1
print 'try again: ' + address
if Ntries >= 5:
if 'Marshall College' in address:
address = '9500 Gilman Drive, La Jolla, CA'
g = geocoder.google(address)
Ntries = 1
while g.latlng ==[]:
g = geocoder.google(address)
Ntries +=1
print 'try again: ' + address
if Ntries >= 5:
raise ValueError('Address not found: ' + address)
else:
raise ValueError('Address not found: ' + address)
lats[i], longs[i] = g.latlng
# Check for nonsense lats and longs
if sum(np.logical_or(lats > 34, lats < 32)):
    raise ValueError('Address not in San Diego')
if sum(np.logical_or(longs < -118, longs > -117)):
    raise ValueError('Address not in San Diego')
# Incorporate lats and longs into restaurants data
dfRestaurants['Latitude'] = lats
dfRestaurants['Longitude'] = longs
# Merge restaurant data with burrito data
dfTableau = pd.merge(dfRestaurants,dfAvg,on='Location')
dfTableau.head()
dfTableau.to_csv('df_burrito_tableau.csv')
Explanation: Process Cali burrito data: Averages for each restaurant
End of explanation |
598 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Since Python 3.0, reduce is no longer a built-in function; to use it you need from functools import reduce.
The reduce function performs a reduction. It works like this: on each iteration, the result of the previous iteration (the init value on the first iteration, or the first element of seq if no init is given) is combined with the next element by the binary function func. The init argument is optional; when supplied, it is used as the first operand of the first iteration.
Step1: In short, this can be illustrated with the following schematic expression:
Step2: The diagram below shows how reduce works:
<img src="http | Python Code:
# Format: reduce(func, seq[, init])
Explanation: Since Python 3.0, reduce is no longer a built-in function; to use it you need from functools import reduce.
The reduce function performs a reduction. It works like this: on each iteration, the result of the previous iteration (the init value on the first iteration, or the first element of seq if no init is given) is combined with the next element by the binary function func. The init argument is optional; when supplied, it is used as the first operand of the first iteration.
End of explanation
# reduce(func, [1, 2, 3]) == func(func(1, 2), 3)
Explanation: In short, this can be illustrated with the following schematic expression:
End of explanation
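A couple of concrete calls (added for illustration) make the expansion above and the role of init explicit:
from functools import reduce
# Without init: reduce(f, [1, 2, 3]) == f(f(1, 2), 3)
print(reduce(lambda x, y: x + y, [1, 2, 3]))      # 6
# With init: reduce(f, [1, 2, 3], 10) == f(f(f(10, 1), 2), 3)
print(reduce(lambda x, y: x + y, [1, 2, 3], 10))  # 16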
from functools import reduce
n = 5
print('{}'.format(reduce(lambda x, y: x * y, range(1, n + 1))))  # 1 * 2 * 3 * 4 * 5 = 120
Explanation: The diagram below shows how reduce works:
<img src="http://www.pythoner.com/wp-content/uploads/2013/01/reduce.png" />
As an example, the factorial is a common mathematical operation. Python does not provide a built-in factorial function, so we can implement one with reduce.
End of explanation |
599 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex AI Pipelines
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Install the latest GA version of google-cloud-pipeline-components library as well.
Step3: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step4: Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
Step5: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step6: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step7: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step8: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step9: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step10: Only if your bucket doesn't already exist
Step11: Finally, validate access to your Cloud Storage bucket by examining its contents
Step12: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
Step13: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
Step14: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step15: Vertex AI Pipelines constants
Set up the following constants for Vertex AI Pipelines
Step16: Additional imports.
Step17: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
Step18: Define AutoML text classification model pipeline that uses components from google_cloud_pipeline_components
Next, you define the pipeline.
Create and deploy an AutoML text classification Model resource using a Dataset resource.
Step19: Compile the pipeline
Next, compile the pipeline.
Step20: Run the pipeline
Next, run the pipeline.
Step21: Click on the generated link to see your run in the Cloud Console.
<!-- It should look something like this as it is running | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex AI Pipelines: AutoML text classification pipelines using google-cloud-pipeline-components
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_automl_text.ipynb">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This notebook shows how to use the components defined in google_cloud_pipeline_components to build an AutoML text classification workflow on Vertex AI Pipelines.
Dataset
The dataset used for this tutorial is the Happy Moments dataset from Kaggle Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
Objective
In this tutorial, you create an AutoML text classification using a pipeline with components from google_cloud_pipeline_components.
The steps performed include:
Create a Dataset resource.
Train an AutoML Model resource.
Creates an Endpoint resource.
Deploys the Model resource to the Endpoint resource.
The components are documented here.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3.
Activate that environment and run pip3 install Jupyter in a terminal shell to install Jupyter.
Run jupyter notebook on the command line in a terminal shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex AI SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
! pip3 install $USER kfp google-cloud-pipeline-components --upgrade
Explanation: Install the latest GA version of google-cloud-pipeline-components library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
! python3 -c "import kfp; print('KFP SDK version: {}'.format(kfp.__version__))"
! python3 -c "import google_cloud_pipeline_components; print('google_cloud_pipeline_components version: {}'.format(google_cloud_pipeline_components.__version__))"
Explanation: Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].strip()
print("Service Account:", SERVICE_ACCOUNT)
Explanation: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
End of explanation
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_NAME
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_NAME
Explanation: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
End of explanation
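Optionally (an added check, not part of the original tutorial), you can confirm the bindings took effect by listing the bucket's IAM policy:
! gsutil iam get $BUCKET_NAME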
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
PIPELINE_ROOT = "{}/pipeline_root/happydb".format(BUCKET_NAME)
Explanation: Vertex AI Pipelines constants
Set up the following constants for Vertex AI Pipelines:
End of explanation
import kfp
Explanation: Additional imports.
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
IMPORT_FILE = "gs://cloud-ml-data/NL-classification/happiness.csv"
@kfp.dsl.pipeline(name="automl-text-classification" + TIMESTAMP)
def pipeline(
project: str = PROJECT_ID, region: str = REGION, import_file: str = IMPORT_FILE
):
from google_cloud_pipeline_components import aiplatform as gcc_aip
from google_cloud_pipeline_components.v1.endpoint import (EndpointCreateOp,
ModelDeployOp)
dataset_create_task = gcc_aip.TextDatasetCreateOp(
display_name="train-automl-happydb",
gcs_source=import_file,
import_schema_uri=aip.schema.dataset.ioformat.text.multi_label_classification,
project=project,
)
training_run_task = gcc_aip.AutoMLTextTrainingJobRunOp(
dataset=dataset_create_task.outputs["dataset"],
display_name="train-automl-happydb",
prediction_type="classification",
multi_label=True,
training_fraction_split=0.6,
validation_fraction_split=0.2,
test_fraction_split=0.2,
model_display_name="train-automl-happydb",
project=project,
)
endpoint_op = EndpointCreateOp(
project=project,
location=region,
display_name="train-automl-flowers",
)
ModelDeployOp(
model=training_run_task.outputs["model"],
endpoint=endpoint_op.outputs["endpoint"],
automatic_resources_min_replica_count=1,
automatic_resources_max_replica_count=1,
)
Explanation: Define AutoML text classification model pipeline that uses components from google_cloud_pipeline_components
Next, you define the pipeline.
Create and deploy an AutoML text classification Model resource using a Dataset resource.
End of explanation
from kfp.v2 import compiler # noqa: F811
compiler.Compiler().compile(
pipeline_func=pipeline,
package_path="text classification_pipeline.json".replace(" ", "_"),
)
Explanation: Compile the pipeline
Next, compile the pipeline.
End of explanation
DISPLAY_NAME = "happydb_" + TIMESTAMP
job = aip.PipelineJob(
display_name=DISPLAY_NAME,
template_path="text classification_pipeline.json".replace(" ", "_"),
pipeline_root=PIPELINE_ROOT,
enable_caching=False,
)
job.run()
! rm text_classification_pipeline.json
Explanation: Run the pipeline
Next, run the pipeline.
End of explanation
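As an optional follow-up sketch (not part of the original tutorial): once the pipeline has finished, you could look up the endpoint it created and request an online prediction. The instance payload below is an assumption about the AutoML text prediction schema, and the display name must match the EndpointCreateOp above.
# Hypothetical follow-up: query the endpoint deployed by the pipeline.
endpoints = aip.Endpoint.list(
    filter="display_name=train-automl-happydb", order_by="create_time"
)
if endpoints:
    endpoint = endpoints[0]
    # Assumed AutoML text instance schema; adjust to your model's schema if it differs.
    response = endpoint.predict(
        instances=[{"content": "I went for a long hike with my best friend.", "mime_type": "text/plain"}]
    )
    print(response.predictions)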
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
try:
if delete_model and "DISPLAY_NAME" in globals():
models = aip.Model.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
model = models[0]
aip.Model.delete(model)
print("Deleted model:", model)
except Exception as e:
print(e)
try:
if delete_endpoint and "DISPLAY_NAME" in globals():
endpoints = aip.Endpoint.list(
filter=f"display_name={DISPLAY_NAME}_endpoint", order_by="create_time"
)
endpoint = endpoints[0]
endpoint.undeploy_all()
aip.Endpoint.delete(endpoint.resource_name)
print("Deleted endpoint:", endpoint)
except Exception as e:
print(e)
if delete_dataset and "DISPLAY_NAME" in globals():
if "text" == "tabular":
try:
datasets = aip.TabularDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TabularDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "text" == "image":
try:
datasets = aip.ImageDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.ImageDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "text" == "text":
try:
datasets = aip.TextDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TextDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "text" == "video":
try:
datasets = aip.VideoDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.VideoDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
try:
if delete_pipeline and "DISPLAY_NAME" in globals():
pipelines = aip.PipelineJob.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
pipeline = pipelines[0]
aip.PipelineJob.delete(pipeline.resource_name)
print("Deleted pipeline:", pipeline)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Click on the generated link to see your run in the Cloud Console.
<!-- It should look something like this as it is running:
<a href="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" width="40%"/></a> -->
In the UI, many of the pipeline DAG nodes will expand or collapse when you click on them. Here is a partially-expanded view of the DAG (click image to see larger version).
<a href="https://storage.googleapis.com/amy-jo/images/mp/automl_text_classif.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/mp/automl_text_classif.png" width="40%"/></a>
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial -- Note: this is auto-generated and not all resources may be applicable for this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |