Tasks: Text2Text Generation
Modalities: Text
Formats: json
Languages: code
Size: 1K - 10K
Tags: code-generation
Each row has four fields: prompt, reference_code, metadata, and code_context.
Problem:
I have the following DataFrame:
Col1 Col2 Col3 Type
0 1 2 3 1
1 4 5 6 1
2 7 8 9 2
3 10 11 12 2
4 13 14 15 3
5 16 17 18 3
The DataFrame is read from a CSV file. All rows which have Type 1 are on top, followed by the rows with Type 2, followed by the rows with Type 3, etc.
I would like to shuffle the order of the DataFrame's rows according to a list.
For example, given the list [2, 4, 0, 3, 1, 5], the desired result should be:
Col1 Col2 Col3 Type
2 7 8 9 2
4 13 14 15 3
0 1 2 3 1
3 10 11 12 2
1 4 5 6 1
5 16 17 18 3
...
How can I achieve this?
A:
<code>
import pandas as pd
import numpy as np
df = pd.DataFrame({'Col1': [1, 4, 7, 10, 13, 16],
'Col2': [2, 5, 8, 11, 14, 17],
'Col3': [3, 6, 9, 12, 15, 18],
'Type': [1, 1, 2, 2, 3, 3]})
List = np.random.permutation(len(df))
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
def g(df, List):
    return df.iloc[List]
result = g(df.copy(), List)
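</code>
Note: iloc indexes by position, so passing a permutation of 0..n-1 reorders the rows, and the original index labels travel with the rows, matching the expected output above. A minimal sketch (assuming the setup above):
<code>
order = [2, 4, 0, 3, 1, 5]
shuffled = df.iloc[order]  # rows appear in the order given by the list
# shuffled.index is [2, 4, 0, 3, 1, 5]; add .reset_index(drop=True) only if
# a fresh 0..n-1 index is wanted instead.
</code>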
Metadata:
{
"problem_id": 0,
"library_problem_id": 0,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Origin",
"perturbation_origin_id": 0
}
Code context:
import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, List = data
return df.iloc[List]
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"Col1": [1, 4, 7, 10, 13, 16],
"Col2": [2, 5, 8, 11, 14, 17],
"Col3": [3, 6, 9, 12, 15, 18],
"Type": [1, 1, 2, 2, 3, 3],
}
)
List = np.random.permutation(len(df))
return df, List
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df, List = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I have the following DataFrame:
Col1 Col2 Col3 Type
0 1 2 3 1
1 4 5 6 1
2 7 8 9 2
3 10 11 12 2
4 13 14 15 3
5 16 17 18 3
The DataFrame is read from a CSV file. All rows which have Type 1 are on top, followed by the rows with Type 2, followed by the rows with Type 3, etc.
I would like to shuffle the order of the DataFrame's rows according to a list.
For example, given the list [2, 4, 0, 3, 1, 5], the desired DataFrame should be:
Col1 Col2 Col3 Type
2 7 8 9 2
4 13 14 15 3
0 1 2 3 1
3 10 11 12 2
1 4 5 6 1
5 16 17 18 3
...
I want to know how many rows have a different Type than in the original DataFrame. In this case, 4 rows (0, 1, 2, 4) have a different Type than the original.
How can I achieve this?
A:
<code>
import pandas as pd
import numpy as np
df = pd.DataFrame({'Col1': [1, 4, 7, 10, 13, 16],
'Col2': [2, 5, 8, 11, 14, 17],
'Col3': [3, 6, 9, 12, 15, 18],
'Type': [1, 1, 2, 2, 3, 3]})
List = np.random.permutation(len(df))
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
def g(df, List):
    df2 = df.iloc[List].reindex().reset_index(drop=True)
    return (df2.Type != df.Type).sum()
result = g(df.copy(), List)
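</code>
Note: the reset_index(drop=True) is what makes the comparison positional: df2.Type and df.Type are aligned on their index before comparing, so both need the default 0..n-1 index. (.reindex() with no arguments is a no-op here and could be dropped.) A sketch of the same idea:
<code>
reordered = df.iloc[List].reset_index(drop=True)  # position i now holds row List[i]
changed = (reordered.Type != df.Type).sum()       # count positions whose Type changed
</code>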
Metadata:
{
"problem_id": 1,
"library_problem_id": 1,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 0
}
Code context:
import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, List = data
df2 = df.iloc[List].reindex().reset_index(drop=True)
return (df2.Type != df.Type).sum()
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"Col1": [1, 4, 7, 10, 13, 16],
"Col2": [2, 5, 8, 11, 14, 17],
"Col3": [3, 6, 9, 12, 15, 18],
"Type": [1, 1, 2, 2, 3, 3],
}
)
List = np.random.permutation(len(df))
return df, List
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
assert result == ans
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df, List = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I have the following pandas dataframe:
import pandas as pd
from pandas import Series, DataFrame
data = DataFrame({'Qu1': ['apple', 'potato', 'cheese', 'banana', 'cheese', 'banana', 'cheese', 'potato', 'egg'],
'Qu2': ['sausage', 'banana', 'apple', 'apple', 'apple', 'sausage', 'banana', 'banana', 'banana'],
'Qu3': ['apple', 'potato', 'sausage', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'egg']})
I'd like to change the values in columns Qu1, Qu2, and Qu3 according to value_counts(): keep a value when its count is greater than or equal to 2.
For example for Qu1 column
>>> pd.value_counts(data.Qu1) >= 2
cheese True
potato True
banana True
apple False
egg False
I'd like to keep the values cheese, potato, and banana, because each has at least two appearances.
The values apple and egg should be replaced by the value other.
For column Qu2, no changes:
>>> pd.value_counts(data.Qu2) >= 2
banana True
apple True
sausage True
The final result should be as in the attached test_data:
test_data = DataFrame({'Qu1': ['other', 'potato', 'cheese', 'banana', 'cheese', 'banana', 'cheese', 'potato', 'other'],
'Qu2': ['sausage', 'banana', 'apple', 'apple', 'apple', 'sausage', 'banana', 'banana', 'banana'],
'Qu3': ['other', 'potato', 'other', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'other']})
Thanks!
A:
<code>
import pandas as pd
df = pd.DataFrame({'Qu1': ['apple', 'potato', 'cheese', 'banana', 'cheese', 'banana', 'cheese', 'potato', 'egg'],
'Qu2': ['sausage', 'banana', 'apple', 'apple', 'apple', 'sausage', 'banana', 'banana', 'banana'],
'Qu3': ['apple', 'potato', 'sausage', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'egg']})
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
def g(df):
    return df.where(df.apply(lambda x: x.map(x.value_counts())) >= 2, "other")
result = g(df.copy())
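</code>
Note: df.apply(lambda x: x.map(x.value_counts())) builds a DataFrame of the same shape whose cells hold how often each cell's value occurs in its own column; df.where then keeps the cells whose count passes the threshold and fills the rest with "other". Broken into steps (a sketch, assuming the setup above):
<code>
counts = df.apply(lambda x: x.map(x.value_counts()))  # per-cell occurrence counts
result = df.where(counts >= 2, "other")               # keep frequent values, else "other"
</code>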
Metadata:
{
"problem_id": 2,
"library_problem_id": 2,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Origin",
"perturbation_origin_id": 2
}
Code context:
import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.where(df.apply(lambda x: x.map(x.value_counts())) >= 2, "other")
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"Qu1": [
"apple",
"potato",
"cheese",
"banana",
"cheese",
"banana",
"cheese",
"potato",
"egg",
],
"Qu2": [
"sausage",
"banana",
"apple",
"apple",
"apple",
"sausage",
"banana",
"banana",
"banana",
],
"Qu3": [
"apple",
"potato",
"sausage",
"cheese",
"cheese",
"potato",
"cheese",
"potato",
"egg",
],
}
)
if test_case_id == 2:
df = pd.DataFrame(
{
"Qu1": [
"sausage",
"banana",
"apple",
"apple",
"apple",
"sausage",
"banana",
"banana",
"banana",
],
"Qu2": [
"apple",
"potato",
"sausage",
"cheese",
"cheese",
"potato",
"cheese",
"potato",
"egg",
],
"Qu3": [
"apple",
"potato",
"cheese",
"banana",
"cheese",
"banana",
"cheese",
"potato",
"egg",
],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I have the following pandas dataframe:
import pandas as pd
from pandas import Series, DataFrame
data = DataFrame({'Qu1': ['apple', 'potato', 'cheese', 'banana', 'cheese', 'banana', 'cheese', 'potato', 'egg'],
'Qu2': ['sausage', 'banana', 'apple', 'apple', 'apple', 'sausage', 'banana', 'banana', 'banana'],
'Qu3': ['apple', 'potato', 'sausage', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'egg']})
I'd like to change the values in columns Qu1, Qu2, and Qu3 according to value_counts(): keep a value when its count is greater than or equal to 3.
For example for Qu1 column
>>> pd.value_counts(data.Qu1) >= 3
cheese True
potato False
banana False
apple False
egg False
I'd like to keep the value cheese, because it has at least three appearances.
The values potato, banana, apple, and egg should be replaced by the value other.
For column Qu2, only sausage changes:
>>> pd.value_counts(data.Qu2) >= 3
banana True
apple True
sausage False
The final result should be as in the attached test_data:
test_data = DataFrame({'Qu1': ['other', 'other', 'cheese', 'other', 'cheese', 'other', 'cheese', 'other', 'other'],
'Qu2': ['other', 'banana', 'apple', 'apple', 'apple', 'other', 'banana', 'banana', 'banana'],
'Qu3': ['other', 'potato', 'other', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'other']})
Thanks!
A:
<code>
import pandas as pd
df = pd.DataFrame({'Qu1': ['apple', 'potato', 'cheese', 'banana', 'cheese', 'banana', 'cheese', 'potato', 'egg'],
'Qu2': ['sausage', 'banana', 'apple', 'apple', 'apple', 'sausage', 'banana', 'banana', 'banana'],
'Qu3': ['apple', 'potato', 'sausage', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'egg']})
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
def g(df):
    return df.where(df.apply(lambda x: x.map(x.value_counts())) >= 3, "other")
result = g(df.copy())
</code>
Metadata:
{
"problem_id": 3,
"library_problem_id": 3,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Semantic",
"perturbation_origin_id": 2
}
Code context:
import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.where(df.apply(lambda x: x.map(x.value_counts())) >= 3, "other")
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"Qu1": [
"apple",
"potato",
"cheese",
"banana",
"cheese",
"banana",
"cheese",
"potato",
"egg",
],
"Qu2": [
"sausage",
"banana",
"apple",
"apple",
"apple",
"sausage",
"banana",
"banana",
"banana",
],
"Qu3": [
"apple",
"potato",
"sausage",
"cheese",
"cheese",
"potato",
"cheese",
"potato",
"egg",
],
}
)
if test_case_id == 2:
df = pd.DataFrame(
{
"Qu1": [
"sausage",
"banana",
"apple",
"apple",
"apple",
"sausage",
"banana",
"banana",
"banana",
],
"Qu2": [
"apple",
"potato",
"sausage",
"cheese",
"cheese",
"potato",
"cheese",
"potato",
"egg",
],
"Qu3": [
"apple",
"potato",
"cheese",
"banana",
"cheese",
"banana",
"cheese",
"potato",
"egg",
],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I have the following pandas dataframe:
import pandas as pd
from pandas import Series, DataFrame
data = DataFrame({'Qu1': ['apple', 'potato', 'cheese', 'banana', 'cheese', 'banana', 'cheese', 'potato', 'egg'],
'Qu2': ['sausage', 'banana', 'apple', 'apple', 'apple', 'sausage', 'banana', 'banana', 'banana'],
'Qu3': ['apple', 'potato', 'sausage', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'egg']})
I'd like to change the values in columns Qu1, Qu2, and Qu3 according to value_counts(): keep a value when its count is greater than or equal to 2.
For example for Qu1 column
>>> pd.value_counts(data.Qu1) >= 2
cheese True
potato True
banana True
apple False
egg False
I'd like to keep the values cheese, potato, and banana, because each has at least two appearances.
The values apple and egg should be replaced by the value other.
For column Qu2, no changes:
>>> pd.value_counts(data.Qu2) >= 2
banana True
apple True
sausage True
The final result should be as in the attached test_data:
test_data = DataFrame({'Qu1': ['other', 'potato', 'cheese', 'banana', 'cheese', 'banana', 'cheese', 'potato', 'other'],
'Qu2': ['sausage', 'banana', 'apple', 'apple', 'apple', 'sausage', 'banana', 'banana', 'banana'],
'Qu3': ['other', 'potato', 'other', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'other']})
Thanks!
A:
<code>
import pandas as pd
example_df = pd.DataFrame({'Qu1': ['apple', 'potato', 'cheese', 'banana', 'cheese', 'banana', 'cheese', 'potato', 'egg'],
'Qu2': ['sausage', 'banana', 'apple', 'apple', 'apple', 'sausage', 'banana', 'banana', 'banana'],
'Qu3': ['apple', 'potato', 'sausage', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'egg']})
def f(df=example_df):
# return the solution in this function
# result = f(df)
    ### BEGIN SOLUTION
    result = df.where(df.apply(lambda x: x.map(x.value_counts())) >= 2, "other")
    return result
</code>
Metadata:
{
"problem_id": 4,
"library_problem_id": 4,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Surface",
"perturbation_origin_id": 2
}
Code context:
import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.where(df.apply(lambda x: x.map(x.value_counts())) >= 2, "other")
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"Qu1": [
"apple",
"potato",
"cheese",
"banana",
"cheese",
"banana",
"cheese",
"potato",
"egg",
],
"Qu2": [
"sausage",
"banana",
"apple",
"apple",
"apple",
"sausage",
"banana",
"banana",
"banana",
],
"Qu3": [
"apple",
"potato",
"sausage",
"cheese",
"cheese",
"potato",
"cheese",
"potato",
"egg",
],
}
)
if test_case_id == 2:
df = pd.DataFrame(
{
"Qu1": [
"sausage",
"banana",
"apple",
"apple",
"apple",
"sausage",
"banana",
"banana",
"banana",
],
"Qu2": [
"apple",
"potato",
"sausage",
"cheese",
"cheese",
"potato",
"cheese",
"potato",
"egg",
],
"Qu3": [
"apple",
"potato",
"cheese",
"banana",
"cheese",
"banana",
"cheese",
"potato",
"egg",
],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
def f(df):
[insert]
df = test_input
result = f(df)
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I have the following pandas dataframe:
import pandas as pd
from pandas import Series, DataFrame
data = DataFrame({'Qu1': ['apple', 'potato', 'cheese', 'banana', 'cheese', 'banana', 'cheese', 'potato', 'egg'],
'Qu2': ['sausage', 'banana', 'apple', 'apple', 'apple', 'sausage', 'banana', 'banana', 'banana'],
'Qu3': ['apple', 'potato', 'sausage', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'egg']})
I'd like to change the values in column Qu1 according to value_counts(), keeping a value when its count is greater than or equal to 3, and change the values in columns Qu2 and Qu3, keeping a value when its count is greater than or equal to 2.
For example for Qu1 column
>>> pd.value_counts(data.Qu1) >= 3
cheese True
potato False
banana False
apple False
egg False
I'd like to keep the value cheese, because it has at least three appearances.
The values potato, banana, apple, and egg should be replaced by the value other.
For column Qu2, no changes:
>>> pd.value_counts(data.Qu2) >= 2
banana True
apple True
sausage True
The final result should be as in the attached test_data:
test_data = DataFrame({'Qu1': ['other', 'other', 'cheese', 'other', 'cheese', 'other', 'cheese', 'other', 'other'],
'Qu2': ['sausage', 'banana', 'apple', 'apple', 'apple', 'sausage', 'banana', 'banana', 'banana'],
'Qu3': ['other', 'potato', 'other', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'other']})
Thanks!
A:
<code>
import pandas as pd
df = pd.DataFrame({'Qu1': ['apple', 'potato', 'cheese', 'banana', 'cheese', 'banana', 'cheese', 'potato', 'egg'],
'Qu2': ['sausage', 'banana', 'apple', 'apple', 'apple', 'sausage', 'banana', 'banana', 'banana'],
'Qu3': ['apple', 'potato', 'sausage', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'egg']})
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
def g(df):
    for col in df.columns:
        vc = df[col].value_counts()
        if col == 'Qu1':
            df[col] = df[col].apply(lambda x: x if vc[x] >= 3 else 'other')
        else:
            df[col] = df[col].apply(lambda x: x if vc[x] >= 2 else 'other')
    return df
result = g(df.copy())
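</code>
Note: the loop applies a different threshold per column. An equivalent variant without apply (a sketch; the per-column thresholds are taken from the problem statement above):
<code>
thresholds = {'Qu1': 3, 'Qu2': 2, 'Qu3': 2}
for col, t in thresholds.items():
    counts = df[col].map(df[col].value_counts())   # occurrence count of each cell's value
    df[col] = df[col].where(counts >= t, 'other')  # keep values meeting the threshold
</code>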
Metadata:
{
"problem_id": 5,
"library_problem_id": 5,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 2
}
Code context:
import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
for col in df.columns:
vc = df[col].value_counts()
if col == "Qu1":
df[col] = df[col].apply(lambda x: x if vc[x] >= 3 else "other")
else:
df[col] = df[col].apply(lambda x: x if vc[x] >= 2 else "other")
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"Qu1": [
"apple",
"potato",
"cheese",
"banana",
"cheese",
"banana",
"cheese",
"potato",
"egg",
],
"Qu2": [
"sausage",
"banana",
"apple",
"apple",
"apple",
"sausage",
"banana",
"banana",
"banana",
],
"Qu3": [
"apple",
"potato",
"sausage",
"cheese",
"cheese",
"potato",
"cheese",
"potato",
"egg",
],
}
)
if test_case_id == 2:
df = pd.DataFrame(
{
"Qu1": [
"sausage",
"banana",
"apple",
"apple",
"apple",
"sausage",
"banana",
"banana",
"banana",
],
"Qu2": [
"apple",
"potato",
"sausage",
"cheese",
"cheese",
"potato",
"cheese",
"potato",
"egg",
],
"Qu3": [
"apple",
"potato",
"cheese",
"banana",
"cheese",
"banana",
"cheese",
"potato",
"egg",
],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I have the following pandas dataframe:
import pandas as pd
from pandas import Series, DataFrame
data = DataFrame({'Qu1': ['apple', 'potato', 'cheese', 'banana', 'cheese', 'banana', 'cheese', 'potato', 'egg'],
'Qu2': ['sausage', 'banana', 'apple', 'apple', 'apple', 'sausage', 'banana', 'banana', 'banana'],
'Qu3': ['apple', 'potato', 'sausage', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'egg']})
I'd like to change the values in column Qu1 according to value_counts(), keeping a value when its count is greater than or equal to 3, and change the values in columns Qu2 and Qu3, keeping a value when its count is greater than or equal to 2.
For example for Qu1 column
>>> pd.value_counts(data.Qu1) >= 3
cheese True
potato False
banana False
apple False
egg False
I'd like to keep the value cheese because it has at least three appearances.
The values potato, banana, apple, and egg would normally be replaced by the value other.
However, I want to preserve every 'apple': 'apple' is never replaced with 'other', so of apple and egg only 'egg' is replaced.
For column Qu2, no changes:
>>> pd.value_counts(data.Qu2) >= 2
banana True
apple True
sausage True
The final result should be as in the attached test_data:
test_data = DataFrame({'Qu1': ['apple', 'other', 'cheese', 'other', 'cheese', 'other', 'cheese', 'other', 'other'],
'Qu2': ['sausage', 'banana', 'apple', 'apple', 'apple', 'sausage', 'banana', 'banana', 'banana'],
'Qu3': ['apple', 'potato', 'other', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'other']})
Thanks!
A:
<code>
import pandas as pd
df = pd.DataFrame({'Qu1': ['apple', 'potato', 'cheese', 'banana', 'cheese', 'banana', 'cheese', 'potato', 'egg'],
'Qu2': ['sausage', 'banana', 'apple', 'apple', 'apple', 'sausage', 'banana', 'banana', 'banana'],
'Qu3': ['apple', 'potato', 'sausage', 'cheese', 'cheese', 'potato', 'cheese', 'potato', 'egg']})
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
def g(df):
    for col in df.columns:
        vc = df[col].value_counts()
        if col == 'Qu1':
            df[col] = df[col].apply(lambda x: x if vc[x] >= 3 or x == 'apple' else 'other')
        else:
            df[col] = df[col].apply(lambda x: x if vc[x] >= 2 or x == 'apple' else 'other')
    return df
result = g(df.copy())
</code>
Metadata:
{
"problem_id": 6,
"library_problem_id": 6,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 2
}
Code context:
import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
for col in df.columns:
vc = df[col].value_counts()
if col == "Qu1":
df[col] = df[col].apply(
lambda x: x if vc[x] >= 3 or x == "apple" else "other"
)
else:
df[col] = df[col].apply(
lambda x: x if vc[x] >= 2 or x == "apple" else "other"
)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"Qu1": [
"apple",
"potato",
"cheese",
"banana",
"cheese",
"banana",
"cheese",
"potato",
"egg",
],
"Qu2": [
"sausage",
"banana",
"apple",
"apple",
"apple",
"sausage",
"banana",
"banana",
"banana",
],
"Qu3": [
"apple",
"potato",
"sausage",
"cheese",
"cheese",
"potato",
"cheese",
"potato",
"egg",
],
}
)
if test_case_id == 2:
df = pd.DataFrame(
{
"Qu1": [
"sausage",
"banana",
"apple",
"apple",
"apple",
"sausage",
"banana",
"banana",
"banana",
],
"Qu2": [
"apple",
"potato",
"sausage",
"cheese",
"cheese",
"potato",
"cheese",
"potato",
"egg",
],
"Qu3": [
"apple",
"potato",
"cheese",
"banana",
"cheese",
"banana",
"cheese",
"potato",
"egg",
],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I have a dataset:
id url keep_if_dup
1 A.com Yes
2 A.com Yes
3 B.com No
4 B.com No
5 C.com No
I want to remove duplicates, i.e. keep the first occurrence of the "url" field, BUT keep duplicates if the field "keep_if_dup" is Yes.
Expected output:
id url keep_if_dup
1 A.com Yes
2 A.com Yes
3 B.com No
5 C.com No
What I tried:
Dataframe=Dataframe.drop_duplicates(subset='url', keep='first')
which of course does not take the "keep_if_dup" field into account. Output is:
id url keep_if_dup
1 A.com Yes
3 B.com No
5 C.com No
A:
<code>
import pandas as pd
df = pd.DataFrame({'url': ['A.com', 'A.com', 'A.com', 'B.com', 'B.com', 'C.com', 'B.com'],
'keep_if_dup': ['Yes', 'Yes', 'No', 'No', 'No', 'No', 'Yes']})
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
def g(df):
    return df.loc[(df['keep_if_dup'] == 'Yes') | ~df['url'].duplicated()]
result = g(df.copy())
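</code>
Note: Series.duplicated() marks every occurrence after the first as True, so ~df['url'].duplicated() keeps exactly the first occurrence of each url; OR-ing that with the keep_if_dup mask re-admits every row flagged Yes. A sketch (assuming the setup above):
<code>
first_occurrence = ~df['url'].duplicated()  # True only for the first row of each url
keep_anyway = df['keep_if_dup'] == 'Yes'    # rows that must survive regardless
result = df.loc[keep_anyway | first_occurrence]
</code>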
Metadata:
{
"problem_id": 7,
"library_problem_id": 7,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Origin",
"perturbation_origin_id": 7
}
Code context:
import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.loc[(df["keep_if_dup"] == "Yes") | ~df["url"].duplicated()]
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"url": [
"A.com",
"A.com",
"A.com",
"B.com",
"B.com",
"C.com",
"B.com",
],
"keep_if_dup": ["Yes", "Yes", "No", "No", "No", "No", "Yes"],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I have a dataset:
id url drop_if_dup
1 A.com Yes
2 A.com Yes
3 B.com No
4 B.com No
5 C.com No
I want to remove duplicates, i.e. keep the first occurrence of the "url" field, BUT keep duplicates if the field "drop_if_dup" is No.
Expected output:
id url drop_if_dup
1 A.com Yes
3 B.com No
4 B.com No
5 C.com No
What I tried:
Dataframe=Dataframe.drop_duplicates(subset='url', keep='first')
which of course does not take the "drop_if_dup" field into account. Output is:
id url drop_if_dup
1 A.com Yes
3 B.com No
5 C.com No
A:
<code>
import pandas as pd
df = pd.DataFrame({'url': ['A.com', 'A.com', 'A.com', 'B.com', 'B.com', 'C.com', 'B.com'],
'drop_if_dup': ['Yes', 'Yes', 'No', 'No', 'No', 'No', 'Yes']})
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
def g(df):
    return df.loc[(df['drop_if_dup'] == 'No') | ~df['url'].duplicated()]
result = g(df.copy())
</code>
Metadata:
{
"problem_id": 8,
"library_problem_id": 8,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Semantic",
"perturbation_origin_id": 7
}
Code context:
import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.loc[(df["drop_if_dup"] == "No") | ~df["url"].duplicated()]
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"url": [
"A.com",
"A.com",
"A.com",
"B.com",
"B.com",
"C.com",
"B.com",
],
"drop_if_dup": ["Yes", "Yes", "No", "No", "No", "No", "Yes"],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I have a dataset:
id url keep_if_dup
1 A.com Yes
2 A.com Yes
3 B.com No
4 B.com No
5 C.com No
I want to remove duplicates, i.e. keep the last occurrence of the "url" field, BUT keep duplicates if the field "keep_if_dup" is Yes.
Expected output:
id url keep_if_dup
1 A.com Yes
2 A.com Yes
4 B.com No
5 C.com No
What I tried:
Dataframe=Dataframe.drop_duplicates(subset='url', keep='first')
which of course does not take the "keep_if_dup" field into account. Output is:
id url keep_if_dup
1 A.com Yes
3 B.com No
5 C.com No
A:
<code>
import pandas as pd
df = pd.DataFrame({'url': ['A.com', 'A.com', 'A.com', 'B.com', 'B.com', 'C.com', 'B.com'],
'keep_if_dup': ['Yes', 'Yes', 'No', 'No', 'No', 'No', 'Yes']})
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
def g(df):
    return df.loc[(df['keep_if_dup'] == 'Yes') | ~df['url'].duplicated(keep='last')]
result = g(df.copy())
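</code>
Note: duplicated(keep='last') flips the marking: every occurrence except the last is True, so the negation keeps each url's last occurrence instead of its first. Sketch:
<code>
last_occurrence = ~df['url'].duplicated(keep='last')  # True only for each url's last row
result = df.loc[(df['keep_if_dup'] == 'Yes') | last_occurrence]
</code>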
Metadata:
{
"problem_id": 9,
"library_problem_id": 9,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 7
}
Code context:
import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.loc[(df["keep_if_dup"] == "Yes") | ~df["url"].duplicated(keep="last")]
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"url": [
"A.com",
"A.com",
"A.com",
"B.com",
"B.com",
"C.com",
"B.com",
],
"keep_if_dup": ["Yes", "Yes", "No", "No", "No", "No", "Yes"],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I'm looking for a generic way of turning a DataFrame into a nested dictionary.
This is a sample data frame
name v1 v2 v3
0 A A1 A11 1
1 A A2 A12 2
2 B B1 B12 3
3 C C1 C11 4
4 B B2 B21 5
5 A A2 A21 6
The number of columns may differ, and so do the column names.
I'd like to turn it into a nested dictionary like this:
{
'A' : {
'A1' : { 'A11' : 1 }
'A2' : { 'A12' : 2 , 'A21' : 6 }} ,
'B' : {
'B1' : { 'B12' : 3 } } ,
'C' : {
'C1' : { 'C11' : 4}}
}
What is the best way to achieve this?
The closest I got was with the zip function, but I haven't managed to make it work for more than one level (two columns).
A:
<code>
import pandas as pd
df = pd.DataFrame({'name': ['A', 'A', 'B', 'C', 'B', 'A'],
'v1': ['A1', 'A2', 'B1', 'C1', 'B2', 'A2'],
'v2': ['A11', 'A12', 'B12', 'C11', 'B21', 'A21'],
'v3': [1, 2, 3, 4, 5, 6]})
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
def g(df):
    if len(df.columns) == 1:
        if df.values.size == 1:
            return df.values[0][0]
        return df.values.squeeze()
    grouped = df.groupby(df.columns[0])
    d = {k: g(t.iloc[:, 1:]) for k, t in grouped}
    return d
result = g(df.copy())
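</code>
Note: the function recurses column by column: it groups on the leftmost column, strips that column off, and nests the result; a single remaining column becomes the leaf value(s). Tracing it on the sample df gives (note the question's sample output omits the B2 branch):
<code>
# {'A': {'A1': {'A11': 1}, 'A2': {'A12': 2, 'A21': 6}},
#  'B': {'B1': {'B12': 3}, 'B2': {'B21': 5}},
#  'C': {'C1': {'C11': 4}}}
</code>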
Metadata:
{
"problem_id": 10,
"library_problem_id": 10,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Origin",
"perturbation_origin_id": 10
}
Code context:
import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
if len(df.columns) == 1:
if df.values.size == 1:
return df.values[0][0]
return df.values.squeeze()
grouped = df.groupby(df.columns[0])
d = {k: generate_ans(t.iloc[:, 1:]) for k, t in grouped}
return d
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"name": ["A", "A", "B", "C", "B", "A"],
"v1": ["A1", "A2", "B1", "C1", "B2", "A2"],
"v2": ["A11", "A12", "B12", "C11", "B21", "A21"],
"v3": [1, 2, 3, 4, 5, 6],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
assert result == ans
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I have been struggling with removing the time zone info from a column in a pandas dataframe. I have checked the following question, but it does not work for me:
Can I export pandas DataFrame to Excel stripping tzinfo?
I used tz_localize to assign a timezone to a datetime object, because I need to convert to another timezone using tz_convert. This adds a UTC offset of the form "-06:00". I need to get rid of this offset, because it results in an error when I try to export the dataframe to Excel.
Actual output
2015-12-01 00:00:00-06:00
Desired output
2015-12-01 00:00:00
I have tried to get the characters I want using the str() method, but it seems the result of tz_localize is not a string. My solution so far is to export the dataframe to CSV, read the file back, and use the str() method to get the characters I want.
Is there an easier solution?
A:
<code>
import pandas as pd
df = pd.DataFrame({'datetime': ['2015-12-01 00:00:00-06:00', '2015-12-02 00:01:00-06:00', '2015-12-03 00:00:00-06:00']})
df['datetime'] = pd.to_datetime(df['datetime'])
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
df['datetime'] = df['datetime'].dt.tz_localize(None)
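</code>
Note: dt.tz_localize(None) drops the timezone while keeping the local wall-clock time, whereas dt.tz_convert(None) would convert to UTC first. A sketch of the difference:
<code>
s = pd.Series(pd.to_datetime(['2015-12-01 00:00:00-06:00']))
s.dt.tz_localize(None)  # 2015-12-01 00:00:00 (wall time preserved)
s.dt.tz_convert(None)   # 2015-12-01 06:00:00 (converted to UTC first)
</code>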
Metadata:
{
"problem_id": 11,
"library_problem_id": 11,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Origin",
"perturbation_origin_id": 11
}
Code context:
import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["datetime"] = df["datetime"].dt.tz_localize(None)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"datetime": [
"2015-12-01 00:00:00-06:00",
"2015-12-02 00:01:00-06:00",
"2015-12-03 00:00:00-06:00",
]
}
)
df["datetime"] = pd.to_datetime(df["datetime"])
elif test_case_id == 2:
df = pd.DataFrame(
{
"datetime": [
"2016-12-02 00:01:00-06:00",
"2016-12-01 00:00:00-06:00",
"2016-12-03 00:00:00-06:00",
]
}
)
df["datetime"] = pd.to_datetime(df["datetime"])
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
def test_string(solution: str):
tokens = []
for token in tokenize.tokenize(io.BytesIO(solution.encode("utf-8")).readline):
tokens.append(token.string)
assert "tz_localize" in tokens
Problem:
I have been struggling with removing the time zone info from a column in a pandas dataframe. I have checked the following question, but it does not work for me:
Can I export pandas DataFrame to Excel stripping tzinfo?
I used tz_localize to assign a timezone to a datetime object, because I need to convert to another timezone using tz_convert. This adds a UTC offset of the form "-06:00". I need to get rid of this offset, because it results in an error when I try to export the dataframe to Excel.
Actual output
2015-12-01 00:00:00-06:00
Desired output
2015-12-01 00:00:00
I have tried to get the characters I want using the str() method, but it seems the result of tz_localize is not a string. My solution so far is to export the dataframe to CSV, read the file back, and use the str() method to get the characters I want.
Is there an easier solution?
A:
<code>
import pandas as pd
example_df = pd.DataFrame({'datetime': ['2015-12-01 00:00:00-06:00', '2015-12-02 00:01:00-06:00', '2015-12-03 00:00:00-06:00']})
example_df['datetime'] = pd.to_datetime(example_df['datetime'])
def f(df=example_df):
# return the solution in this function
# result = f(df)
    ### BEGIN SOLUTION
    df['datetime'] = df['datetime'].dt.tz_localize(None)
    result = df
    return result
</code>
Metadata:
{
"problem_id": 12,
"library_problem_id": 12,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Surface",
"perturbation_origin_id": 11
}
Code context:
import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["datetime"] = df["datetime"].dt.tz_localize(None)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"datetime": [
"2015-12-01 00:00:00-06:00",
"2015-12-02 00:01:00-06:00",
"2015-12-03 00:00:00-06:00",
]
}
)
df["datetime"] = pd.to_datetime(df["datetime"])
elif test_case_id == 2:
df = pd.DataFrame(
{
"datetime": [
"2016-12-02 00:01:00-06:00",
"2016-12-01 00:00:00-06:00",
"2016-12-03 00:00:00-06:00",
]
}
)
df["datetime"] = pd.to_datetime(df["datetime"])
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
def f(df):
[insert]
df = test_input
result = f(df)
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
def test_string(solution: str):
tokens = []
for token in tokenize.tokenize(io.BytesIO(solution.encode("utf-8")).readline):
tokens.append(token.string)
assert "tz_localize" in tokens
Problem:
I have been struggling with removing the time zone info from a column in a pandas dataframe. I have checked the following question, but it does not work for me:
Can I export pandas DataFrame to Excel stripping tzinfo?
I used tz_localize to assign a timezone to a datetime object, because I need to convert to another timezone using tz_convert. This adds a UTC offset of the form "-06:00". I need to get rid of this offset, because it results in an error when I try to export the dataframe to Excel.
Actual output
2015-12-01 00:00:00-06:00
Desired output
01-Dec-2015 00:00:00
I have tried to get the characters I want using the str() method, but it seems the result of tz_localize is not a string. My solution so far is to export the dataframe to CSV, read the file back, and use the str() method to get the characters I want.
Then I want 'datetime' sorted from smallest to largest, and formatted like this: 19-May-2016 13:50:00.
Is there an easier solution?
A:
<code>
import pandas as pd
df = pd.DataFrame({'datetime': ['2015-12-01 00:00:00-06:00', '2015-12-02 00:01:00-06:00', '2015-12-03 00:00:00-06:00']})
df['datetime'] = pd.to_datetime(df['datetime'])
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
df['datetime'] = df['datetime'].dt.tz_localize(None)
df.sort_values(by='datetime', inplace=True)
df['datetime'] = df['datetime'].dt.strftime('%d-%b-%Y %T')
</code>
Metadata:
{
"problem_id": 13,
"library_problem_id": 13,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 11
}
Code context:
import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["datetime"] = df["datetime"].dt.tz_localize(None)
df.sort_values(by="datetime", inplace=True)
df["datetime"] = df["datetime"].dt.strftime("%d-%b-%Y %T")
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"datetime": [
"2015-12-01 00:00:00-06:00",
"2015-12-02 00:01:00-06:00",
"2015-12-03 00:00:00-06:00",
]
}
)
df["datetime"] = pd.to_datetime(df["datetime"])
elif test_case_id == 2:
df = pd.DataFrame(
{
"datetime": [
"2016-12-02 00:01:00-06:00",
"2016-12-01 00:00:00-06:00",
"2016-12-03 00:00:00-06:00",
]
}
)
df["datetime"] = pd.to_datetime(df["datetime"])
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
def test_string(solution: str):
tokens = []
for token in tokenize.tokenize(io.BytesIO(solution.encode("utf-8")).readline):
tokens.append(token.string)
assert "tz_localize" in tokens
Problem:
I have been struggling with removing the time zone info from a column in a pandas dataframe. I have checked the following question, but it does not work for me:
Can I export pandas DataFrame to Excel stripping tzinfo?
I used tz_localize to assign a timezone to a datetime object, because I need to convert to another timezone using tz_convert. This adds a UTC offset of the form "-06:00". I need to get rid of this offset, because it results in an error when I try to export the dataframe to Excel.
Actual output
2015-12-01 00:00:00-06:00
Desired output
2015-12-01 00:00:00
I have tried to get the characters I want using the str() method, but it seems the result of tz_localize is not a string. My solution so far is to export the dataframe to CSV, read the file back, and use the str() method to get the characters I want.
Then I want 'datetime' sorted from smallest to largest.
Is there an easier solution?
A:
<code>
import pandas as pd
df = pd.DataFrame({'datetime': ['2015-12-01 00:00:00-06:00', '2015-12-02 00:01:00-06:00', '2015-12-03 00:00:00-06:00']})
df['datetime'] = pd.to_datetime(df['datetime'])
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
def g(df):
    df['datetime'] = df['datetime'].dt.tz_localize(None)
    df.sort_values(by='datetime', inplace=True)
    return df
df = g(df.copy())
</code>
Metadata:
{
"problem_id": 14,
"library_problem_id": 14,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 11
}
Code context:
import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["datetime"] = df["datetime"].dt.tz_localize(None)
df.sort_values(by="datetime", inplace=True)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"datetime": [
"2015-12-01 00:00:00-06:00",
"2015-12-02 00:01:00-06:00",
"2015-12-03 00:00:00-06:00",
]
}
)
df["datetime"] = pd.to_datetime(df["datetime"])
elif test_case_id == 2:
df = pd.DataFrame(
{
"datetime": [
"2016-12-02 00:01:00-06:00",
"2016-12-01 00:00:00-06:00",
"2016-12-03 00:00:00-06:00",
]
}
)
df["datetime"] = pd.to_datetime(df["datetime"])
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
def test_string(solution: str):
tokens = []
for token in tokenize.tokenize(io.BytesIO(solution.encode("utf-8")).readline):
tokens.append(token.string)
assert "tz_localize" in tokens
Problem:
I have a data set like below:
name status number message
matt active 12345 [job: , money: none, wife: none]
james active 23456 [group: band, wife: yes, money: 10000]
adam inactive 34567 [job: none, money: none, wife: , kids: one, group: jail]
How can I extract the key-value pairs and turn them into a dataframe, expanded all the way out?
Expected output:
name status number job money wife group kids
matt active 12345 none none none none none
james active 23456 none 10000 none band none
adam inactive 34567 none none none none one
Notice: 'none' is a string
The message contains multiple different key types.
Any help would be greatly appreciated.
A:
<code>
import pandas as pd
df = pd.DataFrame({'name': ['matt', 'james', 'adam'],
'status': ['active', 'active', 'inactive'],
'number': [12345, 23456, 34567],
'message': ['[job: , money: none, wife: none]',
'[group: band, wife: yes, money: 10000]',
'[job: none, money: none, wife: , kids: one, group: jail]']})
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
import yaml
def g(df):
    # raw strings avoid invalid-escape warnings for the regex brackets
    df.message = df.message.replace([r'\[', r'\]'], ['{', '}'], regex=True).apply(yaml.safe_load)
    df1 = pd.DataFrame(df.pop('message').values.tolist(), index=df.index)
    result = pd.concat([df, df1], axis=1)
    result = result.replace('', 'none')
    result = result.replace(np.nan, 'none')
    return result
result = g(df.copy())
</code>
Metadata:
{
"problem_id": 15,
"library_problem_id": 15,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Origin",
"perturbation_origin_id": 15
}
Code context:
import pandas as pd
import numpy as np
import yaml
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df.message = df.message.replace([r"\[", r"\]"], ["{", "}"], regex=True).apply(
yaml.safe_load
)
df1 = pd.DataFrame(df.pop("message").values.tolist(), index=df.index)
result = pd.concat([df, df1], axis=1)
result = result.replace("", "none")
result = result.replace(np.nan, "none")
return result
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"name": ["matt", "james", "adam"],
"status": ["active", "active", "inactive"],
"number": [12345, 23456, 34567],
"message": [
"[job: , money: none, wife: none]",
"[group: band, wife: yes, money: 10000]",
"[job: none, money: none, wife: , kids: one, group: jail]",
],
}
)
if test_case_id == 2:
df = pd.DataFrame(
{
"name": ["matt", "james", "adam"],
"status": ["active", "active", "inactive"],
"number": [12345, 23456, 34567],
"message": [
"[job: , money: 114514, wife: none, kids: one, group: jail]",
"[group: band, wife: yes, money: 10000]",
"[job: none, money: none, wife: , kids: one, group: jail]",
],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I have a dataframe that looks like this:
product score
0 1179160 0.424654
1 1066490 0.424509
2 1148126 0.422207
3 1069104 0.420455
4 1069105 0.414603
.. ... ...
491 1160330 0.168784
492 1069098 0.168749
493 1077784 0.168738
494 1193369 0.168703
495 1179741 0.168684
What I'm trying to achieve is to multiply certain score values, corresponding to specific products, by a constant.
I have the products targeted by this multiplication in a list like this: [1069104, 1069105] (this is just a simplified example; in reality it would be more than two products), and my goal is to obtain this:
Multiply scores corresponding to products 1069104 and 1069105 by 10:
product score
0 1179160 0.424654
1 1066490 0.424509
2 1148126 0.422207
3 1069104 4.204550
4 1069105 4.146030
.. ... ...
491 1160330 0.168784
492 1069098 0.168749
493 1077784 0.168738
494 1193369 0.168703
495 1179741 0.168684
I know DataFrame.multiply exists, but judging from the examples it works on full columns, and I just want to change those specific values.
A:
<code>
import pandas as pd
df = pd.DataFrame({'product': [1179160, 1066490, 1148126, 1069104, 1069105, 1160330, 1069098, 1077784, 1193369, 1179741],
'score': [0.424654, 0.424509, 0.422207, 0.420455, 0.414603, 0.168784, 0.168749, 0.168738, 0.168703, 0.168684]})
products = [1066490, 1077784]
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
df.loc[df['product'].isin(products), 'score'] *= 10
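</code>
Note: isin builds a boolean mask over the product column, and .loc with that mask plus the 'score' label selects exactly the cells to scale, so *= updates them in place. An equivalent explicit sketch:
<code>
mask = df['product'].isin(products)                 # True for the targeted rows
df.loc[mask, 'score'] = df.loc[mask, 'score'] * 10  # same effect as *= above
</code>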
Metadata:
{
"problem_id": 16,
"library_problem_id": 16,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Origin",
"perturbation_origin_id": 16
}
Code context:
import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, prod_list = data
df.loc[df["product"].isin(prod_list), "score"] *= 10
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"product": [
1179160,
1066490,
1148126,
1069104,
1069105,
1160330,
1069098,
1077784,
1193369,
1179741,
],
"score": [
0.424654,
0.424509,
0.422207,
0.420455,
0.414603,
0.168784,
0.168749,
0.168738,
0.168703,
0.168684,
],
}
)
products = [1066490, 1077784]
if test_case_id == 2:
df = pd.DataFrame(
{
"product": [
1179160,
1066490,
1148126,
1069104,
1069105,
1160330,
1069098,
1077784,
1193369,
1179741,
],
"score": [
0.424654,
0.424509,
0.422207,
0.420455,
0.414603,
0.168784,
0.168749,
0.168738,
0.168703,
0.168684,
],
}
)
products = [1179741, 1179160]
return df, products
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df, products = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I have a dataframe that looks like this:
product score
0 1179160 0.424654
1 1066490 0.424509
2 1148126 0.422207
3 1069104 0.420455
4 1069105 0.414603
.. ... ...
491 1160330 0.168784
492 1069098 0.168749
493 1077784 0.168738
494 1193369 0.168703
495 1179741 0.168684
What I'm trying to achieve is to multiply certain score values, corresponding to specific products, by a constant.
I have a list like this: [1069104, 1069105] (this is just a simplified example; in reality it would be more than two products), and my goal is to obtain this:
Multiply the scores of products not in the list by 10:
product score
0 1179160 4.24654
1 1066490 4.24509
2 1148126 4.22207
3 1069104 0.420455
4 1069105 0.414603
.. ... ...
491 1160330 1.68784
492 1069098 1.68749
493 1077784 1.68738
494 1193369 1.68703
495 1179741 1.68684
I know DataFrame.multiply exists, but judging from the examples it works on full columns, and I just want to change those specific values.
A:
<code>
import pandas as pd
df = pd.DataFrame({'product': [1179160, 1066490, 1148126, 1069104, 1069105, 1160330, 1069098, 1077784, 1193369, 1179741],
'score': [0.424654, 0.424509, 0.422207, 0.420455, 0.414603, 0.168784, 0.168749, 0.168738, 0.168703, 0.168684]})
products = [1066490, 1077784]
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
df.loc[~df['product'].isin(products), 'score'] *= 10
</code>
Metadata:
{
"problem_id": 17,
"library_problem_id": 17,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Semantic",
"perturbation_origin_id": 16
}
Code context:
import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, prod_list = data
df.loc[~df["product"].isin(prod_list), "score"] *= 10
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"product": [
1179160,
1066490,
1148126,
1069104,
1069105,
1160330,
1069098,
1077784,
1193369,
1179741,
],
"score": [
0.424654,
0.424509,
0.422207,
0.420455,
0.414603,
0.168784,
0.168749,
0.168738,
0.168703,
0.168684,
],
}
)
products = [1066490, 1077784]
if test_case_id == 2:
df = pd.DataFrame(
{
"product": [
1179160,
1066490,
1148126,
1069104,
1069105,
1160330,
1069098,
1077784,
1193369,
1179741,
],
"score": [
0.424654,
0.424509,
0.422207,
0.420455,
0.414603,
0.168784,
0.168749,
0.168738,
0.168703,
0.168684,
],
}
)
products = [1179741, 1179160]
return df, products
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df, products = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I have a dataframe that looks like this:
product score
0 1179160 0.424654
1 1066490 0.424509
2 1148126 0.422207
3 1069104 0.420455
4 1069105 0.414603
.. ... ...
491 1160330 0.168784
492 1069098 0.168749
493 1077784 0.168738
494 1193369 0.168703
495 1179741 0.168684
What I'm trying to achieve is to multiply certain score values, corresponding to specific products, by a constant.
I have the product ranges targeted by this multiplication in a list like this: [[1069104, 1069105], [1179159, 1179161]] (this is just a simplified example; in reality there would be more than two ranges), and my goal is to obtain this:
Multiply the scores of products that fall between 1069104 and 1069105, or between 1179159 and 1179161, by 10:
product score
0 1179160 4.24654
1 1066490 0.424509
2 1148126 0.422207
3 1069104 4.204550
4 1069105 4.146030
.. ... ...
491 1160330 0.168784
492 1069098 0.168749
493 1077784 0.168738
494 1193369 0.168703
495 1179741 0.168684
I know DataFrame.multiply exists, but judging from the examples it works on full columns, and I just want to change those specific values.
A:
<code>
import pandas as pd
df = pd.DataFrame({'product': [1179160, 1066490, 1148126, 1069104, 1069105, 1160330, 1069098, 1077784, 1193369, 1179741],
'score': [0.424654, 0.424509, 0.422207, 0.420455, 0.414603, 0.168784, 0.168749, 0.168738, 0.168703, 0.168684]})
products = [[1069104, 1069105], [1066489, 1066491]]
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
for product in products:
    df.loc[(df['product'] >= product[0]) & (df['product'] <= product[1]), 'score'] *= 10
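</code>
Note: the loop can be folded into one combined mask, which touches the frame only once. A sketch (assuming products is a list of [low, high] pairs, as in the setup above):
<code>
import numpy as np
# OR together one between-mask per range, then scale all matches at once
mask = np.logical_or.reduce([df['product'].between(lo, hi) for lo, hi in products])
df.loc[mask, 'score'] *= 10
</code>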
Metadata:
{
"problem_id": 18,
"library_problem_id": 18,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 16
}
Code context:
import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, prod_list = data
for product in prod_list:
df.loc[
(df["product"] >= product[0]) & (df["product"] <= product[1]), "score"
] *= 10
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"product": [
1179160,
1066490,
1148126,
1069104,
1069105,
1160330,
1069098,
1077784,
1193369,
1179741,
],
"score": [
0.424654,
0.424509,
0.422207,
0.420455,
0.414603,
0.168784,
0.168749,
0.168738,
0.168703,
0.168684,
],
}
)
products = [[1069104, 1069105], [1066489, 1066491]]
if test_case_id == 2:
df = pd.DataFrame(
{
"product": [
1179160,
1066490,
1148126,
1069104,
1069105,
1160330,
1069098,
1077784,
1193369,
1179741,
],
"score": [
0.424654,
0.424509,
0.422207,
0.420455,
0.414603,
0.168784,
0.168749,
0.168738,
0.168703,
0.168684,
],
}
)
products = [
[1069104, 1069105],
]
return df, products
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df, products = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
Problem:
I have a dataframe that looks like this:
product score
0 1179160 0.424654
1 1066490 0.424509
2 1148126 0.422207
3 1069104 0.420455
4 1069105 0.414603
.. ... ...
491 1160330 0.168784
492 1069098 0.168749
493 1077784 0.168738
494 1193369 0.168703
495 1179741 0.168684
What I'm trying to achieve is to min-max normalize certain score values corresponding to specific products.
I have a list like this: [1069104, 1069105] (this is just a simplified example; in reality it would be more than two products), and my goal is to obtain this:
Min-max normalize the scores corresponding to products 1069104 and 1069105:
product score
0 1179160 0.424654
1 1066490 0.424509
2 1148126 0.422207
3 1069104 1
4 1069105 0
.. ... ...
491 1160330 0.168784
492 1069098 0.168749
493 1077784 0.168738
494 1193369 0.168703
495 1179741 0.168684
I know that DataFrame.multiply exists, but from the examples it works on full columns, and I just want to change those specific values.
A:
<code>
import pandas as pd
df = pd.DataFrame({'product': [1179160, 1066490, 1148126, 1069104, 1069105, 1160330, 1069098, 1077784, 1193369, 1179741],
'score': [0.424654, 0.424509, 0.422207, 0.420455, 0.414603, 0.168784, 0.168749, 0.168738, 0.168703, 0.168684]})
products = [1066490, 1077784, 1179741]
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
| Max = df.loc[df['product'].isin(products), 'score'].max()
Min = df.loc[df['product'].isin(products), 'score'].min()
df.loc[df['product'].isin(products), 'score'] = (df.loc[df['product'].isin(products), 'score'] - Min) / (Max - Min)
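# (score - Min) / (Max - Min) rescales only the selected products' scores to the [0, 1] range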
| {
"problem_id": 19,
"library_problem_id": 19,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 16
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, prod_list = data
Max = df.loc[df["product"].isin(prod_list), "score"].max()
Min = df.loc[df["product"].isin(prod_list), "score"].min()
df.loc[df["product"].isin(prod_list), "score"] = (
df.loc[df["product"].isin(prod_list), "score"] - Min
) / (Max - Min)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"product": [
1179160,
1066490,
1148126,
1069104,
1069105,
1160330,
1069098,
1077784,
1193369,
1179741,
],
"score": [
0.424654,
0.424509,
0.422207,
0.420455,
0.414603,
0.168784,
0.168749,
0.168738,
0.168703,
0.168684,
],
}
)
products = [1066490, 1077784, 1179741]
return df, products
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df, products = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
Given a pandas DataFrame, how does one convert several binary columns (where 1 denotes the value exists, 0 denotes it doesn't) into a single categorical column?
Another way to think of this is how to perform the "reverse pd.get_dummies()"?
Here is an example of converting a categorical column into several binary columns:
import pandas as pd
s = pd.Series(list('ABCDAB'))
df = pd.get_dummies(s)
df
A B C D
0 1 0 0 0
1 0 1 0 0
2 0 0 1 0
3 0 0 0 1
4 1 0 0 0
5 0 1 0 0
What I would like to accomplish is given a dataframe
df1
A B C D
0 1 0 0 0
1 0 1 0 0
2 0 0 1 0
3 0 0 0 1
4 1 0 0 0
5 0 1 0 0
how do I convert it into
df1
A B C D category
0 1 0 0 0 A
1 0 1 0 0 B
2 0 0 1 0 C
3 0 0 0 1 D
4 1 0 0 0 A
5 0 1 0 0 B
A:
<code>
import pandas as pd
df = pd.DataFrame({'A': [1, 0, 0, 0, 1, 0],
'B': [0, 1, 0, 0, 0, 1],
'C': [0, 0, 1, 0, 0, 0],
'D': [0, 0, 0, 1, 0, 0]})
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
| df["category"] = df.idxmax(axis=1)
| {
"problem_id": 20,
"library_problem_id": 20,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Origin",
"perturbation_origin_id": 20
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["category"] = df.idxmax(axis=1)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"A": [1, 0, 0, 0, 1, 0],
"B": [0, 1, 0, 0, 0, 1],
"C": [0, 0, 1, 0, 0, 0],
"D": [0, 0, 0, 1, 0, 0],
}
)
if test_case_id == 2:
df = pd.DataFrame(
{
"A": [0, 0, 0, 1, 0, 0],
"B": [0, 0, 1, 0, 0, 0],
"C": [0, 1, 0, 0, 0, 1],
"D": [1, 0, 0, 0, 1, 0],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
Given a pandas DataFrame, how does one convert several binary columns (where 0 denotes the value exists, 1 denotes it doesn't) into a single categorical column?
Another way to think of this is how to perform the "reverse pd.get_dummies()"?
What I would like to accomplish is given a dataframe
df1
A B C D
0 0 1 1 1
1 1 0 1 1
2 1 1 0 1
3 1 1 1 0
4 0 1 1 1
5 1 0 1 1
how do I convert it into
df1
A B C D category
0 0 1 1 1 A
1 1 0 1 1 B
2 1 1 0 1 C
3 1 1 1 0 D
4 0 1 1 1 A
5 1 0 1 1 B
A:
<code>
import pandas as pd
df = pd.DataFrame({'A': [0, 1, 1, 1, 0, 1],
'B': [1, 0, 1, 1, 1, 0],
'C': [1, 1, 0, 1, 1, 1],
'D': [1, 1, 1, 0, 1, 1]})
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
| df["category"] = df.idxmin(axis=1)
| {
"problem_id": 21,
"library_problem_id": 21,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Semantic",
"perturbation_origin_id": 20
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["category"] = df.idxmin(axis=1)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"A": [0, 1, 1, 1, 0, 1],
"B": [1, 0, 1, 1, 1, 0],
"C": [1, 1, 0, 1, 1, 1],
"D": [1, 1, 1, 0, 1, 1],
}
)
if test_case_id == 2:
df = pd.DataFrame(
{
"A": [1, 1, 1, 0, 1, 1],
"B": [1, 1, 0, 1, 1, 1],
"C": [1, 0, 1, 1, 1, 0],
"D": [0, 1, 1, 1, 0, 1],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
Given a pandas DataFrame, how does one convert several binary columns (where 1 denotes the value exists, 0 denotes it doesn't) into a single categorical column of lists?
What I would like to accomplish is given a dataframe
df1
A B C D
0 1 0 1 0
1 0 1 1 0
2 0 0 1 0
3 0 0 0 1
4 1 1 1 1
5 0 1 0 0
how do I convert it into
df1
A B C D category
0 1 0 1 0 [A, C]
1 0 1 1 0 [B, C]
2 0 0 1 0 [C]
3 0 0 0 1 [D]
4 1 1 1 1 [A, B, C, D]
5 0 1 0 0 [B]
A:
<code>
import pandas as pd
df = pd.DataFrame({'A': [1, 0, 0, 0, 1, 0],
'B': [0, 1, 0, 0, 1, 1],
'C': [1, 1, 1, 0, 1, 0],
'D': [0, 0, 0, 1, 1, 0]})
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
| categories = []
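# collect, for every row, the labels of the columns whose value is 1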
for i in range(len(df)):
l = []
for col in df.columns:
if df[col].iloc[i] == 1:
l.append(col)
categories.append(l)
df["category"] = categories
| {
"problem_id": 22,
"library_problem_id": 22,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 20
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
categories = []
for i in range(len(df)):
l = []
for col in df.columns:
if df[col].iloc[i] == 1:
l.append(col)
categories.append(l)
df["category"] = categories
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"A": [1, 0, 0, 0, 1, 0],
"B": [0, 1, 0, 0, 1, 1],
"C": [1, 1, 1, 0, 1, 0],
"D": [0, 0, 0, 1, 1, 0],
}
)
if test_case_id == 2:
df = pd.DataFrame(
{
"A": [0, 1, 1, 1, 0, 0],
"B": [1, 0, 1, 1, 0, 1],
"C": [0, 0, 0, 1, 1, 0],
"D": [1, 1, 1, 0, 1, 0],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
I have the following DF
Date
0 2018-01-01
1 2018-02-08
2 2018-02-08
3 2018-02-08
4 2018-02-08
I want to extract the month name and year in a simple way in the following format:
Date
0 Jan-2018
1 Feb-2018
2 Feb-2018
3 Feb-2018
4 Feb-2018
I have used the df.Date.dt.to_period("M") which returns "2018-01" format.
A:
<code>
import pandas as pd
df = pd.DataFrame({'Date':['2019-01-01','2019-02-08','2019-02-08', '2019-03-08']})
df['Date'] = pd.to_datetime(df['Date'])
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
| df['Date'] = df['Date'].dt.strftime('%b-%Y')
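# strftime codes: %b is the abbreviated month name (Jan), %Y the four-digit year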
| {
"problem_id": 23,
"library_problem_id": 23,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Origin",
"perturbation_origin_id": 23
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["Date"] = df["Date"].dt.strftime("%b-%Y")
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{"Date": ["2019-01-01", "2019-02-08", "2019-02-08", "2019-03-08"]}
)
df["Date"] = pd.to_datetime(df["Date"])
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
I have the following DF
Date
0 2018-01-01
1 2018-02-08
2 2018-02-08
3 2018-02-08
4 2018-02-08
I want to extract the day, month name and year in a simple way in the following format:
Date
0 01-Jan-2018
1 08-Feb-2018
2 08-Feb-2018
3 08-Feb-2018
4 08-Feb-2018
I have used the df.Date.dt.to_period("M") which returns "2018-01" format.
A:
<code>
import pandas as pd
df = pd.DataFrame({'Date':['2019-01-01','2019-02-08','2019-02-08', '2019-03-08']})
df['Date'] = pd.to_datetime(df['Date'])
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
| df['Date'] = df['Date'].dt.strftime('%d-%b-%Y')
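# %d prepends the zero-padded day of the month to the abbreviated month and year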
| {
"problem_id": 24,
"library_problem_id": 24,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Semantic",
"perturbation_origin_id": 23
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["Date"] = df["Date"].dt.strftime("%d-%b-%Y")
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{"Date": ["2019-01-01", "2019-02-08", "2019-02-08", "2019-03-08"]}
)
df["Date"] = pd.to_datetime(df["Date"])
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
I have the following DF
Date
0 2018-01-01
1 2018-02-08
2 2018-02-08
3 2018-02-08
4 2018-02-08
I have another list of two dates:
[2017-08-17, 2018-01-31]
For data between 2017-08-17 and 2018-01-31, I want to extract the day, month name and year, together with the weekday, in a simple way in the following format:
Date
0 01-Jan-2018 Tuesday
I have used the df.Date.dt.to_period("M") which returns "2018-01" format.
A:
<code>
import pandas as pd
df = pd.DataFrame({'Date':['2019-01-01','2019-02-08','2019-02-08', '2019-03-08']})
df['Date'] = pd.to_datetime(df['Date'])
List = ['2019-01-17', '2019-02-20']
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
| df = df[df['Date'] >= List[0]]
df = df[df['Date'] <= List[1]]
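# with the rows restricted to the date range, format as day-month-year plus the full weekday name (%A)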
df['Date'] = df['Date'].dt.strftime('%d-%b-%Y %A') | {
"problem_id": 25,
"library_problem_id": 25,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 23
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, List = data
df = df[df["Date"] >= List[0]]
df = df[df["Date"] <= List[1]]
df["Date"] = df["Date"].dt.strftime("%d-%b-%Y %A")
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{"Date": ["2019-01-01", "2019-02-08", "2019-02-08", "2019-03-08"]}
)
df["Date"] = pd.to_datetime(df["Date"])
List = ["2019-01-17", "2019-02-20"]
return df, List
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df,List = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
So I have a dataframe that looks like this:
#1 #2
1980-01-01 11.6985 126.0
1980-01-02 43.6431 134.0
1980-01-03 54.9089 130.0
1980-01-04 63.1225 126.0
1980-01-05 72.4399 120.0
What I want to do is to shift the first row of the first column (11.6985) down 1 row, and then the last row of the first column (72.4399) would be shifted to the first row, first column, like so:
#1 #2
1980-01-01 72.4399 126.0
1980-01-02 11.6985 134.0
1980-01-03 43.6431 130.0
1980-01-04 54.9089 126.0
1980-01-05 63.1225 120.0
The idea is that I want to use these dataframes to find an R^2 value for every shift, so I need to use all the data or it might not work. I have tried to use pandas.DataFrame.shift() (https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shift.html):
print(data)
#Output
1980-01-01 11.6985 126.0
1980-01-02 43.6431 134.0
1980-01-03 54.9089 130.0
1980-01-04 63.1225 126.0
1980-01-05 72.4399 120.0
print(data.shift(1,axis = 0))
1980-01-01 NaN NaN
1980-01-02 11.6985 126.0
1980-01-03 43.6431 134.0
1980-01-04 54.9089 130.0
1980-01-05 63.1225 126.0
So it just shifts both columns down and gets rid of the last row of data, which is not what I want.
Any advice?
A:
<code>
import pandas as pd
df = pd.DataFrame({'#1': [11.6985, 43.6431, 54.9089, 63.1225, 72.4399],
'#2': [126.0, 134.0, 130.0, 126.0, 120.0]},
index=['1980-01-01', '1980-01-02', '1980-01-03', '1980-01-04', '1980-01-05'])
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
| import numpy as np
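# np.roll performs a circular shift: the last value of '#1' wraps around to the front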
df['#1'] = np.roll(df['#1'], shift=1) | {
"problem_id": 26,
"library_problem_id": 26,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Origin",
"perturbation_origin_id": 26
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["#1"] = np.roll(df["#1"], shift=1)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"#1": [11.6985, 43.6431, 54.9089, 63.1225, 72.4399],
"#2": [126.0, 134.0, 130.0, 126.0, 120.0],
},
index=[
"1980-01-01",
"1980-01-02",
"1980-01-03",
"1980-01-04",
"1980-01-05",
],
)
elif test_case_id == 2:
df = pd.DataFrame(
{"#1": [45, 51, 14, 11, 14], "#2": [126.0, 134.0, 130.0, 126.0, 120.0]},
index=[
"1980-01-01",
"1980-01-02",
"1980-01-03",
"1980-01-04",
"1980-01-05",
],
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
So I have a dataframe that looks like this:
#1 #2
1980-01-01 11.6985 126.0
1980-01-02 43.6431 134.0
1980-01-03 54.9089 130.0
1980-01-04 63.1225 126.0
1980-01-05 72.4399 120.0
What I want to do is to shift the last row of the first column (72.4399) up 1 row, and then the first row of the first column (11.6985) would be shifted to the last row, first column, like so:
#1 #2
1980-01-01 43.6431 126.0
1980-01-02 54.9089 134.0
1980-01-03 63.1225 130.0
1980-01-04 72.4399 126.0
1980-01-05 11.6985 120.0
The idea is that I want to use these dataframes to find an R^2 value for every shift, so I need to use all the data or it might not work. I have tried to use pandas.DataFrame.shift() (https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shift.html):
print(data)
#Output
1980-01-01 11.6985 126.0
1980-01-02 43.6431 134.0
1980-01-03 54.9089 130.0
1980-01-04 63.1225 126.0
1980-01-05 72.4399 120.0
print(data.shift(1,axis = 0))
1980-01-01 NaN NaN
1980-01-02 11.6985 126.0
1980-01-03 43.6431 134.0
1980-01-04 54.9089 130.0
1980-01-05 63.1225 126.0
So it just shifts both columns down and gets rid of the last row of data, which is not what I want.
Any advice?
A:
<code>
import pandas as pd
df = pd.DataFrame({'#1': [11.6985, 43.6431, 54.9089, 63.1225, 72.4399],
'#2': [126.0, 134.0, 130.0, 126.0, 120.0]},
index=['1980-01-01', '1980-01-02', '1980-01-03', '1980-01-04', '1980-01-05'])
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
| import numpy as np
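# shift=-1 rolls upward: the first value of '#1' wraps around to the end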
df['#1'] = np.roll(df['#1'], shift=-1) | {
"problem_id": 27,
"library_problem_id": 27,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Semantic",
"perturbation_origin_id": 26
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["#1"] = np.roll(df["#1"], shift=-1)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"#1": [11.6985, 43.6431, 54.9089, 63.1225, 72.4399],
"#2": [126.0, 134.0, 130.0, 126.0, 120.0],
},
index=[
"1980-01-01",
"1980-01-02",
"1980-01-03",
"1980-01-04",
"1980-01-05",
],
)
elif test_case_id == 2:
df = pd.DataFrame(
{"#1": [45, 51, 14, 11, 14], "#2": [126.0, 134.0, 130.0, 126.0, 120.0]},
index=[
"1980-01-01",
"1980-01-02",
"1980-01-03",
"1980-01-04",
"1980-01-05",
],
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
So I have a dataframe that looks like this:
#1 #2
1980-01-01 11.6985 126.0
1980-01-02 43.6431 134.0
1980-01-03 54.9089 130.0
1980-01-04 63.1225 126.0
1980-01-05 72.4399 120.0
What I want to do is to shift the first row of the first column (11.6985) down 1 row, so that the last row of the first column (72.4399) is shifted to the first row, first column.
Then shift the last row of the second column up 1 row, so that the first row of the second column is shifted to the last row, second column, like so:
#1 #2
1980-01-01 72.4399 134.0
1980-01-02 11.6985 130.0
1980-01-03 43.6431 126.0
1980-01-04 54.9089 120.0
1980-01-05 63.1225 126.0
The idea is that I want to use these dataframes to find an R^2 value for every shift, so I need to use all the data or it might not work. I have tried to use pandas.DataFrame.shift() (https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shift.html):
print(data)
#Output
1980-01-01 11.6985 126.0
1980-01-02 43.6431 134.0
1980-01-03 54.9089 130.0
1980-01-04 63.1225 126.0
1980-01-05 72.4399 120.0
print(data.shift(1,axis = 0))
1980-01-01 NaN NaN
1980-01-02 11.6985 126.0
1980-01-03 43.6431 134.0
1980-01-04 54.9089 130.0
1980-01-05 63.1225 126.0
So it just shifts both columns down and gets rid of the last row of data, which is not what I want.
Any advice?
A:
<code>
import pandas as pd
df = pd.DataFrame({'#1': [11.6985, 43.6431, 54.9089, 63.1225, 72.4399],
'#2': [126.0, 134.0, 130.0, 126.0, 120.0]},
index=['1980-01-01', '1980-01-02', '1980-01-03', '1980-01-04', '1980-01-05'])
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
| import numpy as np
df['#1'] = np.roll(df['#1'], shift=1)
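# roll '#2' the opposite way so that its first value wraps around to the end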
df['#2'] = np.roll(df['#2'], shift=-1) | {
"problem_id": 28,
"library_problem_id": 28,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 26
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df["#1"] = np.roll(df["#1"], shift=1)
df["#2"] = np.roll(df["#2"], shift=-1)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"#1": [11.6985, 43.6431, 54.9089, 63.1225, 72.4399],
"#2": [126.0, 134.0, 130.0, 126.0, 120.0],
},
index=[
"1980-01-01",
"1980-01-02",
"1980-01-03",
"1980-01-04",
"1980-01-05",
],
)
elif test_case_id == 2:
df = pd.DataFrame(
{"#1": [45, 51, 14, 11, 14], "#2": [126.0, 134.0, 130.0, 126.0, 120.0]},
index=[
"1980-01-01",
"1980-01-02",
"1980-01-03",
"1980-01-04",
"1980-01-05",
],
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
So I have a dataframe that looks like this:
#1 #2
1980-01-01 11.6985 126.0
1980-01-02 43.6431 134.0
1980-01-03 54.9089 130.0
1980-01-04 63.1225 126.0
1980-01-05 72.4399 120.0
What I want to do is to shift the first row of the first column (11.6985) down 1 row, and then the last row of the first column (72.4399) would be shifted to the first row, first column, like so:
#1 #2
1980-01-01 72.4399 126.0
1980-01-02 11.6985 134.0
1980-01-03 43.6431 130.0
1980-01-04 54.9089 126.0
1980-01-05 63.1225 120.0
I want to know how many times I can apply this shift so that I get a DataFrame that minimizes the R^2 value between the first and second columns. I need to output this dataframe:
#1 #2
1980-01-01 43.6431 126.0
1980-01-02 54.9089 134.0
1980-01-03 63.1225 130.0
1980-01-04 72.4399 126.0
1980-01-05 11.6985 120.0
Any advice?
A:
<code>
import pandas as pd
df = pd.DataFrame({'#1': [11.6985, 43.6431, 54.9089, 63.1225, 72.4399],
'#2': [126.0, 134.0, 130.0, 126.0, 120.0]},
index=['1980-01-01', '1980-01-02', '1980-01-03', '1980-01-04', '1980-01-05'])
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
| import numpy as np
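# brute force: evaluate every cyclic shift of '#1' and keep the one with the smallest sum of squared differences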
def g(df):
sh = 0
min_R2 = 0
for i in range(len(df)):
min_R2 += (df['#1'].iloc[i]-df['#2'].iloc[i])**2
for i in range(len(df)):
R2 = 0
for j in range(len(df)):
R2 += (df['#1'].iloc[j] - df['#2'].iloc[j]) ** 2
if min_R2 > R2:
sh = i
min_R2 = R2
df['#1'] = np.roll(df['#1'], shift=1)
df['#1'] = np.roll(df['#1'], shift=sh)
return df
df = g(df)
| {
"problem_id": 29,
"library_problem_id": 29,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 26
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
sh = 0
min_R2 = 0
for i in range(len(df)):
min_R2 += (df["#1"].iloc[i] - df["#2"].iloc[i]) ** 2
for i in range(len(df)):
R2 = 0
for j in range(len(df)):
R2 += (df["#1"].iloc[j] - df["#2"].iloc[j]) ** 2
if min_R2 > R2:
sh = i
min_R2 = R2
df["#1"] = np.roll(df["#1"], shift=1)
df["#1"] = np.roll(df["#1"], shift=sh)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"#1": [11.6985, 43.6431, 54.9089, 63.1225, 72.4399],
"#2": [126.0, 134.0, 130.0, 126.0, 120.0],
},
index=[
"1980-01-01",
"1980-01-02",
"1980-01-03",
"1980-01-04",
"1980-01-05",
],
)
elif test_case_id == 2:
df = pd.DataFrame(
{"#1": [45, 51, 14, 11, 14], "#2": [126.0, 134.0, 130.0, 126.0, 120.0]},
index=[
"1980-01-01",
"1980-01-02",
"1980-01-03",
"1980-01-04",
"1980-01-05",
],
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
Considering a simple df:
HeaderA | HeaderB | HeaderC
476 4365 457
Is there a way to rename all columns, for example to add an "X" to the end of all column names?
HeaderAX | HeaderBX | HeaderCX
476 4365 457
I am concatenating multiple dataframes and want to easily differentiate the columns depending on which dataset they came from.
Or is this the only way?
df.rename(columns={'HeaderA': 'HeaderAX'}, inplace=True)
I have over 50 column headers and ten files, so the above approach would take a long time.
Thank You
A:
<code>
import pandas as pd
df = pd.DataFrame(
{'HeaderA': [476],
'HeaderB': [4365],
'HeaderC': [457]})
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df):
return df.add_suffix('X')
df = g(df.copy())
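# add_suffix appends the given string to every column label in a single call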
| {
"problem_id": 30,
"library_problem_id": 30,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Origin",
"perturbation_origin_id": 30
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.add_suffix("X")
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame({"HeaderA": [476], "HeaderB": [4365], "HeaderC": [457]})
if test_case_id == 2:
df = pd.DataFrame({"HeaderD": [114], "HeaderF": [4365], "HeaderG": [514]})
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
Considering a simple df:
HeaderA | HeaderB | HeaderC
476 4365 457
Is there a way to rename all columns, for example to add an "X" at the beginning of all column names?
XHeaderA | XHeaderB | XHeaderC
476 4365 457
I am concatenating multiple dataframes and want to easily differentiate the columns depending on which dataset they came from.
I have over 50 column headers and ten files, so the above approach would take a long time.
Thank You
A:
<code>
import pandas as pd
df = pd.DataFrame(
{'HeaderA': [476],
'HeaderB': [4365],
'HeaderC': [457]})
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df):
return df.add_prefix('X')
df = g(df.copy())
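# add_prefix prepends the given string to every column label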
| {
"problem_id": 31,
"library_problem_id": 31,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Semantic",
"perturbation_origin_id": 30
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.add_prefix("X")
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame({"HeaderA": [476], "HeaderB": [4365], "HeaderC": [457]})
if test_case_id == 2:
df = pd.DataFrame({"HeaderD": [114], "HeaderF": [4365], "HeaderG": [514]})
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
Considering a simple df:
HeaderA | HeaderB | HeaderC | HeaderX
476 4365 457 345
Is there a way to rename all columns, for example to add an "X" to the end of the columns which don't already end with "X", and an "X" to the beginning of all columns?
XHeaderAX | XHeaderBX | XHeaderCX | XHeaderX
476 4365 457 345
I am concatenating multiple dataframes and want to easily differentiate the columns depending on which dataset they came from.
Or is this the only way?
df.rename(columns={'HeaderA': 'HeaderAX'}, inplace=True)
I have over 50 column headers and ten files, so the above approach would take a long time.
Thank You
A:
<code>
import pandas as pd
df = pd.DataFrame(
{'HeaderA': [476],
'HeaderB': [4365],
'HeaderC': [457],
"HeaderX": [345]})
</code>
df = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df):
for col in df.columns:
if not col.endswith('X'):
df.rename(columns={col: col+'X'}, inplace=True)
return df.add_prefix('X')
df = g(df.copy())
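# columns already ending in 'X' receive only the leading 'X'; all others get both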
| {
"problem_id": 32,
"library_problem_id": 32,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 30
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
for col in df.columns:
if not col.endswith("X"):
df.rename(columns={col: col + "X"}, inplace=True)
return df.add_prefix("X")
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"HeaderA": [476],
"HeaderB": [4365],
"HeaderC": [457],
"HeaderX": [345],
}
)
if test_case_id == 2:
df = pd.DataFrame(
{
"HeaderD": [114],
"HeaderF": [4365],
"HeaderG": [514],
"HeaderX": [345],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
result = df
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
I have a script that generates a pandas data frame with a varying number of value columns. As an example, this df might be
import pandas as pd
df = pd.DataFrame({
'group': ['A', 'A', 'A', 'B', 'B'],
'group_color' : ['green', 'green', 'green', 'blue', 'blue'],
'val1': [5, 2, 3, 4, 5],
'val2' : [4, 2, 8, 5, 7]
})
group group_color val1 val2
0 A green 5 4
1 A green 2 2
2 A green 3 8
3 B blue 4 5
4 B blue 5 7
My goal is to get the grouped mean for each of the value columns. In this specific case (with 2 value columns), I can use
df.groupby('group').agg({"group_color": "first", "val1": "mean", "val2": "mean"})
group_color val1 val2
group
A green 3.333333 4.666667
B blue 4.500000 6.000000
but that does not work when the data frame in question has more value columns (val3, val4 etc.).
Is there a way to dynamically take the mean of "all the other columns" or "all columns containing val in their names"?
A:
<code>
import pandas as pd
df = pd.DataFrame({ 'group': ['A', 'A', 'A', 'B', 'B'], 'group_color' : ['green', 'green', 'green', 'blue', 'blue'], 'val1': [5, 2, 3, 4, 5], 'val2' : [4, 2, 8, 5, 7],'val3':[1,1,4,5,1] })
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df):
return df.groupby('group').agg(lambda x : x.head(1) if x.dtype=='object' else x.mean())
result = g(df.copy())
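# inside agg the lambda receives one column per group: object columns keep their first value, numeric columns are averaged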
| {
"problem_id": 33,
"library_problem_id": 33,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Origin",
"perturbation_origin_id": 33
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.groupby("group").agg(
lambda x: x.head(1) if x.dtype == "object" else x.mean()
)
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"group": ["A", "A", "A", "B", "B"],
"group_color": ["green", "green", "green", "blue", "blue"],
"val1": [5, 2, 3, 4, 5],
"val2": [4, 2, 8, 5, 7],
"val3": [1, 1, 4, 5, 1],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
I have a script that generates a pandas data frame with a varying number of value columns. As an example, this df might be
import pandas as pd
df = pd.DataFrame({
'group': ['A', 'A', 'A', 'B', 'B'],
'group_color' : ['green', 'green', 'green', 'blue', 'blue'],
'val1': [5, 2, 3, 4, 5],
'val2' : [4, 2, 8, 5, 7]
})
group group_color val1 val2
0 A green 5 4
1 A green 2 2
2 A green 3 8
3 B blue 4 5
4 B blue 5 7
My goal is to get the grouped sum for each of the value columns. In this specific case (with 2 value columns), I can use
df.groupby('group').agg({"group_color": "first", "val1": "sum", "val2": "sum"})
group_color val1 val2
group
A green 10 14
B blue 9 12
but that does not work when the data frame in question has more value columns (val3, val4 etc.).
Is there a way to dynamically take the sum of "all the other columns" or "all columns containing val in their names"?
A:
<code>
import pandas as pd
df = pd.DataFrame({ 'group': ['A', 'A', 'A', 'B', 'B'], 'group_color' : ['green', 'green', 'green', 'blue', 'blue'], 'val1': [5, 2, 3, 4, 5], 'val2' : [4, 2, 8, 5, 7],'val3':[1,1,4,5,1] })
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df):
return df.groupby('group').agg(lambda x : x.head(1) if x.dtype=='object' else x.sum())
result = g(df.copy())
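# object columns keep their first value, numeric columns are summed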
| {
"problem_id": 34,
"library_problem_id": 34,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Semantic",
"perturbation_origin_id": 33
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.groupby("group").agg(
lambda x: x.head(1) if x.dtype == "object" else x.sum()
)
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"group": ["A", "A", "A", "B", "B"],
"group_color": ["green", "green", "green", "blue", "blue"],
"val1": [5, 2, 3, 4, 5],
"val2": [4, 2, 8, 5, 7],
"val3": [1, 1, 4, 5, 1],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
I have a script that generates a pandas data frame with a varying number of value columns. As an example, this df might be
import pandas as pd
df = pd.DataFrame({
'group': ['A', 'A', 'A', 'B', 'B'],
'group_color' : ['green', 'green', 'green', 'blue', 'blue'],
'val1': [5, 2, 3, 4, 5],
    'val2' : [4, 2, 8, 5, 7],
    'val32' : [4, 2, 8, 5, 7]
})
group group_color val1 val2 val32
0 A green 5 4 4
1 A green 2 2 2
2 A green 3 8 8
3 B blue 4 5 5
4 B blue 5 7 7
My goal is to get the grouped mean for each of the value columns which end with '2' and get the grouped sum for others.
df.groupby('group').agg({"group_color": "first", "val1": "sum", "val2": "mean", "val32": "mean"})
group_color val1 val2 val32
group
A green 10.0 4.666667 4.666667
B blue 9.0 6.000000 6.000000
but that does not work when the data frame in question has more value columns (val3, val4 etc.).
Is there a dynamic way to do this?
A:
<code>
import pandas as pd
df = pd.DataFrame({ 'group': ['A', 'A', 'A', 'B', 'B'], 'group_color' : ['green', 'green', 'green', 'blue', 'blue'], 'val1': [5, 2, 3, 4, 5], 'val2' : [4, 2, 8, 5, 7],'val42':[1,1,4,5,1] })
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df):
return df.groupby('group').agg(lambda x : x.head(1) if x.dtype=='object' else x.mean() if x.name.endswith('2') else x.sum())
result = g(df.copy())
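# x.name is the column label, so numeric columns ending in '2' are averaged and the rest are summed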
| {
"problem_id": 35,
"library_problem_id": 35,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 33
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.groupby("group").agg(
lambda x: (
x.head(1)
if x.dtype == "object"
else x.mean() if x.name.endswith("2") else x.sum()
)
)
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"group": ["A", "A", "A", "B", "B"],
"group_color": ["green", "green", "green", "blue", "blue"],
"val1": [5, 2, 3, 4, 5],
"val2": [4, 2, 8, 5, 7],
"val42": [1, 1, 4, 5, 1],
}
)
if test_case_id == 2:
df = pd.DataFrame(
{
"group": ["A", "A", "A", "B", "B"],
"group_color": ["green", "green", "green", "blue", "blue"],
"val1": [5, 2, 3, 4, 5],
"val2": [4, 2, 8, 5, 7],
"val332": [1, 1, 4, 5, 1],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
I have a pandas df with, say, 100 rows and 10 columns (the actual data is huge). I also have a row_index list which contains which rows are to be considered when taking the mean. I want to calculate the mean on, say, columns 2, 5, 6, 7 and 8. Can we do it with some function on the dataframe object?
What I know is to do a for loop: get the value of the row for each element in row_index and keep computing the mean. Do we have some direct function where we can pass row_list, column_list and axis, e.g. df.meanAdvance(row_list, column_list, axis=0)?
I have seen DataFrame.mean() but it didn't help I guess.
a b c d q
0 1 2 3 0 5
1 1 2 3 4 5
2 1 1 1 6 1
3 1 0 0 0 0
I want mean of 0, 2, 3 rows for each a, b, d columns
a 1.0
b 1.0
d 2.0
A:
<code>
import pandas as pd
df = pd.DataFrame({'a':[1,1,1,1],'b':[2,2,1,0],'c':[3,3,1,0],'d':[0,4,6,0],'q':[5,5,1,0]})
row_list = [0,2,3]
column_list = ['a','b','d']
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df, row_list, column_list):
return df[column_list].iloc[row_list].mean(axis=0)
result = g(df.copy(),row_list,column_list)
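# select the columns by label, pick the rows by position with iloc, then mean(axis=0) averages down each column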
| {
"problem_id": 36,
"library_problem_id": 36,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Origin",
"perturbation_origin_id": 36
} | import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, row_list, column_list = data
return df[column_list].iloc[row_list].mean(axis=0)
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"a": [1, 1, 1, 1],
"b": [2, 2, 1, 0],
"c": [3, 3, 1, 0],
"d": [0, 4, 6, 0],
"q": [5, 5, 1, 0],
}
)
row_list = [0, 2, 3]
column_list = ["a", "b", "d"]
if test_case_id == 2:
df = pd.DataFrame(
{
"a": [1, 1, 1, 1],
"b": [2, 2, 1, 0],
"c": [3, 3, 1, 0],
"d": [0, 4, 6, 0],
"q": [5, 5, 1, 0],
}
)
row_list = [0, 1, 3]
column_list = ["a", "c", "q"]
return df, row_list, column_list
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_series_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df, row_list, column_list = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
def test_string(solution: str):
tokens = []
for token in tokenize.tokenize(io.BytesIO(solution.encode("utf-8")).readline):
tokens.append(token.string)
assert "while" not in tokens and "for" not in tokens
|
Problem:
I have a pandas df with, say, 100 rows and 10 columns (the actual data is huge). I also have a row_index list which contains which rows are to be considered when taking the sum. I want to calculate the sum on, say, columns 2, 5, 6, 7 and 8. Can we do it with some function on the dataframe object?
What I know is to do a for loop: get the value of the row for each element in row_index and keep computing the sum. Do we have some direct function where we can pass row_list, column_list and axis, e.g. df.sumAdvance(row_list, column_list, axis=0)?
I have seen DataFrame.sum() but it didn't help I guess.
a b c d q
0 1 2 3 0 5
1 1 2 3 4 5
2 1 1 1 6 1
3 1 0 0 0 0
I want sum of 0, 2, 3 rows for each a, b, d columns
a 3.0
b 3.0
d 6.0
A:
<code>
import pandas as pd
df = pd.DataFrame({'a':[1,1,1,1],'b':[2,2,1,0],'c':[3,3,1,0],'d':[0,4,6,0],'q':[5,5,1,0]})
row_list = [0,2,3]
column_list = ['a','b','d']
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df, row_list, column_list):
return df[column_list].iloc[row_list].sum(axis=0)
result = g(df.copy(), row_list, column_list)
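# same pattern: label-based column selection, positional row selection, then sum(axis=0) down each column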
| {
"problem_id": 37,
"library_problem_id": 37,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Semantic",
"perturbation_origin_id": 36
} | import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, row_list, column_list = data
return df[column_list].iloc[row_list].sum(axis=0)
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"a": [1, 1, 1, 1],
"b": [2, 2, 1, 0],
"c": [3, 3, 1, 0],
"d": [0, 4, 6, 0],
"q": [5, 5, 1, 0],
}
)
row_list = [0, 2, 3]
column_list = ["a", "b", "d"]
if test_case_id == 2:
df = pd.DataFrame(
{
"a": [1, 1, 1, 1],
"b": [2, 2, 1, 0],
"c": [3, 3, 1, 0],
"d": [0, 4, 6, 0],
"q": [5, 5, 1, 0],
}
)
row_list = [0, 1, 3]
column_list = ["a", "c", "q"]
return df, row_list, column_list
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_series_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df, row_list, column_list = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
def test_string(solution: str):
tokens = []
for token in tokenize.tokenize(io.BytesIO(solution.encode("utf-8")).readline):
tokens.append(token.string)
assert "while" not in tokens and "for" not in tokens
|
Problem:
I have a pandas df with, say, 100 rows and 10 columns (the actual data is huge). I also have a row_index list which contains which rows are to be considered when taking the sum. I want to calculate the sum on, say, columns 2, 5, 6, 7 and 8. Can we do it with some function on the dataframe object?
What I know is to do a for loop: get the value of the row for each element in row_index and keep computing the sum. Do we have some direct function where we can pass row_list, column_list and axis, e.g. df.sumAdvance(row_list, column_list, axis=0)?
I have seen DataFrame.sum() but it didn't help I guess.
a b c d q
0 1 2 3 0 5
1 1 2 3 4 5
2 1 1 1 6 1
3 1 0 0 0 0
I want sum of 0, 2, 3 rows for each a, b, d columns
a 3.0
b 3.0
d 6.0
Then I want to delete the largest one. Desired:
a 3.0
b 3.0
A:
<code>
import pandas as pd
df = pd.DataFrame({'a':[1,1,1,1],'b':[2,2,1,0],'c':[3,3,1,0],'d':[0,4,6,0],'q':[5,5,1,0]})
row_list = [0,2,3]
column_list = ['a','b','d']
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df, row_list, column_list):
result = df[column_list].iloc[row_list].sum(axis=0)
return result.drop(result.index[result.argmax()])
result = g(df.copy(), row_list, column_list)
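# argmax gives the position of the largest sum, whose index label is then dropped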
| {
"problem_id": 38,
"library_problem_id": 38,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 36
} | import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, row_list, column_list = data
result = df[column_list].iloc[row_list].sum(axis=0)
return result.drop(result.index[result.argmax()])
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"a": [1, 1, 1, 1],
"b": [2, 2, 1, 0],
"c": [3, 3, 1, 0],
"d": [0, 4, 6, 0],
"q": [5, 5, 1, 0],
}
)
row_list = [0, 2, 3]
column_list = ["a", "b", "d"]
if test_case_id == 2:
df = pd.DataFrame(
{
"a": [1, 1, 1, 1],
"b": [2, 2, 1, 0],
"c": [3, 3, 1, 0],
"d": [0, 4, 6, 0],
"q": [5, 5, 1, 0],
}
)
row_list = [0, 1, 3]
column_list = ["a", "c", "q"]
return df, row_list, column_list
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_series_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df, row_list, column_list = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
def test_string(solution: str):
tokens = []
for token in tokenize.tokenize(io.BytesIO(solution.encode("utf-8")).readline):
tokens.append(token.string)
assert "while" not in tokens and "for" not in tokens
|
Problem:
I have a dataframe with numerous columns (≈30) from an external source (csv file), but several of them have no value or always the same one. Thus, I would like to quickly see the value_counts for each column. How can I do that?
For example
id, temp, name
1 34, null, mark
2 22, null, mark
3 34, null, mark
Please return a Series like this:
id 22 1.0
34 2.0
temp null 3.0
name mark 3.0
dtype: float64
So I would know that temp is irrelevant and name is not interesting (always the same)
A:
<code>
import pandas as pd
df = pd.DataFrame(data=[[34, 'null', 'mark'], [22, 'null', 'mark'], [34, 'null', 'mark']], columns=['id', 'temp', 'name'], index=[1, 2, 3])
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df):
return df.apply(lambda x: x.value_counts()).T.stack()
result = g(df.copy())
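# apply builds a value-by-column count table; .T.stack() flattens it into a Series indexed by (column, value)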
| {
"problem_id": 39,
"library_problem_id": 39,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Origin",
"perturbation_origin_id": 39
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.apply(lambda x: x.value_counts()).T.stack()
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
data=[[34, "null", "mark"], [22, "null", "mark"], [34, "null", "mark"]],
columns=["id", "temp", "name"],
index=[1, 2, 3],
)
if test_case_id == 2:
df = pd.DataFrame(
data=[
[34, "null", "mark"],
[22, "null", "mark"],
[34, "null", "mark"],
[21, "null", "mark"],
],
columns=["id", "temp", "name"],
index=[1, 2, 3, 4],
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_series_equal(result, ans)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
I have a dataframe with numerous columns (≈30) from an external source (csv file), but several of them have no value or always the same one. Thus, I would like to quickly see the counts of 'null' for each column. How can I do that?
For example
id, temp, name
1 34, null, null
2 22, null, mark
3 34, null, mark
Please return a Series like this:
id NaN
temp 3.0
name 1.0
Name: null, dtype: float64
So I would know that temp is irrelevant and name is not interesting (always the same)
A:
<code>
import pandas as pd
df = pd.DataFrame(data=[[34, 'null', 'null'], [22, 'null', 'mark'], [34, 'null', 'mark']], columns=['id', 'temp', 'name'], index=[1, 2, 3])
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df):
return df.apply(lambda x: x.value_counts()).T.null
result = g(df.copy())
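# after transposing, the per-column 'null' counts form a column, accessed here via attribute lookup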
| {
"problem_id": 40,
"library_problem_id": 40,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Semantic",
"perturbation_origin_id": 39
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.apply(lambda x: x.value_counts()).T.null
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
data=[[34, "null", "null"], [22, "null", "mark"], [34, "null", "mark"]],
columns=["id", "temp", "name"],
index=[1, 2, 3],
)
if test_case_id == 2:
df = pd.DataFrame(
data=[[34, "null", "null"], [22, "null", "mark"], [34, "null", "null"]],
columns=["id", "temp", "name"],
index=[1, 2, 3],
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_series_equal(result, ans)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
I have a dataframe with numerous columns (≈30) from an external source (csv file), but several of them have no value or always the same one. Thus, I would like to quickly see the value_counts for each column. How can I do that?
For example
id, temp, name
1 34, null, mark
2 22, null, mark
3 34, null, mark
Please return a String like this:
---- id ---
34 2
22 1
Name: id, dtype: int64
---- temp ---
null 3
Name: temp, dtype: int64
---- name ---
mark 3
Name: name, dtype: int64
So I would know that temp is irrelevant and name is not interesting (always the same)
A:
<code>
import pandas as pd
df = pd.DataFrame(data=[[34, 'null', 'mark'], [22, 'null', 'mark'], [34, 'null', 'mark']], columns=['id', 'temp', 'name'], index=[1, 2, 3])
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df):
s = ''
for c in df.columns:
s += "---- %s ---" % c
s += "\n"
s += str(df[c].value_counts())
s += "\n"
return s
result = g(df.copy())
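# each column's value_counts() output is stringified and concatenated under a '---- <name> ---' header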
| {
"problem_id": 41,
"library_problem_id": 41,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 39
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
s = ""
for c in df.columns:
s += "---- %s ---" % c
s += "\n"
s += str(df[c].value_counts())
s += "\n"
return s
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
data=[[34, "null", "mark"], [22, "null", "mark"], [34, "null", "mark"]],
columns=["id", "temp", "name"],
index=[1, 2, 3],
)
elif test_case_id == 2:
df = pd.DataFrame(
data=[[11, "null", "mark"], [14, "null", "mark"], [51, "null", "mark"]],
columns=["id", "temp", "name"],
index=[1, 2, 3],
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
assert result == ans
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
I am trying to clean up an Excel file for some further research. The problem that I have is that I want to merge the first and second rows. The code which I have now:
xl = pd.ExcelFile("nanonose.xls")
df = xl.parse("Sheet1")
df = df.drop('Unnamed: 2', axis=1)
## Tried this line but no luck
##print(df.head().combine_first(df.iloc[[0]]))
The output of this is:
Nanonose Unnamed: 1 A B C D E \
0 Sample type Concentration NaN NaN NaN NaN NaN
1 Water 9200 95.5 21.0 6.0 11.942308 64.134615
2 Water 9200 94.5 17.0 5.0 5.484615 63.205769
3 Water 9200 92.0 16.0 3.0 11.057692 62.586538
4 Water 4600 53.0 7.5 2.5 3.538462 35.163462
F G H
0 NaN NaN NaN
1 21.498560 5.567840 1.174135
2 19.658560 4.968000 1.883444
3 19.813120 5.192480 0.564835
4 6.876207 1.641724 0.144654
So, my goal is to merge the first and second row to get: Sample type | Concentration | A | B | C | D | E | F | G | H
Could someone help me merge these two rows?
A:
<code>
import pandas as pd
import numpy as np
df = pd.DataFrame({'Nanonose': ['Sample type','Water','Water','Water','Water'],
'Unnamed: 1': ['Concentration',9200,9200,9200,4600],
'A': [np.nan,95.5,94.5,92.0,53.0,],
'B': [np.nan,21.0,17.0,16.0,7.5],
'C': [np.nan,6.0,5.0,3.0,2.5],
'D': [np.nan,11.942308,5.484615,11.057692,3.538462],
'E': [np.nan,64.134615,63.205769,62.586538,35.163462],
'F': [np.nan,21.498560,19.658560,19.813120,6.876207],
'G': [np.nan,5.567840,4.968000,5.192480,1.641724],
'H': [np.nan,1.174135,1.883444,0.564835,0.144654]})
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df):
df.columns = np.concatenate([df.iloc[0, :2], df.columns[2:]])
df = df.iloc[1:].reset_index(drop=True)
return df
result = g(df.copy())
| {
"problem_id": 42,
"library_problem_id": 42,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Origin",
"perturbation_origin_id": 42
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df.columns = np.concatenate([df.iloc[0, :2], df.columns[2:]])
df = df.iloc[1:].reset_index(drop=True)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"Nanonose": ["Sample type", "Water", "Water", "Water", "Water"],
"Unnamed: 1": ["Concentration", 9200, 9200, 9200, 4600],
"A": [
np.nan,
95.5,
94.5,
92.0,
53.0,
],
"B": [np.nan, 21.0, 17.0, 16.0, 7.5],
"C": [np.nan, 6.0, 5.0, 3.0, 2.5],
"D": [np.nan, 11.942308, 5.484615, 11.057692, 3.538462],
"E": [np.nan, 64.134615, 63.205769, 62.586538, 35.163462],
"F": [np.nan, 21.498560, 19.658560, 19.813120, 6.876207],
"G": [np.nan, 5.567840, 4.968000, 5.192480, 1.641724],
"H": [np.nan, 1.174135, 1.883444, 0.564835, 0.144654],
}
)
if test_case_id == 2:
df = pd.DataFrame(
{
"Nanonose": ["type of Sample", "Water", "Water", "Water", "Water"],
"Unnamed: 1": ["concentration", 9200, 9200, 9200, 4600],
"A": [
np.nan,
95.5,
94.5,
92.0,
53.0,
],
"B": [np.nan, 21.0, 17.0, 16.0, 7.5],
"C": [np.nan, 6.0, 5.0, 3.0, 2.5],
"D": [np.nan, 11.942308, 5.484615, 11.057692, 3.538462],
"E": [np.nan, 64.134615, 63.205769, 62.586538, 35.163462],
"F": [np.nan, 21.498560, 19.658560, 19.813120, 6.876207],
"G": [np.nan, 5.567840, 4.968000, 5.192480, 1.641724],
"H": [np.nan, 1.174135, 1.883444, 0.564835, 0.144654],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
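For use outside the harness, the merge boils down to two steps: overwrite the leading column labels with the values from the first data row, then drop that row. A minimal sketch under the example's data, truncated to two rows for brevity:
import numpy as np
import pandas as pd
df = pd.DataFrame({'Nanonose': ['Sample type', 'Water'], 'Unnamed: 1': ['Concentration', 9200], 'A': [np.nan, 95.5]})
df.columns = np.concatenate([df.iloc[0, :2], df.columns[2:]])  # first two labels come from row 0
df = df.iloc[1:].reset_index(drop=True)  # drop the now-redundant header row
print(df.columns.tolist())  # ['Sample type', 'Concentration', 'A']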
Problem:
I am trying to clean up an Excel file for some further research. The problem I have is that I want to merge the first and second rows. The code which I have now:
xl = pd.ExcelFile("nanonose.xls")
df = xl.parse("Sheet1")
df = df.drop('Unnamed: 2', axis=1)
## Tried this line but no luck
##print(df.head().combine_first(df.iloc[[0]]))
The output of this is:
Nanonose Unnamed: 1 A B C D E \
0 Sample type Concentration NaN NaN NaN NaN NaN
1 Water 9200 95.5 21.0 6.0 11.942308 64.134615
2 Water 9200 94.5 17.0 5.0 5.484615 63.205769
3 Water 9200 92.0 16.0 3.0 11.057692 62.586538
4 Water 4600 53.0 7.5 2.5 3.538462 35.163462
F G H
0 NaN NaN NaN
1 21.498560 5.567840 1.174135
2 19.658560 4.968000 1.883444
3 19.813120 5.192480 0.564835
4 6.876207 1.641724 0.144654
So, my goal is to merge the first and second row to get: Nanonose | Concentration | A | B | C | D | E | F | G | H
Could someone help me merge these two rows?
A:
<code>
import pandas as pd
import numpy as np
df = pd.DataFrame({'Nanonose': ['Sample type','Water','Water','Water','Water'],
'Unnamed: 1': ['Concentration',9200,9200,9200,4600],
'A': [np.nan,95.5,94.5,92.0,53.0,],
'B': [np.nan,21.0,17.0,16.0,7.5],
'C': [np.nan,6.0,5.0,3.0,2.5],
'D': [np.nan,11.942308,5.484615,11.057692,3.538462],
'E': [np.nan,64.134615,63.205769,62.586538,35.163462],
'F': [np.nan,21.498560,19.658560,19.813120,6.876207],
'G': [np.nan,5.567840,4.968000,5.192480,1.641724],
'H': [np.nan,1.174135,1.883444,0.564835,0.144654]})
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df):
df.columns = np.concatenate([df.columns[0:1], df.iloc[0, 1:2], df.columns[2:]])
df = df.iloc[1:].reset_index(drop=True)
return df
result = g(df.copy())
| {
"problem_id": 43,
"library_problem_id": 43,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Semantic",
"perturbation_origin_id": 42
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
df.columns = np.concatenate([df.columns[0:1], df.iloc[0, 1:2], df.columns[2:]])
df = df.iloc[1:].reset_index(drop=True)
return df
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{
"Nanonose": ["Sample type", "Water", "Water", "Water", "Water"],
"Unnamed: 1": ["Concentration", 9200, 9200, 9200, 4600],
"A": [
np.nan,
95.5,
94.5,
92.0,
53.0,
],
"B": [np.nan, 21.0, 17.0, 16.0, 7.5],
"C": [np.nan, 6.0, 5.0, 3.0, 2.5],
"D": [np.nan, 11.942308, 5.484615, 11.057692, 3.538462],
"E": [np.nan, 64.134615, 63.205769, 62.586538, 35.163462],
"F": [np.nan, 21.498560, 19.658560, 19.813120, 6.876207],
"G": [np.nan, 5.567840, 4.968000, 5.192480, 1.641724],
"H": [np.nan, 1.174135, 1.883444, 0.564835, 0.144654],
}
)
if test_case_id == 2:
df = pd.DataFrame(
{
"Nanonose": ["type of Sample", "Water", "Water", "Water", "Water"],
"Unnamed: 1": ["concentration", 9200, 9200, 9200, 4600],
"A": [
np.nan,
95.5,
94.5,
92.0,
53.0,
],
"B": [np.nan, 21.0, 17.0, 16.0, 7.5],
"C": [np.nan, 6.0, 5.0, 3.0, 2.5],
"D": [np.nan, 11.942308, 5.484615, 11.057692, 3.538462],
"E": [np.nan, 64.134615, 63.205769, 62.586538, 35.163462],
"F": [np.nan, 21.498560, 19.658560, 19.813120, 6.876207],
"G": [np.nan, 5.567840, 4.968000, 5.192480, 1.641724],
"H": [np.nan, 1.174135, 1.883444, 0.564835, 0.144654],
}
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
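Since this variant keeps the original first header, the same result can also be reached without numpy by renaming only the second column before dropping row 0. A sketch, assuming df is the frame from the setup above:
df = df.rename(columns={'Unnamed: 1': df.iloc[0, 1]}).iloc[1:].reset_index(drop=True)
df.iloc[0, 1] reads 'Concentration' out of the first data row, so the header becomes Nanonose | Concentration | A | ... as requested.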
Problem:
I have a DataFrame like :
0 1 2
0 0.0 1.0 2.0
1 NaN 1.0 2.0
2 NaN NaN 2.0
What I want to get is
Out[116]:
0 1 2
0 0.0 1.0 2.0
1 1.0 2.0 NaN
2 2.0 NaN NaN
This is my approach as of now.
df.apply(lambda x : (x[x.notnull()].values.tolist()+x[x.isnull()].values.tolist()),1)
Out[117]:
0 1 2
0 0.0 1.0 2.0
1 1.0 2.0 NaN
2 2.0 NaN NaN
Is there any efficient way to achieve this? apply is way too slow here.
Thank you for your assistance! :)
My real data size:
df.shape
Out[117]: (54812040, 1522)
A:
<code>
import pandas as pd
import numpy as np
df = pd.DataFrame([[3,1,2],[np.nan,1,2],[np.nan,np.nan,2]],columns=['0','1','2'])
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def justify(a, invalid_val=0, axis=1, side='left'):
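    # Shift the valid entries of `a` to one side along `axis`, preserving
    # their relative order; the vacated positions are filled with
    # `invalid_val` (np.nan in this problem).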
if invalid_val is np.nan:
mask = ~np.isnan(a)
else:
mask = a!=invalid_val
justified_mask = np.sort(mask,axis=axis)
if (side=='up') | (side=='left'):
justified_mask = np.flip(justified_mask,axis=axis)
out = np.full(a.shape, invalid_val)
if axis==1:
out[justified_mask] = a[mask]
else:
out.T[justified_mask.T] = a.T[mask.T]
return out
def g(df):
return pd.DataFrame(justify(df.values, invalid_val=np.nan, axis=1, side='left'))
result = g(df.copy())
| {
"problem_id": 44,
"library_problem_id": 44,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Origin",
"perturbation_origin_id": 44
} | import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
def justify(a, invalid_val=0, axis=1, side="left"):
if invalid_val is np.nan:
mask = ~np.isnan(a)
else:
mask = a != invalid_val
justified_mask = np.sort(mask, axis=axis)
if (side == "up") | (side == "left"):
justified_mask = np.flip(justified_mask, axis=axis)
out = np.full(a.shape, invalid_val)
if axis == 1:
out[justified_mask] = a[mask]
else:
out.T[justified_mask.T] = a.T[mask.T]
return out
return pd.DataFrame(justify(df.values, invalid_val=np.nan, axis=1, side="left"))
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
[[3, 1, 2], [np.nan, 1, 2], [np.nan, np.nan, 2]],
columns=["0", "1", "2"],
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
def test_string(solution: str):
tokens = []
for token in tokenize.tokenize(io.BytesIO(solution.encode("utf-8")).readline):
tokens.append(token.string)
assert "for" not in tokens and "while" not in tokens and "apply" not in tokens
|
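For this particular left-justify case, a stable argsort of the NaN mask gives the same answer in two numpy calls: a stable sort keeps the valid values in their original order while pushing the NaNs (True in the mask) to the right. A sketch, with df as in the setup above, offered as an equivalent formulation rather than the reference solution:
arr = df.to_numpy(dtype=float)
order = np.argsort(np.isnan(arr), axis=1, kind='stable')  # False (valid) sorts before True (NaN)
result = pd.DataFrame(np.take_along_axis(arr, order, axis=1))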
Problem:
I have a DataFrame like :
0 1 2
0 0.0 1.0 2.0
1 1.0 2.0 NaN
2 2.0 NaN NaN
What I want to get is
Out[116]:
0 1 2
0 0.0 1.0 2.0
1 NaN 1.0 2.0
2 NaN NaN 2.0
This is my approach as of now.
df.apply(lambda x : (x[x.isnull()].values.tolist()+x[x.notnull()].values.tolist()),1)
Out[117]:
0 1 2
0 0.0 1.0 2.0
1 NaN 1.0 2.0
2 NaN NaN 2.0
Is there any efficient way to achieve this? apply is way too slow here.
Thank you for your assistance! :)
My real data size:
df.shape
Out[117]: (54812040, 1522)
A:
<code>
import pandas as pd
import numpy as np
df = pd.DataFrame([[3,1,2],[1,2,np.nan],[2,np.nan,np.nan]],columns=['0','1','2'])
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def justify(a, invalid_val=0, axis=1, side='left'):
if invalid_val is np.nan:
mask = ~np.isnan(a)
else:
mask = a!=invalid_val
justified_mask = np.sort(mask,axis=axis)
if (side=='up') | (side=='left'):
justified_mask = np.flip(justified_mask,axis=axis)
out = np.full(a.shape, invalid_val)
if axis==1:
out[justified_mask] = a[mask]
else:
out.T[justified_mask.T] = a.T[mask.T]
return out
def g(df):
return pd.DataFrame(justify(df.values, invalid_val=np.nan, axis=1, side='right'))
result = g(df.copy())
| {
"problem_id": 45,
"library_problem_id": 45,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Semantic",
"perturbation_origin_id": 44
} | import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
def justify(a, invalid_val=0, axis=1, side="left"):
if invalid_val is np.nan:
mask = ~np.isnan(a)
else:
mask = a != invalid_val
justified_mask = np.sort(mask, axis=axis)
if (side == "up") | (side == "left"):
justified_mask = np.flip(justified_mask, axis=axis)
out = np.full(a.shape, invalid_val)
if axis == 1:
out[justified_mask] = a[mask]
else:
out.T[justified_mask.T] = a.T[mask.T]
return out
return pd.DataFrame(
justify(df.values, invalid_val=np.nan, axis=1, side="right")
)
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
[[3, 1, 2], [1, 2, np.nan], [2, np.nan, np.nan]],
columns=["0", "1", "2"],
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
def test_string(solution: str):
tokens = []
for token in tokenize.tokenize(io.BytesIO(solution.encode("utf-8")).readline):
tokens.append(token.string)
assert "for" not in tokens and "while" not in tokens and "apply" not in tokens
|
Problem:
I have a DataFrame like :
0 1 2
0 0.0 1.0 2.0
1 NaN 1.0 2.0
2 NaN NaN 2.0
What I want to get is
Out[116]:
0 1 2
0 NaN NaN 2.0
1 NaN 1.0 2.0
2 0.0 1.0 2.0
This is my approach as of now.
df.apply(lambda x : (x[x.isnull()].values.tolist()+x[x.notnull()].values.tolist()),0)
Out[117]:
0 1 2
0 NaN NaN 2.0
1 NaN 1.0 2.0
2 0.0 1.0 2.0
Is there any efficient way to achieve this? apply is way too slow here.
Thank you for your assistance! :)
My real data size:
df.shape
Out[117]: (54812040, 1522)
A:
<code>
import pandas as pd
import numpy as np
df = pd.DataFrame([[3,1,2],[np.nan,1,2],[np.nan,np.nan,2]],columns=['0','1','2'])
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def justify(a, invalid_val=0, axis=1, side='left'):
if invalid_val is np.nan:
mask = ~np.isnan(a)
else:
mask = a!=invalid_val
justified_mask = np.sort(mask,axis=axis)
if (side=='up') | (side=='left'):
justified_mask = np.flip(justified_mask,axis=axis)
out = np.full(a.shape, invalid_val)
if axis==1:
out[justified_mask] = a[mask]
else:
out.T[justified_mask.T] = a.T[mask.T]
return out
def g(df):
return pd.DataFrame(justify(df.values, invalid_val=np.nan, axis=0, side='down'))
result = g(df.copy())
| {
"problem_id": 46,
"library_problem_id": 46,
"library": "Pandas",
"test_case_cnt": 1,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 44
} | import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
def justify(a, invalid_val=0, axis=1, side="left"):
if invalid_val is np.nan:
mask = ~np.isnan(a)
else:
mask = a != invalid_val
justified_mask = np.sort(mask, axis=axis)
if (side == "up") | (side == "left"):
justified_mask = np.flip(justified_mask, axis=axis)
out = np.full(a.shape, invalid_val)
if axis == 1:
out[justified_mask] = a[mask]
else:
out.T[justified_mask.T] = a.T[mask.T]
return out
return pd.DataFrame(justify(df.values, invalid_val=np.nan, axis=0, side="down"))
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
[[3, 1, 2], [np.nan, 1, 2], [np.nan, np.nan, 2]],
columns=["0", "1", "2"],
)
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(1):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
def test_string(solution: str):
tokens = []
for token in tokenize.tokenize(io.BytesIO(solution.encode("utf-8")).readline):
tokens.append(token.string)
assert "for" not in tokens and "while" not in tokens and "apply" not in tokens
|
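The stable-sort formulation sketched after the left-justify problem works column-wise too: argsorting ~np.isnan along axis=0 floats the NaN slots to the top of each column and sinks the valid values to the bottom, keeping their order. Again a sketch, with df as in the setup above:
arr = df.to_numpy(dtype=float)
order = np.argsort(~np.isnan(arr), axis=0, kind='stable')  # False (NaN) rises, True (valid) sinks
result = pd.DataFrame(np.take_along_axis(arr, order, axis=0))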
Problem:
I have a pandas dataframe structured like this:
value
lab
A 50
B 35
C 8
D 5
E 1
F 1
This is just an example; the actual dataframe is bigger, but follows the same structure.
The sample dataframe has been created with these two lines:
df = pd.DataFrame({'lab':['A', 'B', 'C', 'D', 'E', 'F'], 'value':[50, 35, 8, 5, 1, 1]})
df = df.set_index('lab')
I would like to aggregate the rows whose value is smaller than a given threshold: all these rows should be substituted by a single row whose value is the sum of the substituted rows.
For example, if I choose a threshold = 6, the expected result should be the following:
value
lab
A 50
B 35
C 8
X 7 #sum of D, E, F
How can I do this?
I thought of using groupby(), but all the examples I've seen involved the use of a separate column for grouping, so I do not know how to use it in this case.
I can select the rows smaller than my threshold with loc, by doing df.loc[df['value'] < threshold], but I do not know how to sum only these rows and leave the rest of the dataframe unaltered.
A:
<code>
import pandas as pd
df = pd.DataFrame({'lab':['A', 'B', 'C', 'D', 'E', 'F'], 'value':[50, 35, 8, 5, 1, 1]})
df = df.set_index('lab')
thresh = 6
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df, thresh):
return (df[lambda x: x['value'] >= thresh] .append(df[lambda x: x['value'] < thresh].sum().rename('X')))
result = g(df.copy(),thresh)
| {
"problem_id": 47,
"library_problem_id": 47,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Origin",
"perturbation_origin_id": 47
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, thresh = data
return df[lambda x: x["value"] >= thresh].append(
df[lambda x: x["value"] < thresh].sum().rename("X")
)
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{"lab": ["A", "B", "C", "D", "E", "F"], "value": [50, 35, 8, 5, 1, 1]}
)
df = df.set_index("lab")
thresh = 6
if test_case_id == 2:
df = pd.DataFrame(
{"lab": ["A", "B", "C", "D", "E", "F"], "value": [50, 35, 8, 5, 1, 1]}
)
df = df.set_index("lab")
thresh = 9
return df, thresh
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df, thresh = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
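A portability caveat: DataFrame.append, used by the reference solution, was deprecated in pandas 1.4 and removed in 2.0. On recent versions the same aggregation can be written with concat; a sketch using the setup above:
small = df[df['value'] < thresh].sum().rename('X')
result = pd.concat([df[df['value'] >= thresh], small.to_frame().T])
small.to_frame().T turns the summed Series into a one-row frame whose index label is 'X', which concat then stacks under the kept rows.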
Problem:
I have a pandas dataframe structured like this:
value
lab
A 50
B 35
C 8
D 5
E 1
F 1
This is just an example; the actual dataframe is bigger, but follows the same structure.
The sample dataframe has been created with these two lines:
df = pd.DataFrame({'lab':['A', 'B', 'C', 'D', 'E', 'F'], 'value':[50, 35, 8, 5, 1, 1]})
df = df.set_index('lab')
I would like to aggregate the rows whose value is bigger than a given threshold: all these rows should be substituted by a single row whose value is the average of the substituted rows.
For example, if I choose a threshold = 6, the expected result should be the following:
value
lab
D 5.0
E 1.0
F 1.0
X 31.0 # avg of A, B, C
How can I do this?
I thought of using groupby(), but all the examples I've seen involved the use of a separate column for grouping, so I do not know how to use it in this case.
I can select the rows bigger than my threshold with loc, by doing df.loc[df['value'] > threshold], but I do not know how to average only these rows and leave the rest of the dataframe unaltered.
A:
<code>
import pandas as pd
df = pd.DataFrame({'lab':['A', 'B', 'C', 'D', 'E', 'F'], 'value':[50, 35, 8, 5, 1, 1]})
df = df.set_index('lab')
thresh = 6
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df, thresh):
return (df[lambda x: x['value'] <= thresh]
.append(df[lambda x: x['value'] > thresh].mean().rename('X')))
result = g(df.copy(),thresh)
| {
"problem_id": 48,
"library_problem_id": 48,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Semantic",
"perturbation_origin_id": 47
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, thresh = data
return df[lambda x: x["value"] <= thresh].append(
df[lambda x: x["value"] > thresh].mean().rename("X")
)
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{"lab": ["A", "B", "C", "D", "E", "F"], "value": [50, 35, 8, 5, 1, 1]}
)
df = df.set_index("lab")
thresh = 6
if test_case_id == 2:
df = pd.DataFrame(
{"lab": ["A", "B", "C", "D", "E", "F"], "value": [50, 35, 8, 5, 1, 1]}
)
df = df.set_index("lab")
thresh = 9
return df, thresh
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df, thresh = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
Problem:
I have a pandas dataframe structured like this:
value
lab
A 50
B 35
C 8
D 5
E 1
F 1
This is just an example; the actual dataframe is bigger, but follows the same structure.
The sample dataframe has been created with these two lines:
df = pd.DataFrame({'lab':['A', 'B', 'C', 'D', 'E', 'F'], 'value':[50, 35, 8, 5, 1, 1]})
df = df.set_index('lab')
I would like to aggregate the rows whose value is not in a given section: all these rows should be substituted by a single row whose value is the average of the substituted rows.
For example, if I choose the section [4, 38], the expected result should be the following:
value
lab
B 35
C 8
D 5
X 17.333 # average of A, E, F
A:
<code>
import pandas as pd
df = pd.DataFrame({'lab':['A', 'B', 'C', 'D', 'E', 'F'], 'value':[50, 35, 8, 5, 1, 1]})
df = df.set_index('lab')
section_left = 4
section_right = 38
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df, section_left, section_right):
return (df[lambda x: x['value'].between(section_left, section_right)]
.append(df[lambda x: ~x['value'].between(section_left, section_right)].mean().rename('X')))
result = g(df.copy(),section_left, section_right)
| {
"problem_id": 49,
"library_problem_id": 49,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Difficult-Rewrite",
"perturbation_origin_id": 47
} | import pandas as pd
import numpy as np
import copy
def generate_test_case(test_case_id):
def generate_ans(data):
data = data
df, section_left, section_right = data
return df[lambda x: x["value"].between(section_left, section_right)].append(
df[lambda x: ~x["value"].between(section_left, section_right)]
.mean()
.rename("X")
)
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame(
{"lab": ["A", "B", "C", "D", "E", "F"], "value": [50, 35, 8, 5, 1, 1]}
)
df = df.set_index("lab")
section_left = 4
section_right = 38
if test_case_id == 2:
df = pd.DataFrame(
{"lab": ["A", "B", "C", "D", "E", "F"], "value": [50, 35, 8, 5, 1, 1]}
)
df = df.set_index("lab")
section_left = 6
section_right = 38
return df, section_left, section_right
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df, section_left, section_right = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
|
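Series.between is inclusive on both endpoints by default, which is why the section [4, 38] keeps B (35), C (8) and D (5) while A, E and F fall outside and are averaged into X ((50 + 1 + 1) / 3 = 17.333). A concat-based sketch of the same selection, using the setup above:
mask = df['value'].between(section_left, section_right)  # inclusive='both' by default
result = pd.concat([df[mask], df[~mask].mean().rename('X').to_frame().T])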
Problem:
Sample dataframe:
df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
I'd like to add inverses of each existing column to the dataframe and name them based on existing column names with a prefix, e.g. inv_A is an inverse of column A and so on.
The resulting dataframe should look like so:
result = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "inv_A": [1/1, 1/2, 1/3], "inv_B": [1/4, 1/5, 1/6]})
Obviously there are clumsy ways to do this, like a loop, but there should be a much more pythonic way, and after searching for some time I didn't find anything. I understand that this is most probably a duplicate; if so, please point me to an existing answer.
A:
<code>
import pandas as pd
df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| def g(df):
return df.join(df.apply(lambda x: 1/x).add_prefix('inv_'))
result = g(df.copy())
| {
"problem_id": 50,
"library_problem_id": 50,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Origin",
"perturbation_origin_id": 50
} | import pandas as pd
import numpy as np
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.join(df.apply(lambda x: 1 / x).add_prefix("inv_"))
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
if test_case_id == 2:
df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
def test_string(solution: str):
tokens = []
for token in tokenize.tokenize(io.BytesIO(solution.encode("utf-8")).readline):
tokens.append(token.string)
assert "while" not in tokens and "for" not in tokens
|
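Because division broadcasts over a whole DataFrame, the apply call is not strictly needed; an equivalent one-line sketch:
result = df.join((1 / df).add_prefix('inv_'))
(1 / df) inverts every element at once, and add_prefix relabels the new columns before the join.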
Problem:
Sample dataframe:
df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
I'd like to add exponentials of each existing column to the dataframe and name them based on existing column names with a prefix, e.g. exp_A is an exponential of column A and so on.
The resulting dataframe should look like so:
result = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "exp_A": [e^1, e^2, e^3], "exp_B": [e^4, e^5, e^6]})
Notice that e is the natural constant (Euler's number).
Obviously there are clumsy ways to do this, like a loop, but there should be a much more pythonic way, and after searching for some time I didn't find anything. I understand that this is most probably a duplicate; if so, please point me to an existing answer.
A:
<code>
import pandas as pd
df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
</code>
result = ... # put solution in this variable
BEGIN SOLUTION
<code>
| import math
def g(df):
return df.join(df.apply(lambda x: math.e**x).add_prefix('exp_'))
result = g(df.copy())
| {
"problem_id": 51,
"library_problem_id": 51,
"library": "Pandas",
"test_case_cnt": 2,
"perturbation_type": "Semantic",
"perturbation_origin_id": 50
} | import pandas as pd
import numpy as np
import math
import copy
import tokenize, io
def generate_test_case(test_case_id):
def generate_ans(data):
df = data
return df.join(df.apply(lambda x: math.e**x).add_prefix("exp_"))
def define_test_input(test_case_id):
if test_case_id == 1:
df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
if test_case_id == 2:
df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
return df
test_input = define_test_input(test_case_id)
expected_result = generate_ans(copy.deepcopy(test_input))
return test_input, expected_result
def exec_test(result, ans):
try:
pd.testing.assert_frame_equal(result, ans, check_dtype=False)
return 1
except:
return 0
exec_context = r"""
import pandas as pd
import numpy as np
df = test_input
[insert]
"""
def test_execution(solution: str):
code = exec_context.replace("[insert]", solution)
for i in range(2):
test_input, expected_result = generate_test_case(i + 1)
test_env = {"test_input": test_input}
exec(code, test_env)
assert exec_test(test_env["result"], expected_result)
def test_string(solution: str):
tokens = []
for token in tokenize.tokenize(io.BytesIO(solution.encode("utf-8")).readline):
tokens.append(token.string)
assert "while" not in tokens and "for" not in tokens
|
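Similarly, numpy's exp ufunc accepts a DataFrame directly, so a shorter sketch is possible (its output may differ from math.e**x only in the last floating-point bits):
result = df.join(np.exp(df).add_prefix('exp_'))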
DS-1000 in simplified format
🔥 Check the leaderboard from Eval-Arena on our project page.
See testing code and more information (also the original fill-in-the-middle/Insertion format) in the DS-1000 repo.
Reformatting credits: Yuhang Lai, Sida Wang
Downloads last month: 636