date | nb_tokens | text_size | content |
---|---|---|---|
2018/03/22 | 1,547 | 4,364 | <issue_start>username_0: I have JavaScript code like so:
```
var myArray = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20];
for (var i = 0, di = 1; i >= 0; i += di) {
if (i == myArray.length - 1) { di = -1; }
document.writeln(myArray[i]);
}
```
I need it to stop right in the middle (at 10) and from 10 start counting back down to 0.
So far, I've managed to make it work from 0 to 20 and from 20 to 0.
How can I stop it in the middle and count back from there?
Please help, anyone!<issue_comment>username_1: Just divide your array length by 2
```js
var myArray = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20];
for (var i = 0, di = 1; i >= 0; i += di) {
if (i == ((myArray.length / 2) -1 )) { di = -1; }
document.writeln(myArray[i]);
}
```
Upvotes: 1 <issue_comment>username_2: Could [Array.reverse()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reverse) help you in this matter?
```js
const array = [0,1,3,4,5,6,7,8,9,10,11,12,13,14,15]
const getArrayOfAmount = (array, amount) => array.filter((item, index) => index < amount)
let arraySection = getArrayOfAmount(array, 10)
let reversed = [...arraySection].reverse()
console.log(arraySection)
console.log(reversed)
```
And then you can "do stuff" with each array with whatever array manipulation you desire.
Upvotes: 0 <issue_comment>username_3: Here is an example using a function which accepts the array and the number of items you want to display forwards and backwards:
```
var myArray = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20];
if(myArray.length === 1){
ShowXElementsForwardsAndBackwards(myArray, 1);
}
else if(myArray.length === 0) {
//Do nothing as there are no elements in array and dividing 0 by 2 would be undefined
}
else {
ShowXElementsForwardsAndBackwards(myArray, (myArray.length / 2));
}
function ShowXElementsForwardsAndBackwards(mYarray, numberOfItems){
if (numberOfItems >= mYarray.length) {
throw "More Numbers requested than length of array!";
}
for(let x = 0; x < numberOfItems; x++){
document.writeln(mYarray[x]);
}
for(let y = numberOfItems - 1; y >= 0; y--){
document.writeln(mYarray[y]);
}
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_4: Couldn’t you just check if you’ve made it halfway and then subtract your current spot from the length?
```
for(i = 0; i <= myArray.length; i++){
if( Math.round(i/myArray.length) == 1 ){
document.writeln( myArray[ myArray.length - i] );
} else {
document.writeln( myArray[i] );
}
}
```
Unless I’m missing something?
Upvotes: 0 <issue_comment>username_5: If you capture the midpoint ( half the length of the array ), just start working your step in the opposite direction.
```js
const N = 20;
let myArray = [...Array(N).keys()];
let midpoint = Math.round(myArray.length/2)
for ( let i=1, step=1; i; i+=step) {
if (i === midpoint)
step *= -1
document.writeln(myArray[i])
}
```
To make things clearer, I've:
* Started the loop iterator variable (`i`) at 1; this also meant the array has an unused `0` value at `0` index; in other words, `myArray[0]==0` that's never shown
* Set the loop terminating condition to `i`, which means when `i==0` the loop will stop because it is *falsy*
* Renamed `di` to `step`, which is more consistent with other terminology
* The `midpoint` uses `Math.round()` to ensure it's the highest integer (midpoint) (e.g., `15/2 == 7.5` but you want it to be 8)
* The `midpoint` is a variable for performance reasons; calculating the midpoint in the loop body is redundant and less efficient since it only needs to be calculated once
* For practical purpose, made sizing the array dynamic using `N`
* Updated to ES6/ES7 -- this is now non-Internet Explorer-friendly [it won't work in IE ;)] primarily due to the use of the spread operator (`...`) ... but that's easily avoidable
Upvotes: 0 <issue_comment>username_6: You could move the checking into the condition block of the for loop.
```js
var myArray = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20];
for (
var i = 0, l = (myArray.length >> 1) - 1, di = 1;
i === l && (di = -1), i >= 0;
i += di
) {
document.writeln(myArray[i]);
}
```
Upvotes: 0 |
2018/03/22 | 613 | 2,099 | <issue_start>username_0: I use Eclipse for Python development and depend on the F2 function key to send lines of code to the console. Recently, F2 has stopped working in my installation of Eclipse Neon. I have tried everything I can think of to get it to work again:
* close and reopen the python module
* close and reopen Eclipse (as recommended on Stackoverflow)
* check the key bindings to make sure F2 is properly bound, unbind it, rebind it, reset to default key bindings
* reboot my computer
* install a new version of Eclipse, Oxygen, twice
In one of the newly installed Oxygens, at least the first time I press F2, it does open the pop-up asking what console to start with, but then after the console is open it does nothing.
In the Neon installation and the other Oxygen installation, F2 just does nothing, not even open a new console when none is active.
Would you have any ideas I can try to get F2 back to work?<issue_comment>username_1: I have the same issue after upgrading to
Eclipse IDE for C/C++ Developers
Version: Oxygen.3 Release (4.7.3)
Build id: 20180308-1800
PyDev for Eclipse 6.3.2.201803171248 org.python.pydev.feature.feature.group username_3
Upvotes: -1 <issue_comment>username_2: I have gotten it back to work, by uninstalling pydev 6.3 and re-installing pydev 6.1. I'm not sure why this works as I was working in 6.2 when the issue first arose. I tried to solve it by upgrading to 6.3, but that didn't work. For some reason, downgrading back to 6.1 now makes it work again.
Upvotes: 1 <issue_comment>username_3: This was a race condition (it was present on previous versions of PyDev, but became more apparent in PyDev 6.2/6.3 due to unrelated changes).
I fixed it at: <https://github.com/fabioz/Pydev/commit/083658f789e2f27f39c4fa6a431ab97371dd4244>... so, should be fixed for 6.4 (note that the release is around 3 weeks away).
Upvotes: 3 <issue_comment>username_4: I fixed it by installing the updates for PyDev. In Eclipse, Help - Check for updates - install pending updates for PyDev. Now I can use F2 to run code from selection in the console.
Upvotes: 0 |
2018/03/22 | 521 | 1,707 | <issue_start>username_0: ```
line = int(input("How many items are in the chart?: "))
for i in range(line + 1):
    for j in range(line):
        number = int(input("How much of this specific item?: "))
        _star = "*"
    print(_star * number)
    break
```
With this code I am trying to take user input for each line to print the specific amount of items on each line. However the code only takes the last input for the specific item and prints that one line. What am I overlooking to print each line for the specified input? |
2018/03/22 | 271 | 1,069 | <issue_start>username_0: What I want to happen is when you run into a cube, the game resets, but this is not working. I can't even get it to log anything in the console when it collides. I have a rigidbody and a collider on both objects, but they still don't work.
```
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;

public class collisionScript : MonoBehaviour {

    void OnCollisonEnter(Collision hit)
    {
        if (hit.collider.tag == ("Wall"))
        {
            SceneManager.LoadScene("gameover");
        }
    }
}
```<issue_comment>username_1: You have it written wrong.
It's `OnCollisionEnter`.
Also:
- Check that the moving gameObject has a Rigidbody component
- Check that the wall object has the tag "Wall" (see the corrected sketch below)
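For reference, a minimal corrected sketch (keeping the question's class name, tag and scene name) could look like this:
```
using UnityEngine;
using UnityEngine.SceneManagement;

public class collisionScript : MonoBehaviour {

    // Correctly spelled callback: OnCollisionEnter
    void OnCollisionEnter(Collision hit)
    {
        if (hit.collider.tag == "Wall")
        {
            SceneManager.LoadScene("gameover");
        }
    }
}
```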
Upvotes: 0 <issue_comment>username_2: You made an error in the method name: you call it `OnCollisonEnter()` while it should be `OnCollisionEnter()`. Unfortunately, but correctly, this kind of error isn't reported by the IDE, because you may want to create a method with that name.
Upvotes: 3 [selected_answer] |
2018/03/22 | 1,510 | 5,456 | <issue_start>username_0: I'm relatively new to PL/SQL and I'm trying to make a list of record objects, but I don't know how to initialize both fields of each record item in the list. For example: in the procedure "newItem", how can I initialize example(1)? With example(1).id_std := integer and example(1).procent := integer? Thanks!
This is how my code looks like :
```
set serveroutput on;
CREATE OR REPLACE PACKAGE newExercise IS
TYPE item IS RECORD(
id_std INTEGER,
procent INTEGER
);
TYPE tabel IS VARRAY(5) OF item;
PROCEDURE newItem (example tabel);
example2 tabel := tabel();
end newExercise;
/
CREATE OR REPLACE PACKAGE BODY newExercise IS
PROCEDURE newItem (example tabel) IS
BEGIN
FOR i IN 1..example.LIMIT LOOP
DBMS_OUTPUT.PUT_LINE(example(i));
end loop;
end newItem;
end newExercise;
/
```<issue_comment>username_1: Record types are for storing the results of queries. So you could do this:
```
declare
recs newExercise.tabel;
begin
select level, level * 0.25
bulk collect into recs
from dual
connect by level <= 5;
newExercise.newItem (recs);
end;
/
```
Note that VARRAY is not a suitable collection type for this purpose, because it's not always possible to predict how many rows a query will return. It's better to use
```
TYPE tabel IS table OF item;
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: When you refer to the record you usually have to specify specific fields. This populates the records with calculated values; to be able to do that I've had to change the procedure argument from the default `IN` direction to `IN OUT`, both in the specification:
```
CREATE OR REPLACE PACKAGE newExercise IS
TYPE item IS RECORD(
id_std INTEGER,
procent INTEGER
);
TYPE tabel IS VARRAY(5) OF item;
PROCEDURE newItem (example IN OUT tabel);
-- ^^^^^^ make in/out to be updateable
-- example2 tabel := tabel(); -- not used
END newExercise;
/
```
and in the body:
```
CREATE OR REPLACE PACKAGE BODY newExercise IS
PROCEDURE newItem (example IN OUT tabel) IS
-- ^^^^^^ make in/out to be updateable
BEGIN
FOR i IN 1..example.LIMIT LOOP
-- extend collection to create new record
example.extend();
-- assign values to record fields
example(i).id_std := i;
example(i).procent := 100 * (1/i);
END LOOP;
END newItem;
END newExercise;
/
```
The `LIMIT` is five, from the definition, but the varray instance is initially empty (from `tabel()`). For population you can loop from 1 to that limit of five, but you have to `extend()` the collection to actually create the record in that position. Records are created with all fields set to null by default. You can then assign values to the fields of each record. (I've just made something up, obviously).
You can then test that with an anonymous block:
```
declare
example newExercise.tabel := newExercise.tabel();
begin
-- call procedure
newExercise.newItem(example);
-- display contents for debuggibg
FOR i IN 1..example.COUNT LOOP
DBMS_OUTPUT.PUT_LINE('Item ' || i
|| ' id_std: ' || example(i).id_std
-- ^^^^^^^ refer to field
|| ' procent: ' || example(i).procent);
-- ^^^^^^^ refer to field
END LOOP;
end;
/
Item 1 id_std: 1 procent: 100
Item 2 id_std: 2 procent: 50
Item 3 id_std: 3 procent: 33
Item 4 id_std: 4 procent: 25
Item 5 id_std: 5 procent: 20
PL/SQL procedure successfully completed.
```
I've put the original loop to display the contents of the array in that block, as you wouldn't generally have that as part of a procedure. You could still use `LIMIT` for that loop, but `COUNT` is safer in case the procedure doesn't fully populate it.
You can also extend once before the loop:
```
PROCEDURE newItem (example IN OUT tabel) IS
BEGIN
-- extend collection to create all new records
example.extend(example.LIMIT);
FOR i IN 1..example.LIMIT LOOP
example(i).id_std := i;
example(i).procent := 100 * (1/i);
END LOOP;
END newItem;
```
If you already know the values you want to assign - and they aren't coming from a table, in which case you'd use username_1's approach - you can just assign to the last created record; this is a rather contrived example:
```
PROCEDURE newItem (example IN OUT tabel) IS
BEGIN
example.extend(); -- first record
example(example.LAST).id_std := 1;
example(example.LAST).procent := 7;
example.extend(); -- second record, left with null fields
example.extend(); -- third record
example(example.LAST).id_std := 3;
example(example.LAST).procent := 21;
example.extend(); -- fourth record, left with null fields
END newItem;
```
and the same anonymous block now gives:
```
Item 1 id_std: 1 procent: 7
Item 2 id_std: procent:
Item 3 id_std: 3 procent: 21
Item 4 id_std: procent:
PL/SQL procedure successfully completed.
```
Notice the null values, and that there is no 5th row.
Or again extend the collection once, and refer to the numbered records directly:
```
PROCEDURE newItem (example IN OUT tabel) IS
BEGIN
example.extend(4);
example(1).id_std := 1;
example(1).procent := 7;
example(3).id_std := 3;
example(3).procent := 21;
END newItem;
```
which gets the same result from the anonymous block.
Upvotes: 0 |
2018/03/22 | 525 | 1,546 | <issue_start>username_0: I have a title tag and I want to make a small border under the title.
I use :after to do this :
```
h1:after{
content: '';
display: block;
height: 4px;
width: 60px;
margin: 9px 0 0 2px;
color:#fff;
}
```
I want the border on the left of the title. It's OK when the title is aligned left, but when I center the title I can't get the border exactly on the left (responsive). If I use margin: 0 auto; the border is in the center of the title.
I have this :
![Actual result](https://i.stack.imgur.com/zXgvp.png)
I want this :
![Exepted result](https://i.stack.imgur.com/aH1aG.png)
Any ideas ?
Thank you !<issue_comment>username_1: If you make your h1 inline-block, you can achieve what you are after:
```css
body {
text-align:center; /* this needs to be on the parent of the h1 */
}
h1 {
/* make this inline-block so it is only as long as the text */
display: inline-block;
}
h1:after {
content: '';
display: block;
height: 4px;
width: 60px;
margin: 9px 0 0 2px;
background: green;
}
```
```html
<h1>Nouveautés</h1>
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: If you need the H1 to stay block, place the text of the H1 within a span like this and you can add the pseudo-element to the span instead.
```css
h1 {
text-align: center;
}
h1 span {
position: relative;
}
h1 span:after {
background: red;
content: '';
display: block;
height: 4px;
position: absolute;
left: 0;
width: 60px;
}
```
```html
<h1><span>Testing</span></h1>
```
Upvotes: 2 |
2018/03/22 | 1,039 | 3,000 | <issue_start>username_0: I wrote the assembler code below, and it builds fine with as and ld directly.
```
as cpuid.s -o cpuid.o
ld cpuid.o -o cpuid
```
But when I use gcc to do the whole procedure, I get the error below.
```
$ gcc cpuid.s -o cpuid
/tmp/cctNMsIU.o: In function `_start':
(.text+0x0): multiple definition of `_start'
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/Scrt1.o:(.text+0x0): first defined here
/usr/bin/ld: /tmp/cctNMsIU.o: relocation R_X86_64_32 against `.data' can not be used when making a shared object; recompile with -fPIC
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/Scrt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
/usr/bin/ld: final link failed: Invalid operation
collect2: error: ld returned 1 exit status
```
Then I modified _start to main, and also added -fPIC to the gcc parameters, but it didn't fix my ld error. The error message changed to the one below.
```
$ gcc cpuid.s -o cpuid
/usr/bin/ld: /tmp/ccYCG80T.o: relocation R_X86_64_32 against `.data' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: error: ld returned 1 exit status
```
I don't understand the meaning of that, since I'm not making a shared object. I just want to make an executable binary.
```
.section .data
output:
.ascii "The processor Vendor ID is 'xxxxxxxxxxxx'\n"
.section .text
.global _start
_start:
movl $0, %eax
cpuid
movl $output, %edi
movl %ebx, 28(%edi)
movl %edx, 32(%edi)
movl %ecx, 36(%edi)
movl $4, %eax
movl $1, %ebx
movl $output, %ecx
movl $42, %edx
int $0x80
movl $1, %eax
movl $0, %ebx
int $0x80
```
**If I modify the above code as below, is it correct, or does it have some side effect for 64-bit asm programming?**
```
.section .data
output:
.ascii "The processor Vendor ID is 'xxxxxxxxxxxx'\n"
.section .text
.global main
main:
movq $0, %rax
cpuid
lea output(%rip), %rdi
movl %ebx, 28(%rdi)
movl %edx, 32(%rdi)
movl %ecx, 36(%rdi)
movq %rdi, %r10
movq $1, %rax
movq $1, %rdi
movq %r10, %rsi
movq $42, %rdx
syscall
```<issue_comment>username_1: As comments have noted, you could work around this by linking your program as non-PIE, but it would be better to fix your asm to be position-independent. If it's 32-bit x86 code that's a bit ugly. This instruction:
```
movl $output, %edi
```
would become:
```
call 1f
1: pop %edi
add $output-1b, %edi
```
for 64-bit it's much cleaner. Instead of:
```
movq $output, %rdi
```
you'd write:
```
lea output(%rip), %rdi
```
Upvotes: 3 <issue_comment>username_2: With NASM I fixed this by putting the line "DEFAULT REL" in the source file (check [nasmdoc.pdf](http://www.nasm.us/xdoc/2.14.02/nasmdoc.pdf) p.76).
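A minimal NASM sketch of that directive (this snippet is illustrative, not taken from the linked manual):
```
default rel            ; bare [symbol] references become RIP-relative

section .data
msg:    db "hello", 10

section .text
global main
main:
    lea rdi, [msg]     ; assembles as lea rdi, [rel msg]
    ret
```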
Upvotes: 2 |
2018/03/22 | 678 | 2,243 | <issue_start>username_0: I'm using Bootstrap 4. I'd like the navbar-brand item (which is just a text element) to be on the far right. The menu (as normal) defaults to the left.
I've tried applying `ml-auto`, `mx-auto`, `mr-auto`, `pull-right`, etc. Nothing does what I want.
`mx-auto` was nice for the small screen. It put the navbar-brand centered when the hamburger menu is there. However, I need something that works when the regular menu is there.
Here is my code:
```
[BSB Feedback](#)
*
* [Give Feedback](/)
* [Manage Feedback](/managefeedback/)
```<issue_comment>username_1:
> how to put the navbar-brand on the right?

Add the `order-md-last` class to it (additionally to `mx-auto` that you experimented with).
That ordering class re-orders the element on screens that are medium (`md`) or larger. On smaller screens, no re-ordering happens and your `mx-auto` class gets applied.
Here's the code snippet with that order class applied:
```html
[BSB Feedback](#)
*
* [Give Feedback](/)
* [Manage Feedback](/managefeedback/)
```
Upvotes: 0 <issue_comment>username_2: You can use the `order-last` class. *However*, you'll probably want the brand to be still first/top on mobile screens, so you can **use order responsively** like this...
**`navbar-brand order-md-last`**
<https://www.codeply.com/go/Vq7ajCEfsg>
```
[BSB Feedback](#)
*
* [Give Feedback](/)
* [Manage Feedback](/managefeedback/)
```
[More on Bootstrap ordering](http://getbootstrap.com/docs/4.0/utilities/flex/#order)
An alternate option is to use **`flex-row-reverse`** [responsively](http://getbootstrap.com/docs/4.0/utilities/flex/#direction) on the parent navbar. This will switch the order of the brand and nav links, but only on the non-mobile menu.
```
...
```
And, if you want to keep the **brand and toggler centered on mobile**, you can wrap them in another div and still center with `mx-auto`: <https://www.codeply.com/go/xXBdCHGAAN>
---
**Related:**
[Bootstrap 4 align navbar items to the right](https://stackoverflow.com/questions/41513463/bootstrap-4-align-navbar-items-to-the-right)
Upvotes: 4 [selected_answer]<issue_comment>username_3: Use .justify-content-md-end
```html
[Brand](#)
```
Upvotes: 1 |
2018/03/22 | 589 | 1,442 | <issue_start>username_0: I have a coordinate text file and I wish to read it into a regionprop. I wish to use the regionprop for few analysis like Centroid. How can I do it?
Code:
```
filename = fullfile('E:/outline.txt');
fileID = fopen(filename);
C = textscan(fileID,'%d %d');
fclose(fileID);
stats = regionprops(C,'Centroid')
```
coordinate text file content is as follow:
```
88 10
87 11
87 12
88 13
88 14
92 21
93 22
93 23
94 24
95 25
100 33
101 34
102 34
103 34
103 33
103 32
103 31
103 30
103 29
103 28
103 27
102 26
102 25
101 24
101 23
100 22
100 21
100 20
99 19
99 18
94 12
93 12
92 12
91 11
90 11
89 10
88 10
```<issue_comment>username_1: You can find [`roipoly`](https://www.mathworks.com/help/images/ref/roipoly.html?requestedDomain=true) useful: this allows you to convert a list of 2d points/polygon vertices into a binary mask.
The resulting binary mask can then be fed to `regionprops`
Upvotes: 0 <issue_comment>username_2: Why don't you just use `centroid`, which was introduced in 2017b?
```
[x,y] = centroid(C);
```
If you are insistent on regionprops (which is slower, and less accurate than operating on the polygon directly) then you are misunderstanding how region props works. Region props works on images. You need to first create an image, then pass the image to region props.
```
bw = roipoly(zeros(120), C(:,1), C(:,2));
stats = regionprops(bw);
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 5,419 | 13,321 | <issue_start>username_0: I'm running two different SQL queries and getting vastly different results:
The Tables:
-----------
*mips*: This table is indexed on `time` and contains a `*_good` and `*_bad` field for each "metric" that I measure (round-trip time, re-transmitted bytes, etc). The fields are: `time`, `rtt_good`, `rtt_bad`, `rexb_good`, `rexb_bad`, `nae_good`, `nae_bad`, etc.
*metrics*: This table is indexed on `time`, `asn` (the network to which we delivered traffic), `cty` (the country in which we delivered that traffic), and `source` (the data center from which we delivered that traffic). So for a single "time" we have hundreds of thousands of rows. Each row tells us the total number of requests served (`reqs`), and various measured metrics about the traffic delivery (`rtt`, `rexb`, `nae`, etc)
The two tables are joined on the `time` column, which contains a UNIX timestamp. All other values are floats.
Goal
----
Given the `rtt_good` (a value for round-trip time which we deem is "good", like 10ms), the `rtt_bad` (a value for round-trip time which we deem is "bad", like 5 seconds), and the `rtt` we can perform linear interpolation to provide a measure of "how good" or "how bad" the RTT is:
```
rtt_mips = (rtt - rtt_good) / (rtt_bad - rtt_good)
```
Since we have data for every possible `asn`, `cty`, and `source` - we often need to aggregate this data to answer more generic questions like "How does our RTT look in Mexico?". When aggregating, we perform a weighted average of the metric - weighted by the number of requests we serviced. For instance, the average RTT in Mexico would be:
```
select sum(rtt * reqs) / sum(reqs) as avg_rtt from metrics where cty = "mx"
```
Now **the issue** is that we don't always service every ASN in every country from every data source at every 5 minute interval. We may have a span of time where our Japan data center isn't serving **any** data to Mexico. This means that when we group these metrics by time, we have a lot of `NULL` rows:
```
+------+---------+
| time | avg_rtt |
+------+---------+
| 1 | 300 |
| 2 | NULL |
| 3 | 400 |
| ... | ... |
```
To fix this, I wish to take the "last known" RTT and copy it down to the next row before computing the "relative good-ness" of the RTT:
```
+------+---------+------------+----------+---------+----------+
| time | avg_rtt | last_known | rtt_good | rtt_bad | rtt_mips |
+------+---------+------------+----------+---------+----------+
| 1 | 300 | 300 | 10 | 5000 | math |
| 2 | NULL | 300 | 10 | 5000 | math |
| 3 | 400 | 400 | 10 | 5000 | math |
| ... | ... | ... | ... | ... | ... |
```
This can be accomplished with a combination of MySQL variables and `COALESCE` like so:
```
select @rtt := coalesce(rtt, @rtt) from metrics
```
If `rtt` is not `NULL`, we use `rtt`. If `rtt` **is** `NULL`, we use the `@rtt` variable which came from the previous row
Put all of that together, and you get **query 1**, below.
However I intend to use the output of this to draw graphs in JavaScript, so I wanted to multiply the `time` column by `1000` (to convert seconds to milliseconds). This results in **query 2**, which had different (and unexpected) behavior.
Query 1:
--------
```
select
mips.time,
@rtt := coalesce(sum(rtt*reqs)/sum(reqs), @rtt) as rtt,
(coalesce(sum(rtt*reqs)/sum(reqs), @rtt) - rtt_good) / (rtt_bad - rtt_good) as rtt_mips
from
mips
left join
(
select * from metrics where asn = '33095' and cty = 'us'
) t1 on mips.time = t1.time
group by time
order by time asc;
```
Result:
```
+------------+-----------------+----------------------+
| time | rtt | rtt_mips |
+------------+-----------------+----------------------+
| 1521731100 | NULL | NULL |
| 1521731400 | NULL | NULL |
| 1521731700 | 12593 | 0.04197666666666667 |
| 1521732000 | 12593 | 0.04197666666666667 |
| 1521732300 | 12593 | 0.04197666666666667 |
| 1521732600 | 12593 | 0.04197666666666667 |
| 1521732900 | 41266.90234375 | 0.13755633333333334 |
| 1521733200 | 41266.90234375 | 0.13755634114583334 |
| 1521733500 | 41266.90234375 | 0.13755634114583334 |
| 1521733800 | 41266.90234375 | 0.13755634114583334 |
| 1521734100 | 41266.90234375 | 0.13755634114583334 |
| 1521734400 | 41266.90234375 | 0.13755634114583334 |
| 1521734700 | 41266.90234375 | 0.13755634114583334 |
| 1521735000 | 14979.439453125 | 0.049931333333333335 |
| 1521735300 | 11812.119140625 | 0.03937366666666667 |
| 1521735600 | 11812.119140625 | 0.03937373046875 |
| 1521735900 | 8738.2314453125 | 0.02912743333333333 |
| 1521736200 | 8738.2314453125 | 0.029127438151041667 |
| 1521736500 | 8738.2314453125 | 0.029127438151041667 |
| 1521736800 | 8738.2314453125 | 0.029127438151041667 |
+------------+-----------------+----------------------+
20 rows in set (0.22 sec)
```
Query 2:
--------
```
select
mips.time * 1000 as time, -- The only line that changed
@rtt := coalesce(sum(rtt*reqs)/sum(reqs), @rtt) as rtt,
(coalesce(sum(rtt*reqs)/sum(reqs), @rtt) - rtt_good) / (rtt_bad - rtt_good) as rtt_mips
from
mips
left join
(
select * from metrics where asn = '33095' and cty = 'us'
) t1 on mips.time = t1.time
group by time
order by time asc;
```
Result:
```
+---------------+-----------------+----------------------+
| time | rtt | rtt_mips |
+---------------+-----------------+----------------------+
| 1521731100000 | NULL | NULL |
| 1521731400000 | NULL | NULL |
| 1521731700000 | 12593 | 0.04197666666666667 |
| 1521732000000 | NULL | NULL |
| 1521732300000 | NULL | NULL |
| 1521732600000 | NULL | NULL |
| 1521732900000 | 41266.90234375 | 0.13755633333333334 |
| 1521733200000 | NULL | NULL |
| 1521733500000 | NULL | NULL |
| 1521733800000 | NULL | NULL |
| 1521734100000 | NULL | NULL |
| 1521734400000 | NULL | NULL |
| 1521734700000 | NULL | NULL |
| 1521735000000 | 14979.439453125 | 0.049931333333333335 |
| 1521735300000 | 11812.119140625 | 0.03937366666666667 |
| 1521735600000 | NULL | NULL |
| 1521735900000 | 8738.2314453125 | 0.02912743333333333 |
| 1521736200000 | NULL | NULL |
| 1521736500000 | NULL | NULL |
| 1521736800000 | NULL | NULL |
+---------------+-----------------+----------------------+
20 rows in set (0.41 sec)
```
Question:
---------
Why is it that when I change the `time` column to be `time * 1000` my variable stops getting set properly and my query starts returning `NULL`s?
Version info:
-------------
```
mysql> select version();
+-----------------+
| version() |
+-----------------+
| 10.1.26-MariaDB |
+-----------------+
1 row in set (0.10 sec)
```
Response to @whoami
-------------------
First, the results of the following query:
```
mysql> select * from mips where time = 1521731700000;
Empty set (0.15 sec)
```
And a similar query:
```
mysql> select * from mips where time = 1521731700;
+------------+----------+---------+-----------+----------+----------+---------+-----------+----------+---------+--------+---------+--------+
| time | rtt_good | rtt_bad | rexb_good | rexb_bad | nae_good | nae_bad | util_good | util_bad | fb_good | fb_bad | or_good | or_bad |
+------------+----------+---------+-----------+----------+----------+---------+-----------+----------+---------+--------+---------+--------+
| 1521731700 | 0 | 300000 | 0 | 40 | 25 | 100 | 0 | 80 | 0 | 100 | 0 | 100 |
+------------+----------+---------+-----------+----------+----------+---------+-----------+----------+---------+--------+---------+--------+
1 row in set (0.10 sec)
```
Then I tried grouping by `rtt_good` and `rtt_bad`, as well as multiplying the `time` column for `metrics` by 1000
Query:
```
select
mips.time * 1000 as time,
@rtt := coalesce(sum(rtt*reqs)/sum(reqs), @rtt) as rtt,
(coalesce(sum(rtt*reqs)/sum(reqs), @rtt) - rtt_good) / (rtt_bad - rtt_good) as rtt_mips
from
mips
left join
(
select time * 1000 as time, rtt, reqs from metrics where asn = '33095' and cty = 'us'
) t1 on mips.time = t1.time
group by time, rtt_good, rtt_bad
order by time asc;
```
Result:
```
+---------------+------+----------+
| time | rtt | rtt_mips |
+---------------+------+----------+
| 1521731100000 | NULL | NULL |
| 1521731400000 | NULL | NULL |
| 1521731700000 | NULL | NULL |
| 1521732000000 | NULL | NULL |
| 1521732300000 | NULL | NULL |
| 1521732600000 | NULL | NULL |
| 1521732900000 | NULL | NULL |
| 1521733200000 | NULL | NULL |
| 1521733500000 | NULL | NULL |
| 1521733800000 | NULL | NULL |
| 1521734100000 | NULL | NULL |
| 1521734400000 | NULL | NULL |
| 1521734700000 | NULL | NULL |
| 1521735000000 | NULL | NULL |
| 1521735300000 | NULL | NULL |
| 1521735600000 | NULL | NULL |
| 1521735900000 | NULL | NULL |
| 1521736200000 | NULL | NULL |
| 1521736500000 | NULL | NULL |
| 1521736800000 | NULL | NULL |
+---------------+------+----------+
20 rows in set (0.17 sec)
```
Because the time `1521736800000` does not exist in the `mips` table, it failed to properly join.
Interesting Discovery
---------------------
Even if I don't multiply the `time` column by `1000`, if I add the additional `group by` columns then the query still fails to operate how I expect:
```
select
mips.time,
@rtt := coalesce(sum(rtt*reqs)/sum(reqs), @rtt) as rtt,
(coalesce(sum(rtt*reqs)/sum(reqs), @rtt) - rtt_good) / (rtt_bad - rtt_good) as rtt_mips
from
mips
left join
(
select time, rtt, reqs from metrics where asn = '33095' and cty = 'us'
) t1 on mips.time = t1.time
group by time, rtt_good, rtt_bad
order by time asc;
```
Result:
```
+------------+-----------------+----------------------+
| time | rtt | rtt_mips |
+------------+-----------------+----------------------+
| 1521731100 | NULL | NULL |
| 1521731400 | NULL | NULL |
| 1521731700 | 12593 | 0.04197666666666667 |
| 1521732000 | NULL | NULL |
| 1521732300 | NULL | NULL |
| 1521732600 | NULL | NULL |
| 1521732900 | 41266.90234375 | 0.13755633333333334 |
| 1521733200 | NULL | NULL |
| 1521733500 | NULL | NULL |
| 1521733800 | NULL | NULL |
| 1521734100 | NULL | NULL |
| 1521734400 | NULL | NULL |
| 1521734700 | NULL | NULL |
| 1521735000 | 14979.439453125 | 0.049931333333333335 |
| 1521735300 | 11812.119140625 | 0.03937366666666667 |
| 1521735600 | NULL | NULL |
| 1521735900 | 8738.2314453125 | 0.02912743333333333 |
| 1521736200 | NULL | NULL |
| 1521736500 | NULL | NULL |
| 1521736800 | NULL | NULL |
+------------+-----------------+----------------------+
20 rows in set (0.12 sec)
```
I feel like I've run into a strange edge case with how the storage engine is optimizing these queries.<issue_comment>username_1: Change the query to this. You must initialize the variable before you can calculate with it, else it is NULL:
```
select
mips.time,
@rtt := coalesce(sum(rtt*reqs)/sum(reqs), @rtt) as rtt,
(coalesce(rtt, @rtt) - rtt_good) / (rtt_bad - rtt_good) as rtt_mips
from
mips
left join
(
select * from metrics where asn = '33095' and cty = 'us'
) t1 on mips.time = t1.time
cross join ( select @rtt := 0 ) as init
group by time
order by time asc;
```
Upvotes: 0 <issue_comment>username_2: I think something like this should work a bit more predictably:
```
SELECT mips.time * 1000 AS mips_time,
@prev_rtt := coalesce(m_sum.weighted_rtt, @prev_rtt) as rtt,
(coalesce(m_sum.weighted_rtt, @prev_rtt) - rtt_good) / (rtt_bad - rtt_good) as rtt_mips
FROM
mips
LEFT JOIN
(
SELECT m.time, sum(m.rtt*m.reqs)/sum(m.reqs) AS weighted_rtt
FROM metrics AS m
WHERE m.asn = '33095' and m.cty = 'us'
GROUP BY m.time
) AS m_sum ON mips.time = m_sum.time
ORDER BY mips.time asc;
```
In my experience, `(@prev_rtt - rtt_good) / (rtt_bad - rtt_good) as rtt_mips` should work as well in this query, as the previous expression for `as rtt` should have assigned `@prev_rtt`; but that is venturing into "behaves this way, but not actually guaranteed by MySQL" territory as MySQL does not guarantee order of evaluation of select expressions.
Upvotes: 2 [selected_answer] |
2018/03/22 | 744 | 2,340 | <issue_start>username_0: I am a SQL Server newbie. I am trying to create test data.
I have a table that contains 10,000 part numbers(Table1).
I have another table that contains warehouses(Table2).
I have a third table(Table3) that will contain a row for every part number/warehouse combination.
Table1 will contain the part numbers, Table2 will contain the 6 warehouses and Table 3 will have a row for each part number/warehouse. That means I will end up with 60,000 rows in Table3. I have looked through all the JOINs and can't seem to find one that does the work. What I want to do is load Table3 with all the part number/warehouse rows with a starting value of 100 in a column called On\_Hand. Thank you for your assistance.<issue_comment>username_1: ```
INSERT INTO TABLE_3 ( Part_No, Location)
SELECT Part_No, Location from InventoryTable CROSS JOIN LocationTable
order by Part_No
```
Upvotes: 1 <issue_comment>username_2: Generate some data:
```
DECLARE @partNumbers TABLE
(
PartNumber INT
)
DECLARE @warehouses TABLE
(
Warehouse VARCHAR(20)
)
DECLARE @partNumberStart INT = 100
-- partnumbers 10000 starting at @partNumberStart -- for testing.
INSERT INTO @partNumbers
SELECT @partNumberStart + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS ID FROM
(VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) x1(x),
(VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) x2(x),
(VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) x3(x),
(VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) x4(x)
--Warehouses 6 generated for testing
INSERT INTO @warehouses
SELECT x1.x + CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS VARCHAR(20)) AS ID FROM
(VALUES('Warehouse ')) x1(x),
(VALUES(0),(1),(2),(3),(4),(5)) x2(x) --6
--(VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) x2(x), --10
```
Query the data:
```
SELECT * FROM @warehouses LEFT JOIN @partNumbers on 1=1
--OR
SELECT * FROM @warehouses CROSS JOIN @partNumbers
--OR
SELECT * FROM @warehouses, @partNumbers
```
Upvotes: 0 <issue_comment>username_3: You want `cross join`, used like this:
```
insert into inventory (part_no, warehouse_id, on_hand) -- table_3
select p.Part_No, w.warehouse_id, 100
from Parts p cross join -- table_1
     Warehouses w; -- table_2
```
Upvotes: 0 |
2018/03/22 | 1,684 | 5,629 | <issue_start>username_0: I'm working on a benchmark program. Upon making the `read()` system call, the program appears to hang indefinitely. The target file is 1 GB of binary data and I'm attempting to read directly into buffers that can be 1, 10 or 100 MB in size.
I'm using `std::vector` to implement dynamically-sized buffers and handing off `&vec[0]` to `read()`. I'm also calling `open()` with the `O_DIRECT` flag to bypass kernel caching.
The essential coding details are captured below:
```
std::string fpath{"/path/to/file"};
size_t tries{};
int fd{};
while (errno == EINTR && tries < MAX_ATTEMPTS) {
fd = open(fpath.c_str(), O_RDONLY | O_DIRECT | O_LARGEFILE);
tries++;
}
// Throw exception if error opening file
if (fd == -1) {
ostringstream ss {};
switch (errno) {
case EACCES:
ss << "Error accessing file " << fpath << ": Permission denied";
break;
case EINVAL:
ss << "Invalid file open flags; system may also not support O_DIRECT flag, required for this benchmark";
break;
case ENAMETOOLONG:
ss << "Invalid path name: Too long";
break;
case ENOMEM:
ss << "Kernel error: Out of memory";
}
throw invalid_argument {ss.str()};
}
size_t buf_sz{1024*1024}; // 1 MiB buffer
std::vector<char> buffer(buf_sz); // Creates vector pre-allocated with buf_sz chars (bytes)
// Result is 0-filled buffer of size buf_sz
auto bytes_read = read(fd, &buffer[0], buf_sz);
```
Poking through the executable with gdb shows that buffers are allocated correctly, and the file I've tested with checks out in xxd. I'm using g++ 7.3.1 (with C++11 support) to compile my code on a Fedora Server 27 VM.
Why is `read()` hanging on large binary files?
Edit: Code example updated to more accurately reflect error checking.<issue_comment>username_1:
> Most examples of `read()` hanging appear to be when using pipes or non-standard I/O devices (e.g., serial). Disk I/O, not so much.

`O_DIRECT` flag is useful for filesystems and block devices. With this flag people normally map pages into the user space.
For sockets, pipes and serial devices it is plain useless because the kernel does not cache that data.
---
Your updated code hangs because `fd` is initialized with `0` which is `STDIN_FILENO` and it never opens that file, then it hangs reading from `stdin`.
Upvotes: 0 <issue_comment>username_2: Pasting your program and running on my linux system, was a working and non-hanging program.
The most likely cause for the failure is the file is not a file-system item, or it has a hardware element which is not working.
Try with a smaller size - to confirm, and try on a different machine to help diagnose
My complete code (with no error checking)
```
#include <string>
#include <vector>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

int main( int argc, char ** argv )
{
    std::string fpath{"myfile.txt"};
    auto fd = open(fpath.c_str(), O_RDONLY | O_DIRECT | O_LARGEFILE);
    size_t buf_sz{1024*1024}; // 1 MiB buffer
    std::vector<char> buffer(buf_sz); // Creates vector pre-allocated with buf_sz chars (bytes)
    // Result is 0-filled buffer of size buf_sz
    auto bytes_read = read(fd, &buffer[0], buf_sz);
}
```
myfile.txt was created with
```
dd if=/dev/zero of=myfile.txt bs=1024 count=1024
```
* If the file is not 1Mb in size, it may fail.
* If the file is a pipe, it can block until the data is available.
Upvotes: 0 <issue_comment>username_3: There are multiple problems with your code.
This code will never work properly if `errno` ever has a value equal to `EINTR`:
```
while (errno == EINTR && tries < MAX_ATTEMPTS) {
fd = open(fpath.c_str(), O_RDONLY | O_DIRECT | O_LARGEFILE);
tries++;
}
```
That code won't stop when the file has been successfully opened and will keep reopening the file over and over and leak file descriptors as it keeps looping once `errno` is `EINTR`.
This would be better:
```
do
{
fd = open(fpath.c_str(), O_RDONLY | O_DIRECT | O_LARGEFILE);
tries++;
}
while ( ( -1 == fd ) && ( EINTR == errno ) && ( tries < MAX_ATTEMPTS ) );
```
Second, as noted in the comments, `O_DIRECT` can impose alignment restrictions on memory. You might need page-aligned memory:
So
```
size_t buf_sz{1024*1024}; // 1 MiB buffer
std::vector<char> buffer(buf_sz); // Creates vector pre-allocated with buf_sz chars (bytes)
// Result is 0-filled buffer of size buf_sz
auto bytes_read = read(fd, &buffer[0], buf_sz);
```
becomes
```
size_t buf_sz{1024*1024}; // 1 MiB buffer
// page-aligned buffer
void *buffer = mmap(nullptr, buf_sz, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); // requires <sys/mman.h>
auto bytes_read = read(fd, buffer, buf_sz);
```
Note also that the Linux implementation of `O_DIRECT` can be very dodgy. It's been getting better, but there are still potential pitfalls that aren't very well documented at all. Along with alignment restrictions, if the last amount of data in the file isn't a full page, for example, you may not be able to read it if the filesystem's implementation of direct IO doesn't allow you to read anything but full pages (or some other block size). Likewise for `write()` calls - you may not be able to write just any number of bytes, you might be constrained to something like a 4k page.
**This is also critical:**
>
> Most examples of read() hanging appear to be when using pipes or non-standard I/O devices (e.g., serial). Disk I/O, not so much.

Some devices simply do not support direct IO. They should return an error, but again, the O\_DIRECT implementation on Linux can be very hit-or-miss.
Upvotes: 3 [selected_answer] |
2018/03/22 | 1,372 | 4,522 | <issue_start>username_0: I'm trying to parse Oxford Dictionary in order to obtain the etymology of a given word.
```
class SkipException(Exception):
    def __init__(self, value):
        self.value = value

try:
    doc = lxml.html.parse(urlopen('https://en.oxforddictionaries.com/definition/%s' % "good"))
except SkipException:
    doc = ''

if doc:
    table = []
    trs = doc.xpath("//div[1]/div[2]/div/div/div/div[1]/section[5]/div/p")
```
I cannot seem to work out how to obtain the string of text I need. I know I lack some lines of code in the ones I have copied but I don't know how HTML nor LXML fully works. I would much appreciate if someone could provide me with the correct way to solve this. |
2018/03/22 | 1,298 | 4,084 | <issue_start>username_0: **Headnote:** I am having trouble removing spacing from around [Google's Material Design icons](https://material.io/), and cannot seem to find any solutions on Google or [the Material Design icons guide](http://google.github.io/material-design-icons/). I am not sure whether the answer is blatantly simple and I'm missing it, or whether there is a more profound reason as to why I am unable to accomplish a seemingly simple task.
Below you can find extracts from the relevant code in my project, or, alternatively, you can [view my full project here](https://github.com/NeocryptNetwork/neocryptnetwork.github.io).
* My markup,
```
*menu*
**Neocrypt**
Network
=========================
```
* the icon styling,
```
.material-icons.primary-header-material-icon-first-menu {
color: var(--primary-typeface-color);
font-size: 48px;
}
```
* the heading styling, and
```
.primary-header h1 {
text-align: center;
color: var(--primary-typeface-color);
display: inline;
font-family: var(--primary-typeface);
font-size: 60px;
line-height: 150px;
}
```
* the referenced variables (unrelated).
```
:root {
--primary-typeface-color: #ffffff;
--primary-typeface: 'Lato', sans-serif;
}
```
I would like the icon to appear directly beside the heading with no padding around the icon so that I can add spacing around the elements myself, almost like a reset! I've tried using `padding: 0px;`, in addition to a few other solutions to try and resolve the issue, however, it was to no avail.
**Footnote:** I am using [Eric Meyer's "Reset CSS"](https://meyerweb.com/eric/tools/css/reset), however, to my knowledge, this should have no effect on Google's Material Design icons.
---
**Update (24/03/2018 01:33 UTC):** It seems as though Google adds spacing around the icon in the image file itself, giving users no option to format said spacing. If anyone else has this same problem, I would recommend that you use another icon font, such as [Font Awesome](https://fontawesome.com/).<issue_comment>username_1: I tackled this problem by applying a **negative margin**. It works... but the way Font Awesome solved this is awesome, totally agree with @Michael Burns.
When applying the negative margin, the px will depend on the icon size and the specific icon. But at least it is still consistent in different browsers.
```css
.material-icons.primary-header-material-icon-first-menu {
margin-left: -2px;
}
```
Upvotes: 1 <issue_comment>username_2: What I did was wrap the `icon` in a `span`, give it a fixed `height` and `width`, and then all I had to do was hide the `overflow`.
That's how it looks in my browser.
[![result](https://i.stack.imgur.com/3qkbb.png)](https://i.stack.imgur.com/3qkbb.png)
An example for removing the white space from the icon.
```css
.print-element {
min-width: 175px;
min-height: 45px;
border: solid 1px #ccc;
color: rgba(0, 0, 0, 0.87);
display: inline-flex;
justify-content: center;
align-items: center;
text-align: center;
background: #fff;
border-radius: 4px;
margin-right: 25px 25px 15px 0px;
cursor: move;
position: relative;
z-index: 1;
box-sizing: border-box;
padding: 10px 50px 10px 10px;
transition: box-shadow 200ms cubic-bezier(0, 0, 0.2, 1);
box-shadow: 0 3px 1px -2px rgba(0, 0, 0, 0.2), 0 2px 2px 0 rgba(0, 0, 0, 0.14), 0 1px 5px 0 rgba(0, 0, 0, 0.12);
}
.resize-circle {
position: relative;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
background-color: white;
border: .1px solid white;
color: #aaa;
cursor: pointer;
}
span {
width: 20px;
height: 20px;
background: white;
position: absolute;
top: -7px;
border-radius: 50%;
right: -7px;
overflow: hidden;
}
```
```html
Tag Number
*highlight\_off*
```
Upvotes: 2 <issue_comment>username_3: Removing the padding manually is not a scalable solution, so I just created a tool to remove the padding from all the icons in the set. It does require you to create a new icon set, but it might be helpful:
<https://github.com/jgillick/IconsetCropper>
Upvotes: 0 |
2018/03/22 | 336 | 1,396 | <issue_start>username_0: I'm trying to find out the timeout of the Apache HttpClient. The doc file\* says that the default timeout for HTTP connections is the "system default" timeout. But what is the "system default"? And how can I find out what the value for the "system default" timeout is set to?
\*"A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default).
Default: -1"(<https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.html#getConnectTimeout()>)<issue_comment>username_1: According to the `documentation`, the `http.socket.timeout` parameter controls the `SO_TIMEOUT` value
**AND**
You can set default timeout by setParameter() method of HttpClient,
```
HttpClient httpclient = new HttpClient();
httpclient.getParams().setParameter("http.protocol.version",HttpVersion.HTTP_1_1);
httpclient.getParams().setParameter("http.socket.timeout", new Integer(1000));
httpclient.getParams().setParameter("http.protocol.content-charset", "UTF-8");
```
Upvotes: 0 <issue_comment>username_2: "System default" in this particular situation means whatever socket timeout value is set by the Java runtime. If the socket timeout configuration parameter is undefined, HttpClient makes no attempt to control the SO_TIMEOUT setting on connection sockets.
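If you would rather not rely on that system default, a sketch using HttpClient 4.x's `RequestConfig` (the 5-second values below are arbitrary) looks like this:
```
import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class TimeoutExample {
    public static void main(String[] args) {
        // Explicit connect and socket timeouts, so nothing depends on the
        // system-default SO_TIMEOUT behaviour.
        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(5000)   // milliseconds
                .setSocketTimeout(5000)    // milliseconds
                .build();
        CloseableHttpClient client = HttpClients.custom()
                .setDefaultRequestConfig(config)
                .build();
    }
}
```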
Upvotes: 2 [selected_answer] |
2018/03/22 | 524 | 1,842 | <issue_start>username_0: I have converted the video into 3 formats: .mp4, .webm and .ogv, but the background video is still not playing in the Safari browser.
```
```
page url is <http://gnxtsystems.com/cookie-test/>
Please help me to fix it. Thanks in advance.<issue_comment>username_1: It might be because of the mime type. Try only mp4 file. And for some reason, videos would not play on iPad unless I set the controls="true" flag.
Example: This worked for me on iPhone but not iPad.
```
```
And this now works on both iPad and iPhone:
```
```
Upvotes: 1 <issue_comment>username_2: You need to use a poster attribute according to standards, and must not load the video background on mobile devices. Then, an ogv is a webm format, so you'll need to use:
```
```
No need to load webm, as you'll load ogv, and only if it's not possible to load ogv will it load mp4. The poster attribute is shown while the video loads, and on mobile devices it should be used as the background without loading the video at all, following mobile-first design, so you don't waste visitors' data and you gain load time.
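A generic sketch of that source order plus a poster image (the file names are placeholders, not the asker's):
```html
<video autoplay muted loop poster="poster.jpg">
  <source src="background.ogv" type="video/ogg">
  <source src="background.mp4" type="video/mp4">
</video>
```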
**EDIT:**
And always try to use names without spaces when working on the web:
```
videos/2.0-Welcome-to-DISTRO_1 (1).ogv
should be:
videos/2.0-Welcome-to-DISTRO_1_1.ogv
```
Here is a working example that you can inspect:
<http://joelbonetr.com/>
Upvotes: 0 <issue_comment>username_3: Try these two things..
Add the playsinline attribute in the video tag, like this:
```
```
and secondly for apple devices you will have to turn off the low power mode.
then check...it will work
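For illustration, a background-video tag with that attribute might look like this (the other attributes and the file name are placeholders):
```html
<!-- muted is needed for autoplay on iOS; playsinline keeps it from going fullscreen -->
<video autoplay muted loop playsinline>
  <source src="background.mp4" type="video/mp4">
</video>
```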
Upvotes: 4 <issue_comment>username_4: It's quite simple if you are using it in React. You just need to enable it to play inline and disable "picture in picture" feature.
```
```
Upvotes: 2 <issue_comment>username_5: You can try the WEBM format, hope it helps!
```
```
Upvotes: 0 |
2018/03/22 | 1,136 | 3,669 | <issue_start>username_0: For a dataframe containing a mix of string and numeric datatypes, the goal is to create a new `features` column that is a `minhash` of all of them.
While this could be done by performing a `dataframe.toRDD` it is expensive to do that when the next step will be to simply convert the `RDD` *back* to a dataframe.
So is there a way to do a `udf` along the following lines:
```
val wholeRowUdf = udf( (row: Row) => computeHash(row))
```
`Row` is not a `spark sql` datatype of course - so this would not work as shown.
**Update/clarifiction** I realize it is easy to create a full-row UDF that runs inside `withColumn`. What is not so clear is what can be used inside a `spark sql` statement:
```
val featurizedDf = spark.sql("select wholeRowUdf( what goes here? ) as features
from mytable")
```<issue_comment>username_1: I came up with a workaround: drop the column names into any existing `spark sql` function to generate a new output column:
```
concat(${df.columns.tail.mkString(",'-',")}) as Features
```
In this case the first column in the dataframe is a target and was excluded. That is another advantage of this approach: the actual list of columns may be manipulated.
This approach avoids unnecessary restructuring of the RDD/dataframes.
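A slightly fuller sketch of the same idea (hypothetical variable names; it assumes, as above, that the first column is the target):
```
// Build the SQL expression from the dataframe's own column list,
// skipping the first (target) column, then evaluate it with selectExpr.
val featureExpr = s"concat(${df.columns.tail.mkString(",'-',")}) as Features"
val featurizedDf = df.selectExpr(df.columns.head, featureExpr)
```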
Upvotes: 2 [selected_answer]<issue_comment>username_2:
> Row is not a spark sql datatype of course - so this would not work as shown.

**I am going to show that you can use Row to pass all the columns or selected columns to a udf function using struct inbuilt function**
First I define a `dataframe`
```
val df = Seq(
("a", "b", "c"),
("a1", "b1", "c1")
).toDF("col1", "col2", "col3")
// +----+----+----+
// |col1|col2|col3|
// +----+----+----+
// |a |b |c |
// |a1 |b1 |c1 |
// +----+----+----+
```
Then I define *a function to make all the elements in a row as one string separated by `,`* (as you have computeHash function)
```
import org.apache.spark.sql.Row
def concatFunc(row: Row) = row.mkString(", ")
```
Then I use it in `udf` function
```
import org.apache.spark.sql.functions._
def combineUdf = udf((row: Row) => concatFunc(row))
```
Finally I call the `udf` function using the `withColumn` function and the `struct` *inbuilt function*, combining selected columns into one column and passing it to the `udf` function
```
df.withColumn("contcatenated", combineUdf(struct(col("col1"), col("col2"), col("col3")))).show(false)
// +----+----+----+-------------+
// |col1|col2|col3|contcatenated|
// +----+----+----+-------------+
// |a |b |c |a, b, c |
// |a1 |b1 |c1 |a1, b1, c1 |
// +----+----+----+-------------+
```
So you can see that *Row can be used to pass whole row as an argument*
You can even **pass all columns in a row at once**
```
val columns = df.columns
df.withColumn("contcatenated", combineUdf(struct(columns.map(col): _*)))
```
**Updated**
You can *achieve the same with sql queries* too, you just *need to register the udf function* as
```
df.createOrReplaceTempView("tempview")
sqlContext.udf.register("combineUdf", combineUdf)
sqlContext.sql("select *, combineUdf(struct(`col1`, `col2`, `col3`)) as concatenated from tempview")
```
It will give you the same result as above
Now if you *don't want to hardcode the names of columns then you can select the column names according to your desire and make it a string*
```
val columns = df.columns.map(x => "`"+x+"`").mkString(",")
sqlContext.sql(s"select *, combineUdf(struct(${columns})) as concatenated from tempview")
```
I hope the answer is helpful
Upvotes: 4 |
2018/03/22 | 1,487 | 5,583 | <issue_start>username_0: I searched and could not find this answer anywhere else. Apologies if it's a bit of a noob question for someone not that experienced at scripting.
I'm trying to create a bash script to setup a server with all software needed for an application. In short, when installing docker I need to add the current user to the docker group 'usermod -aG docker ', and from there, pull some containers.
The problem I have is that because I've added the user to a group, they need to be logged out and back in again before they have any permission to do anything later in the script. This of course breaks the script and ends the shell session.
Is there a way to log out and back in again within the same script, or do things need to get a little more complicated?
Appreciate anyone's help on this. Hope it's a simple answer for someone.<issue_comment>username_1: As far as I know - NO.
To avoid this problem, I personally use ansible (ansible uses SSH). I break connection after adding user to group, reconnect, and continue script.
Upvotes: 0 <issue_comment>username_2: You can't upgrade groups for the current script. You have to log in again.
However, that does not mean you have to log out first.
You can use any command like `sudo`, `sg` or `su` that technically creates a new session:
```
usermod -aG docker "$USER" # Add to group
sg "$(id -gn)" -c "groups" # Create new session and show groups including that one
```
Upvotes: -1 <issue_comment>username_3: The user needs to be login again and then only the member added to the new group will have permissions to access the files/folders or run any command.
The link <https://unix.stackexchange.com/questions/6387/i-added-a-user-to-a-group-but-group-permissions-on-files-still-have-no-effect/11573#11573> gives a detailed explanation why this needs to be done.
You need to use any of the way to login again and then only you can proceed with your script.
In case you need to use only a single user, then you can use the below command to login again:
```
su -l USERNAME
```
This will ask for a password. To use the above command without a password, you can use the `pam` authentication module: add the lines below after `pam_rootok.so` in the file `/etc/pam.d/su`
```
auth [success=ignore default=1] pam_succeed_if.so user = USERNAME
auth sufficient pam_succeed_if.so use_uid user = USERNAME
```
I believe this can be done via your current script only and there won't be any need to break the script.
(NOTE: Please replace the `USERNAME` in the above commands with your actual username)
Upvotes: 0 <issue_comment>username_4: Use the **newgrp** command to login to a new group.
The way newgrp works is that it changes the group identification of its caller, analogously to login. The same person remains logged in, and the current directory is unchanged, but calculations of access permissions to files are performed with respect to the new group ID.
So for your case, you’ll use:
```
# usermod -aG docker user
# newgrp docker
```
Check your new primary group, it should be docker:
```
$ id -g
989
```
Confirm from /etc/group
```
$ cat /etc/group | grep `id -g`
docker:x:989:jmutai
```
This should do the trick.
Upvotes: 4 <issue_comment>username_5: ```
newgrp << END
my_command
END
```
see my answer here:
<https://superuser.com/questions/272061/reload-a-linux-users-group-assignments-without-logging-out#>
Upvotes: 0 <issue_comment>username_6: I worked around this issue by setting the setgid flag on the `docker` binary:
```
sudo chgrp docker $(which docker)
sudo chmod g+s $(which docker)
```
The first line changes the group of the `docker` binary to the `docker` group. The second line enables the setgid flag, which means that when you run this binary, the process runs with the file's group (which we just set to `docker`) as its effective group.
This is a security issue because it makes it so that effectively everyone is in the `docker` group, but I did this inside of a container where the only user is the one that I want to add to the `docker` group anyway. So this solution is only good for specific cases, but in those cases it seems to work well.
Upvotes: 2 <issue_comment>username_7: use this command:
```
exec su -l $USER
```
or:
```
exec sudo su -l $USER
```
Upvotes: 2 <issue_comment>username_8: At 2023-08-12, the highest-voted answer to this question proposes using `newgrp docker`. `newgrp docker` will set that user's *primary group* to the group specified (you can verify this by running `groups` and seeing that `docker` **is the first value** returned).
But the question is looking for an alternative to logging out and logging back in again. Logging out and logging back in again **will not** modify the *primary group* of that user - it will have the effect of updating the list of *secondary groups* to include the previously added group. (Again, you can verify this by running `groups` and seeing that `docker` is again in the list of groups returned, but it is **not the first value**). Hence running `newgrp` *will not* have the same effect as logging out and logging back in again. Running `newgrp` and logging out and logging back in again will result in 2 different group membership lists.
If you want to update that user's list of *secondary groups* to include the previously added group (i.e. if you want a command that has the same effect as logging out and logging back in again) try: `exec sg docker newgrp`
HTH.
(For a bonus point, also note that (unlike `su`) `exec sg docker newgrp` works *without* requiring a password.)
Upvotes: 0 |
2018/03/22 | 708 | 2,221 | <issue_start>username_0: Here we are calling the sizeof operator on the derived class WData1. As far as I know, the base class constructor (Persistent) will be called first. At that point WData1 doesn't exist yet, because the Persistent constructor is being called and the Data constructor is waiting for its turn.
```
class WData1 : public Persistent, public Data {
public:
    WData1(float f0 = 0.0, float f1 = 0.0, float f2 = 0.0)
        : Data(f0, f1, f2), Persistent(sizeof(WData1)) {}
};
```
**My question is how sizeof will behave on derived class which doesn't exist yet?**<issue_comment>username_1: >
> My question is how sizeof will behave on derived class which doesn't
> exist yet?
>
>
>
The class already exists so `sizeof` has no problem with it. `sizeof` is a compile-time construct, all the information it needs it already has when you're compiling.
It is true that the `Persistent` constructor is called before the `WData1` constructor but that doesn't matter since the type information of `WData1` is already known. A constructor call is a run-time construct, `sizeof` isn't.
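A minimal sketch (with simplified stand-ins for `Persistent` and `Data`, not the original classes) that shows `sizeof` being resolved by the compiler before any object exists:

```
#include <cstddef>

struct Persistent {
    explicit Persistent(std::size_t n) : size(n) {}
    std::size_t size;
};

struct Data { float a, b, c; };

struct WData1 : Persistent, Data {
    // sizeof(WData1) is already a compile-time constant here
    WData1() : Persistent(sizeof(WData1)), Data() {}
};

// evaluated at compile time, long before any constructor runs
static_assert(sizeof(WData1) >= sizeof(Persistent) + sizeof(Data),
              "sizeof is known at compile time");
```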
Upvotes: 1 <issue_comment>username_2: >
> My question is how sizeof will behave on derived class which doesn't exist yet?
>
>
>
`sizeof(WData1)` is the size of class `WData1` (including parent base `Persistent` and `Data`).
Even if you create
```
struct Derived : WData1
{
char BigBuffer[1024 * 42];
};
```
`sizeof(WData1)` won't change.
You might be tempted to use `sizeof(*this)`
```
class WData1 : public Persistent, public Data {
public:
WData1(float f0 = 0.0, float f1 = 0.0, float f2 = 0.0)
: Data(f0, f1, f2),
Persistent(sizeof(*this))
{}
};
```
But `sizeof` is evaluated at compile time and would use the static type, so it would still be `WData1`.
You have to propagate the size from derived class to parent:
```
class WData1 : public Persistent, public Data {
public:
WData1(float f0 = 0.0f,
float f1 = 0.0f,
float f2 = 0.0f,
std::size_t size = sizeof(WData1))
: Data(f0, f1, f2),
Persistent(size)
{}
};
struct Derived : WData1
{
Derived(std::size_t size = sizeof(Derived)) : WData1(0.f, 0.f, 0.f, size) {}
char BigBuffer[1024 * 42];
};
```
Upvotes: 0 |
2018/03/22 | 480 | 1,670 | <issue_start>username_0: I have `ActiveForm` checkbox:
```
= $form-field($model, 'is_necessary')->checkbox(['uncheck'=> 0]) ?>;
```
I want to make it checked by default, so that when I check it its value becomes 1 and when I uncheck it becomes 0. Can I achieve this without any `javascript`?
I tried :
```
= $form-field($model, 'is_necessary')->checkbox(['uncheck'=> 0, 'value'=>false]) ?>;
```
option `'value'=>false` made my checkbox checked by default, but then in the controller I receive `NULL` instead of either `1` or `0`.
```
$model->is_necessary = true;
```
The above code works fine, but you should add it before your
```
$model->load(Yii::$app->request->post)
```
call, i.e. before assigning the post data to your model. Otherwise your checkbox will be checked every time.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The best approach is to override `init()` inside your model
```
public function init() {
parent::init ();
$this->is_necessary = 1;
}
```
and you don't need to pass the `'uncheck'=> 0,` as per the [**`DOCS`**](http://www.yiiframework.com/doc-2.0/yii-widgets-activefield.html#checkbox()-detail)
>
> `uncheck` : `string`, the value associated with the unchecked state of the
> radio button. If not set, it will take the default value `0`. This
> method will render a hidden input so that if the radio button is not
> checked and is submitted, the value of this attribute will still be
> submitted to the server via the hidden input. If you do not want any
> hidden input, you should explicitly set this option as null.
>
>
>
Upvotes: 1 |
2018/03/22 | 516 | 1,803 | <issue_start>username_0: ![Data Source Copy able format ](https://1drv.ms/x/s!AsjLRWFsxshm6jkl28KGgmS3lOaX),
![Sample Data Image](https://i.stack.imgur.com/zUpgb.png)
I want to get the percentage increase in price by **Country and City** based on latest transaction date and date of Previous Transaction.
How can I Query this? I am not getting it. This is What I have tried:
```
SELECT Country,City, Price
From tbl
Group by Country,City
```
Percentage increase = [( Latest Price - Previous Price ) / Previous Price] \* 100
Expected Output:
Unique Country and City Name + Percentage increase in Price.
Country | City | Percentage
2018/03/22 | 948 | 2,684 | <issue_start>username_0: While making a heatmap I ran into a problem. All my cells are painted yellow! MLOGIC shows that all the statements are TRUE for the yellow colour, but the values in the cells differ; some should be red and some white. Could you tell me my mistake? Thank you! The code and log are below:
```
%macro main;
ods html body='temp.html';
proc report data=step3 nowd;
column kri_id range_mid_1 range_mid_2
%do i=1 %to 9;
a2017_M0&i. %end;
;
define kri_id / display;
define range_mid_1 / display;
define range_mid_2 / display;
%do i=1 %to 9;
define a2017_M0&i. / display;
%end;
%do p=1 %to 9;
compute a2017_M0&p.
%if a2017_M0&p. > range_mid_2
%then call define(_col_, "style", "STYLE=[background=red]");
%else %if range_mid_1 < a2017_M0&p. < range_mid_2
%then call define(_col_, "style", "STYLE=[background=yellow]");
;endcomp;
%end;
;run;
ods html close;
ods html body='temp.html';
%mend; %main;
```
The log file is below
```
SYMBOLGEN: Macro variable P resolves to 1
MLOGIC(MAIN): %IF condition a2017_M0&p. > range_mid_2 is FALSE
SYMBOLGEN: Macro variable P resolves to 1
MLOGIC(MAIN): %IF condition range_mid_1 < a2017_M0&p. < range_mid_2 is TRUE
MPRINT(MAIN): call define(_col_, "style", "STYLE=[background=yellow]") ;
MPRINT(MAIN): endcomp;
MLOGIC(MAIN): %DO loop index variable P is now 2; loop will iterate again.
SYMBOLGEN: Macro variable P resolves to 2
MPRINT(MAIN): compute a2017_M02;
SYMBOLGEN: Macro variable P resolves to 2
MLOGIC(MAIN): %IF condition a2017_M0&p. > range_mid_2 is FALSE
SYMBOLGEN: Macro variable P resolves to 2
MLOGIC(MAIN): %IF condition range_mid_1 < a2017_M0&p. < range_mid_2 is TRUE
MPRINT(MAIN): call define(_col_, "style", "STYLE=[background=yellow]") ;
MPRINT(MAIN): endcomp;
MLOGIC(MAIN): %DO loop index variable P is now 3; loop will iterate again.
SYMBOLGEN: Macro variable P resolves to 3
MPRINT(MAIN): compute a2017_M03;
SYMBOLGEN: Macro variable P resolves to 3
MLOGIC(MAIN): %IF condition a2017_M0&p. > range_mid_2 is FALSE
SYMBOLGEN: Macro variable P resolves to 3
```
And ETC<issue_comment>username_1: You are using macro logic where you want actual SAS code logic.
The reason you are always getting the `%ELSE` clause is because the letter `a` is less than the letter `r` so this test is always false.
```
%if a2017_M0&p. > range_mid_2
```
To the macro processor `a2017_M01` and `range_mid_2` are just text strings. The macro processor knows nothing of your data set variables.
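An untested sketch of the same compute blocks using data-step IF/ELSE instead of %IF/%ELSE (assuming the a2017_M0n and range_mid columns are numeric):

```
%do p=1 %to 9;
   compute a2017_M0&p.;
      if a2017_M0&p. > range_mid_2 then
         call define(_col_, "style", "STYLE=[background=red]");
      else if range_mid_1 < a2017_M0&p. and a2017_M0&p. < range_mid_2 then
         call define(_col_, "style", "STYLE=[background=yellow]");
   endcomp;
%end;
```

The %DO loop still only generates the text; the IF/ELSE conditions are then evaluated against each row's values by PROC REPORT rather than by the macro processor.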
Upvotes: 3 [selected_answer]<issue_comment>username_2: Thank you! With your help I found the answer. I had to use IF THEN ELSE without the %. Then it compares the variable values.
Upvotes: 0 |
2018/03/22 | 658 | 2,538 | <issue_start>username_0: I have an array I build up when a row is selected in my bootstrap data table. I then need to pass this from my view to the controller which is going to fire up a partial view. However when I execute the code I get null reference exception in my controller. The code in the controller is just a placeholder for the actual use but is there to show I intend to be looping the array when it is loaded through. Any ideas as to why it would show NULL even though I can see in debug it has values.
AJAX:
```
function MoveSelected() {
$.ajax({
type: "Get",
url: '@Url.Action("MoveSelectedRoute", "Transport")',
data: { orders: completeArray },
success: function (data) {
$('#detail_MoveSelectedOrders').html(data);
$('#modalMoveSelectedOrders').modal('show');
}
})
}
```
Controller:
```
public ActionResult MoveSelectedRoute(string[] orders)
{
string OrdersToMove = string.Empty;
foreach (string row in orders)
{
string orderNo = row.PadLeft(10, '0');
if (OrdersToMove == string.Empty)
{
OrdersToMove = orderNo;
}
else
OrdersToMove = OrdersToMove + "," + orderNo;
}
}
```<issue_comment>username_1: You have to use `JSON.stringify({ orders: completeArray })` and your C# code will map your array with its parameter.
Upvotes: 2 <issue_comment>username_2: You need to add the `traditional: true` ajax option to post back an array to the collection.
```
$.ajax({
type: "Get",
url: '@Url.Action("MoveSelectedRoute", "Transport")',
data: { orders: completeArray },
traditional: true,
success: function (data) {
....
}
})
```
Note the `traditional: true` option only works for simple arrays.
And alternative would be to stringify the data, set the content type to `application/json` and make a POST rather than a GET
```
$.ajax({
    type: "Post",
    url: '@Url.Action("MoveSelectedRoute", "Transport")',
    data: JSON.stringify({ orders: completeArray }),
    contentType: "application/json; charset=utf-8",
success: function (data) {
....
}
})
```
The final alternative is to generate send the values with collection indexers
```
var data = {
    'orders[0]': 'abc',
    'orders[1]': 'xyz',
    'orders[2]': '...'
};
$.ajax({
type: "Get",
url: '@Url.Action("MoveSelectedRoute", "Transport")',
data: data,
success: function (data) {
....
}
})
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 1,287 | 5,068 | <issue_start>username_0: I was trying to overwrite SetPasswordForm to add a placeholder and a class but it seems that it is not working. I managed to do it for login page and password reset page but here I got stuck.
```python
class MySetPasswordForm(SetPasswordForm):
new_password1 = forms.CharField(
label=_("New password"),
widget=forms.PasswordInput(attrs={'placeholder': 'New Password', 'class': 'password1'}),
strip=False,
help_text=password_validation.password_validators_help_text_html(),
)
new_password2 = forms.CharField(
label=_("New password confirmation"),
strip=False,
widget=forms.PasswordInput(attrs={'placeholder': 'Repeat Password', 'class': 'password2'}),
)
```
```python
urlpatterns = [ path('accounts/password_reset/', auth_views.PasswordResetView.as_view(
form_class=MyPasswordResetForm)),
path('accounts/password_reset_confirm/', auth_views.PasswordResetConfirmView.as_view(
form_class=MySetPasswordForm)),
path('accounts/', include('django.contrib.auth.urls')),]
```
Is this accurate ?<issue_comment>username_1: You’re using the wrong path in your URL patterns. It should be:
```
'reset/<uidb64>/<token>/'
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: >
> **Override or customize the Django auth SetPasswordForm and
> PasswordResetForm**: an answer was already given, but for projects not using path() in
> the Django urls (i.e. Django 1.11) I have done it the way below, following the
> structure of SetPasswordForm from django.contrib.auth.forms.
>
> In urls.py
>
```
from django.conf import settings
from django.conf.urls import url
from django.conf.urls.static import static
from django.contrib.auth import views as auth_views
from GetAdmin360.forms import (EmailValidationOnForgotPassword, CustomSetPasswordForm)
urlpatterns = [
    url(r'^reset/(?P<uidb64>[0-9A-Za-z_\-]+)/(?P<token>[0-9A-Za-z]{1,13}-[0-9A-Za-z]{1,20})/$', auth_views.PasswordResetConfirmView.as_view(form_class = CustomSetPasswordForm), {'template_name': 'registration/password_reset_confirm.html'}, name='password_reset_confirm'),
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
```
>
> In forms.py, in which I have overridden SetPasswordForm and
> PasswordResetForm
>
```
import re

from django import forms
from django.contrib.auth import password_validation
from django.contrib.auth.forms import (PasswordResetForm, SetPasswordForm)
from django.core.exceptions import ValidationError
from django.utils.translation import ugettext_lazy as _

from MyApp.models import MyUser
class EmailValidationOnForgotPassword(PasswordResetForm):
def clean_email(self):
email = self.cleaned_data['email']
if not MyUser.objects.filter(email__iexact=email, is_active=True).exists():
raise ValidationError("The email address you entered is not registered. Please enter registered email id")
return email
class CustomSetPasswordForm(SetPasswordForm):
"""
A form that lets a user change set their password without entering the old
password
"""
error_messages = {
'password_mismatch': _("The two password fields didn't match."),
'password_notvalid': _("Password must of 8 Character which contain alphanumeric with atleast 1 special charater and 1 uppercase."),
}
new_password1 = forms.CharField(
label=_("New password"),
widget=forms.PasswordInput,
strip=False,
help_text=password_validation.password_validators_help_text_html(),
)
new_password2 = forms.CharField(
label=_("New password confirmation"),
strip=False,
widget=forms.PasswordInput,
)
def __init__(self, user, *args, **kwargs):
self.user = user
super(SetPasswordForm, self).__init__(*args, **kwargs)
def clean_new_password2(self):
password1 = self.cleaned_data.get('new_password1')
password2 = self.cleaned_data.get('new_password2')
if password1 and password2:
if password1 != password2:
raise forms.ValidationError(
self.error_messages['password_mismatch'],
code='password_mismatch',
)
# Regix to check the password must contains sepcial char, numbers, char with upeercase and lowercase.
regex = re.compile('((?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[@#$%]).{8,30})')
if(regex.search(password1) == None):
raise forms.ValidationError(
self.error_messages['password_notvalid'],
code='password_mismatch',
)
password_validation.validate_password(password2, self.user)
return password2
def save(self, commit=True):
password = self.cleaned_data["new_password1"]
self.user.set_password(password)
if commit:
self.user.save()
email = self.user.email
instance = MyUser.objects.get(id=self.user.id)
if not instance.first_login:
instance.first_login = True
instance.save()
return self.user
```
Upvotes: 0 |
2018/03/22 | 1,030 | 4,318 | <issue_start>username_0: I have an application with a `TreeView` control that is built using a set of data types representing different levels in the hierarchy and an accompanying set of `HierarchicalDataTemplate`s. What I want to do now is set appropriate `AutomationProperties.Name` values on the tree items.
Normally, I would use `TreeView.ItemContainerStyle` to bind the accessible name, but this is rather limited, as it requires I use a binding path that works for all types.
In this case, however, I would much rather be able to control the accessible name independently for each type. For example, it may be useful to include `Id` in some layers, but not in others.
I could probably live with using the displayed text, but while I can easily use a `RelativeSource` binding in `TreeView.ItemContainerStyle` to get at the `TreeViewItem`, the `Path` needed to ultimately reach the `TextBlock.Text` value in the templated item from there eludes me.
I have also tried using `HierarchicalDataTemplate.ItemContainerStyle`, but that only applies to child items. Even further, when I tried to define it on each template, only `BazItem`s were properly set, even though I would have expected `BarItem`s to work as well.
I put together a minimal example to illustrate the issue. The item types are as follows:
```
public sealed class BazItem
{
public BazItem(int id, string name)
{
Id = id;
Name = name ?? throw new ArgumentNullException(nameof(name));
}
public int Id { get; }
public string Name { get; }
}
public sealed class BarItem
{
public BarItem(int id, string display)
{
Id = id;
Display = display ?? throw new ArgumentNullException(nameof(display));
Bazs = new ObservableCollection<BazItem>();
}
public int Id { get; }
public string Display { get; }
public ObservableCollection<BazItem> Bazs { get; }
}
public sealed class FooItem
{
public FooItem(int id, string name)
{
Id = id;
Name = name ?? throw new ArgumentNullException(nameof(name));
Bars = new ObservableCollection<BarItem>();
}
public int Id { get; }
public string Name { get; }
public ObservableCollection<BarItem> Bars { get; }
}
```
The corresponding templates are as follows:
```
```
Finally, the tree view in the view is as follows:
```
```
where Foos is an `ObservableCollection<FooItem>` property on the underlying view.
TreeView element:
```
<Setter Property="AutomationProperties.Name" Value="{Binding Converter={StaticResource AccessibleConverter}}"/>
```
Converter:
```
[ValueConversion(typeof(object), typeof(string), ParameterType = typeof(Type))]
public sealed class AccessibleTextConverter : IValueConverter
{
public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
{
switch (value)
{
case FooItem foo:
return $"foo: {foo.Name}";
case BarItem bar:
return $"bar: {bar.Display}";
case BazItem baz:
return $"baz: {baz.Name}";
default:
return Binding.DoNothing;
}
}
public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
{
throw new NotImplementedException();
}
}
```
This works, but it's less-than-ideal for several reasons:
1. it duplicates the format strings (though I could pull them out into a shared resource location)
2. the value converter has to first convert `object` to the appropriate type before it can come up with the appropriate string format
3. adding a new tree item type requires I touch both the templates as well as the value converter
Upvotes: 1 <issue_comment>username_2: I came across the same issue, and the way I solved it was by overriding the ToString() method of the class of the object I was binding to the TreeViewItem.
For example, in your case the type of item you bind to a TreeViewItem is BazItem, so go to the BazItem class and:
```
public override string ToString()
{
// This is what the Windows Narrator will use now
return "Name of Baz Item";
}
```
Upvotes: 0 |
2018/03/22 | 620 | 1,473 | <issue_start>username_0: example of input file:
```
LINE1 LINE1 M) W1 W2 W3}N) REST
LINE2 LINE2 SOME OTHER WORDS
LINE3 LINE3 LINE3 M) Z1 Z2 Z3 Z4}N) REST
```
in matched lines between patterns M) and N) I need to replace all spaces but the first one, expected output is:
```
LINE1 LINE1 M) W1W2W3}N) REST
LINE2 LINE2 SOME OTHER WORDS
LINE3 LINE3 LINE3 M) Z1Z2Z3Z4}N) REST
```<issue_comment>username_1: Following `awk` may help you on same.
```
awk 'match($0,/M) .*N)/){val=substr($0,RSTART,RLENGTH);gsub(/ +/,"",val);sub("M)","M) ",val);print substr($0,1,RSTART-1) val substr($0,RSTART+RLENGTH+1);next} 1' Input_file
```
Adding non-one liner form of solution too now.
```
awk '
match($0,/M) .*N)/){
val=substr($0,RSTART,RLENGTH);
gsub(/ +/,"",val);
sub("M)","M) ",val);
print substr($0,1,RSTART-1) val substr($0,RSTART+RLENGTH+1);
next}
1
' Input_file
```
Output will be as follows.
```
LINE1 LINE1 M) W1W2W3}N)REST
LINE2 LINE2 SOME OTHER WORDS
LINE3 LINE3 LINE3 M) Z1Z2Z3Z4}N)REST
```
Upvotes: 0 <issue_comment>username_2: GNU **`awk`** solution:
```
awk 'match($0, /(M\) )(.+N\))/, a){
gsub(/[[:space:]]+/, "", a[2]);
sub(/M\) .+N\)/, a[1] a[2])
}1' file
```
The output:
```
LINE1 LINE1 M) W1W2W3}N) REST
LINE2 LINE2 SOME OTHER WORDS
LINE3 LINE3 LINE3 M) Z1Z2Z3Z4}N) REST
```
Upvotes: 1 <issue_comment>username_3: You can try this sed :
```
sed -E ':A;s/(.*M\) )([^ ]*)( )(.*N\).*)/\1\2\4/;tA' infile
```
Upvotes: 0 |
2018/03/22 | 1,827 | 6,821 | <issue_start>username_0: In Java I would declare a function like this:
```
public boolean Test(boolean test) throws Exception {
if (test == true)
return false;
throw new Exception();
}
```
And I can use this function without handling the exception.
If it is possible, how to do the same in Typescript? The compiler will tell me that I can't use the function without a try/catch.<issue_comment>username_1: There is no such feature in TypeScript. It's possible to specify error type only if a function returns an error, not throws it (this rarely happens and is prone to be antipattern).
The only relevant type is [`never`](https://www.typescriptlang.org/docs/handbook/basic-types.html#never). It is applicable only if a function definitely throws an error, it cannot be more specific than that. It's a type as any other, it won't cause type error as long as it doesn't cause type problems:
```
function Test(): never {
throw new Error();
}
Test(); // won't cause type error
let test: boolean = Test(); // will cause type error
```
When there is a possibility for a function to return a value, `never` is absorbed by return type.
It's possible to specify it in function signature, but for reference only:
```
function Test(test: boolean): boolean | never {
if (test === true)
return false;
throw new Error();
}
```
It can give a hint to a developer that unhandled error is possible (in case when this is unclear from function body), but this doesn't affect type checks and cannot force `try..catch`; the type of this function is considered `(test: boolean) => boolean` by typing system.
Upvotes: 8 [selected_answer]<issue_comment>username_2: It is not possible at this moment. You can check out this requested feature:
<https://github.com/microsoft/TypeScript/issues/13219>
Upvotes: 5 <issue_comment>username_3: You cannot using pure ts (v<3.9) I hope it will be available in the future.
A workaround is however possible, it consists of hiding the possible thrown types in the method's signature to then recover those types in the catch block.
I made a package with this workaround here: <https://www.npmjs.com/package/ts-throwable/v/latest>
usage is more or less as follow:
```js
import { throwable, getTypedError } from 'ts-throwable';
class CustomError extends Error { /*...*/ }
function brokenMethod(): number & throwable<CustomError> {
if (Math.random() < 0.5) { return 42 };
throw new CustomError("Boom!");
}
try {
const answer: number = brokenMethod()
}
catch(error){
// `typedError` is now an alias of `error` and typed as `CustomError`
const typedError = getTypedError(error, brokenMethod);
}
```
Upvotes: 1 <issue_comment>username_4: You could treat JavaScript's `Error` as Java's `RuntimeException` (unchecked exception).
You can extend JavaScript's `Error` but [you have to use `Object.setPrototypeOf`](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-2.html#support-for-newtarget) to restore the prototype chain because `Error` breaks it. The need for setPrototypeOf is explained in [this answer](https://stackoverflow.com/a/48342359/2692914) too.
```
export class AppError extends Error {
code: string;
constructor(message?: string, code?: string) {
super(message); // 'Error' breaks prototype chain here
Object.setPrototypeOf(this, new.target.prototype); // restore prototype chain
this.name = 'AppError';
this.code = code;
}
}
```
Upvotes: 2 <issue_comment>username_5: You can mark the function with [`@throws`](https://jsdoc.app/tags-throws.html) jsdoc at least. Even though it does not provide static analysis errors in typescript compiler, some good IDE or linter may still [report a warning](https://github.com/gajus/eslint-plugin-jsdoc#eslint-plugin-jsdoc-rules-require-throws) if you try to disregard the function that throws...
```js
/**
* @throws {Error}
*/
function someFunc() {
if (Math.random() < 0.5) throw Error();
}
someFunc();
```
[![enter image description here](https://i.stack.imgur.com/3dQ36.png)](https://i.stack.imgur.com/3dQ36.png)
Upvotes: 5 <issue_comment>username_6: Not TypeScript, but [Hegel](https://hegel.js.org/) might be of interest which is another type-checker for JavaScript, and has this feature. You'd write:
```
function Test(test: boolean): boolean | $Throws<Exception> {
if (test)
return false;
throw new Exception();
}
```
See <https://hegel.js.org/docs/magic-types#throwserrortype>
Upvotes: 0 <issue_comment>username_7: This seems like a interesting PR to follow regarding this topic <https://github.com/microsoft/TypeScript/pull/40468>
This PR introduces:
* A new type-level expression: throw type\_expr. Currently throw type
only throws when it is being instantiated.
* A new intrinsic type
TypeToString to print a type
Upvotes: 1 <issue_comment>username_8: As indicated in other answers, in typescript, the return type of a fallible operation is `never`. There is no way to mark a function as throws however you can use a utility type to make it more discernible:
```
type Result<OK> = OK | never;
```
Or you can make it even more noticeable:
```
type Result<OK> = OK | Error;
```
Again, these are for the eyes only, no way to enforce try/catch block.
If you want to force handling of errors, use a promise. Linters can cath unhandled promises. "typescript-eslint" has "No floating promises" rule.
<https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/eslint-plugin/docs/rules/no-floating-promises.md>
Also some runtimes emit errors when there is an unhandled promise.
Upvotes: 2 <issue_comment>username_9: Coming from a functional background, I prefer to specify expected errors (aka checked exceptions) in the return type. Typescript unions and type guards make this simple:
```
class ValidationError {
constructor(readonly message: string) {}
static isInstance(err: unknown): err is ValidationError {
if (err === undefined) return false
if (typeof err !== 'object') return false
if (err === null) return false
return err instanceof ValidationError
}
}
function toInt(num: string): number | ValidationError {
  const result = Number.parseInt(num)
  if (Number.isNaN(result)) return new ValidationError(`Invalid integer ${num}`)
return result
}
// caller
const result = toInt("a")
if (ValidationError.isInstance(result))
console.log(result.message)
else
console.log(`Success ${result}`)
```
This way, the function signature highlights the potential error to other developers. More importantly the IDE & transpiler will force developers to deal with them (in most cases). For example this will fail:
```
const result = toInt("a")
const doubled = result * 2
error TS2362: The left-hand side of an arithmetic operation must be of type 'any', 'number', 'bigint' or an enum type
```
Upvotes: 1 |
2018/03/22 | 1,120 | 4,202 | <issue_start>username_0: In my specific `TPersistent` classes I'd like to provide a `Clone` function, which returns an independent copy of the object.
Is it possible to make this work correctly with descendants, without implementing the `Clone` function in each and every descendant?
This is not about cloning any unknown fields or deep cloning (which could be done using RTTI). In my minimal example below, you can see where I would want to put the `Clone` function.
Since it uses `Assign()` to copy the data, it would work with any descendant. The problem is the constructor, see comments. How do I call the correct constructor of that descendant? If that's very hard to do, it's okay to assume that none of the descendants override the constructor without overriding `Clone`, too.
```
program Test;
uses System.SysUtils, System.Classes;
type
TMyClassBase = class abstract(TPersistent)
public
constructor Create; virtual; abstract;
function Clone: TMyClassBase; virtual; abstract;
end;
  TMyClassBase<T> = class abstract(TMyClassBase)
  private
    FValue: T;
  public
    constructor Create; overload; override;
    function Clone: TMyClassBase; override;
    procedure Assign(Source: TPersistent); override;
    property Value: T read FValue write FValue;
  end;
  TMyClassInt = class(TMyClassBase<Integer>)
  public
    function ToString: string; override;
  end;
  TMyClassStr = class(TMyClassBase<string>)
  public
    function ToString: string; override;
  end;
constructor TMyClassBase<T>.Create;
begin
  Writeln('some necessary initialization');
end;
procedure TMyClassBase<T>.Assign(Source: TPersistent);
begin
  if Source is TMyClassBase<T> then FValue:= (Source as TMyClassBase<T>).FValue
  else inherited;
end;
function TMyClassBase<T>.Clone: TMyClassBase;
begin
  {the following works, but it calls TObject.Create!}
  Result:= ClassType.Create as TMyClassBase;
  Result.Assign(Self);
end;
function TMyClassInt.ToString: string;
begin
Result:= FValue.ToString;
end;
function TMyClassStr.ToString: string;
begin
Result:= FValue;
end;
var
ObjInt: TMyClassInt;
ObjBase: TMyClassBase;
begin
ObjInt:= TMyClassInt.Create;
ObjInt.Value:= 42;
ObjBase:= ObjInt.Clone;
Writeln(ObjBase.ToString);
Readln;
ObjInt.Free;
ObjBase.Free;
end.
```
The output is
```
some necessary initialization
42
```
So, the correct class came out, it works correctly in this minimal example, but unfortunately, my *necessary initialization* wasn't done (should appear twice).
I hope I could make it clear and you like my example code :) - I'd also appreciate any other comments or improvements. **Is my `Assign()` implementation ok?**<issue_comment>username_1: This seems to do it:
```
function TMyClassBase<T>.Clone: TMyClassBase;
begin
{create new instance using empty TObject constructor}
Result:= ClassType.Create as TMyClassBase;
{execute correct virtual constructor of descendant on instance}
Result.Create;
{copy data to instance}
Result.Assign(Self);
end;
```
However, I've never seen that before - it feels very, very wrong...
I verified that it correctly initializes data of the target object and really calls the descendants constructor *once*. I see no problem, there is also no memory leak reported. Tested using Delphi 10.2.2. Please comment :)
Upvotes: -1 <issue_comment>username_2: You don't need to make the non-generic base class constructor abstract. You can implement the clone there, because you have a virtual constructor.
Furthermore you don't need to make the `Clone` method virtual.
```
type
TMyClassBase = class abstract(TPersistent)
public
constructor Create; virtual; abstract;
function Clone: TMyClassBase;
end;
...
type
TMyClassBaseClass = class of TMyClassBase;
function TMyClassBase.Clone: TMyClassBase;
begin
Result := TMyClassBaseClass(ClassType).Create;
try
Result.Assign(Self);
except
Result.DisposeOf;
raise;
end;
end;
```
Note that `ClassType` returns `TClass`. We cast it to `TMyClassBaseClass` to make sure that we call your base class virtual constructor.
I also don't see why you made `TMyClassBase` abstract and derived specifications from it. You should be able to implement everything you need in the generic class.
Upvotes: 3 [selected_answer] |
2018/03/22 | 1,161 | 3,161 | <issue_start>username_0: This code is supposed to iterate over the list of lists and return the entire list that contains the smallest value. I have already identified that it keeps returning the list at index[0], but I cannot figure out why. Any help, or even hints, would be greatly appreciated.
```
def list_with_min(list_of_lists):
m = 0
for i in range(len(list_of_lists)-1):
list_i = list_of_lists[m]
min_list = min(list_of_lists[m])
if min_list < list_i[0]:
m = i
answer = list_of_lists[m]
return answer
print(list_with_min([[9, 10, 15], [1, 8, 4], [-3, 7, 8]]))
# [9, 10, 15]--------> should be [-3, 7, 8]
print(list_with_min([[5], [9], [6], [2], [7], [10], [72]]))
# [5]----------------> should be [2]
print(list_with_min([[-2, 6, 9], [-9, 6, 9], [4, 8, 2], [5, -2]]))
# [-2, 6, 9]---------> should be [[-2, 6, 9], [5, -2]] (I assume two lists with the same minimum value should both be returned?)
```<issue_comment>username_1: >
> EDIT: A more python way could be this:
>
>
>
> ```
> def list_with_min(l):
> min_vals = [min(x) for x in l]
> return l[min_vals.index(min(min_vals))]
>
> ```
>
>
A bit bulky but it works...
```
l=[[9, 10, 15], [1, 8, 4], [-3, 7, 8]]
def list_with_min(l):
m = min(l[0])
for i in l[1:]:
m = min(i) if min(i) < m else m
for i in l:
if m in i:
return i
print(list_with_min(l))
```
Output:
```
[-3, 7, 8]
```
Upvotes: 0 <issue_comment>username_2: Your condition just makes no sense. You're checking
```
if min_list < list_i[0]:
```
which means, if the smallest value of list\_i is less than the first value of list\_i.
I don't think you'd ever want to compare to just list\_i[0]. You need to store min\_list across loops, and compare to that.
Upvotes: -1 <issue_comment>username_3: you can get this in one line with a list comprehension (ive added three though to help you work through the logic), it deals with duplicates differently however:
```
#for each list return a list with the minimum and the list
mins_and_lists = [[min(_list), _list] for _list in lists]
#find the minimum one
min_and_list = min([[min(_list), _list] for _list in lists])
#parse the result of the minimum one list
minimum, min_list = min([[min(_list), _list] for _list in lists])
```
if you want to handle duplicate minimums by returning both then:
```
dup_mins = [_list for _min, _list in mins_and_lists if _min == minimum]
```
Upvotes: 0 <issue_comment>username_4: You can provide a key to the function `min`, that is a function used for comparison. It turns out that here you want key to be the function `min` itself.
```
list_of_lists = [[9, 10, 15], [1, 8, 4], [-3, 7, 8]]
min(list_of_lists, key=min) # [-3, 7, 8]
```
This does not return multiple minima, but can be improved to do so.
```
list_of_lists = [[9, 10, 15], [1, -3, 4], [-3, 7, 8]]
min_value = min(map(min, list_of_lists))
[lst for lst in list_of_lists if min(lst) == min_value] # [[1, -3, 4], [-3, 7, 8]]
```
Upvotes: 2 <issue_comment>username_5: You could also use this:
```
min([ (min(a),a) for a in list_of_lists ])[1]
```
Upvotes: 0 |
2018/03/22 | 520 | 1,162 | <issue_start>username_0: I have a data frame:
```
a <- c(1,2,3,4,5,6)
b <- c(1,2,1,2,1,4)
c <- c("A", "B", "C", "D", "E", "F")
df <- data.frame(a,b,c)
```
What I want to do, is create another vector `d`, which contains the value of `c` in the row of `a` which matches each value of `b`
So my new vector would look like this:
```
d <- c("A", "B", "A", "B", "A", "D")
```
As an example, the final value of `b` is `4`, which matches with the 4th row of `a`, so the value of `d` is the 4th row of `c`, which is `"D"`.<issue_comment>username_1: If *a* and *b* are both lists with integer values you can use them directly.
```
d <- c[b[a]]
d
[1] "A" "B" "A" "B" "A" "D"
```
if *a* is a regular integer sequence along *c* you can simply call *c* from *b*.
```
c[b]
[1] "A" "B" "A" "B" "A" "D"
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Another option is to convert to factor and use it as:
```
factor(a, labels = c)[b]
#[1] A B A B A D
```
**OR**
```
as.character(factor(a, labels = c)[b])
#[1] "A" "B" "A" "B" "A" "D"
```
**data**
```
a <- c(1,2,3,4,5,6)
b <- c(1,2,1,2,1,4)
c <- c("A", "B", "C", "D", "E", "F")
```
Upvotes: 0 |
2018/03/22 | 1,012 | 3,700 | <issue_start>username_0: I have a Cosmos DB stored procedure to which I am passing a list of comma-separated IDs. I need to pass those IDs to an IN query. When I pass one value in the parameter it works fine, but not with more than one value.
It would be great if anyone could help here.
Below is the code of the stored procedure:
```
function getData(ids) {
var context = getContext();
var coll = context.getCollection();
var link = coll.getSelfLink();
var response = context.getResponse();
var query = {query: "SELECT * FROM c where c.vin IN (@ids)", parameters:
[{name: "@ids", value: ids}]};
var requestOptions = {
pageSize: 500
};
var run = coll.queryDocuments(link, query, requestOptions, callback);
function callback(err, docs) {
if (err) throw err;
if (!docs || !docs.length) response.setBody(null);
else {
response.setBody(JSON.stringify(docs));
}
}
if (!run) throw new Error('Unable to retrieve the requested information.');
}
```<issue_comment>username_1: Please refer to my sample js code, it works for me.
```
function sample(ids) {
var collection = getContext().getCollection();
var query = 'SELECT * FROM c where c.id IN ('+ ids +')'
console.log(query);
var isAccepted = collection.queryDocuments(
collection.getSelfLink(),
query,
function (err, feed, options) {
if (err) throw err;
if (!feed || !feed.length) getContext().getResponse().setBody('no docs found');
else {
for(var i = 0;i
```
Parameter : `'1','2','3'`
Hope it helps you.
Upvotes: 0 <issue_comment>username_2: This is how you can do it:
Inside the stored procedure
1. parse your one parameter into an array using the split function
2. loop through the array and
a) build the parameter name / value pair and push it into the parameter array used by the query later
b) use the parameter name to build a string for use inside the parenthesis of the IN statement
3. Build the query definition and pass it to the collection.
Example
=======
This is how the value of the parameter looks: "abc,def,ghi,jkl"
If you are going to use this, replace "stringProperty" with the name of the property you are querying against.
```
// SAMPLE STORED PROCEDURE
function spArrayTest(arrayParameter) {
var collection = getContext().getCollection();
var stringArray = arrayParameter.split(",");
var qParams = [];
var qIn = "";
for(var i=0; i
```
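To make the steps above concrete, an untested sketch (the `c.vin` property comes from the question; the other names are placeholders):

```
function getDocsByIds(ids) {
    var coll = getContext().getCollection();
    var idArray = ids.split(",");            // "a,b,c" -> ["a", "b", "c"]
    var qParams = [];
    var qIn = "";

    for (var i = 0; i < idArray.length; i++) {
        var pName = "@id" + i;               // @id0, @id1, ...
        qParams.push({ name: pName, value: idArray[i] });
        qIn += (i === 0 ? "" : ",") + pName;
    }

    var querySpec = {
        query: "SELECT * FROM c WHERE c.vin IN (" + qIn + ")",
        parameters: qParams
    };

    var accepted = coll.queryDocuments(coll.getSelfLink(), querySpec, {},
        function (err, docs) {
            if (err) throw err;
            getContext().getResponse().setBody(docs);
        });
    if (!accepted) throw new Error("The query was not accepted by the server.");
}
```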
Upvotes: 1 <issue_comment>username_3: For arrays, you should use `ARRAY_CONTAINS` function:
```
var query = {
query: "SELECT * FROM c where ARRAY_CONTAINS(@ids, c.vin)",
parameters: [{name: "@ids", value: ids}]
};
```
Also, it is possible that, as stated in [this doc](https://learn.microsoft.com/en-us/azure/cosmos-db/how-to-write-stored-procedures-triggers-udfs#arrays-as-input-parameters-for-stored-procedures), your `@ids` array is being sent as string
>
> When defining a stored procedure in Azure portal, input parameters are always sent as a string to the stored procedure. Even if you pass an array of strings as an input, the array is converted to string and sent to the stored procedure. To work around this, you can define a function within your stored procedure to parse the string as an array
>
>
>
So you might need to parse it before querying:
```
function getData(ids) {
arr = JSON.parse(ids);
}
```
Related:
[How can I pass array as a sql query param for cosmos DB query](https://stackoverflow.com/questions/48003632/how-can-i-pass-array-as-a-sql-query-param-for-cosmos-db-query)
<https://github.com/Azure/azure-cosmosdb-node/issues/156>
Upvotes: 2 |
2018/03/22 | 512 | 1,742 | <issue_start>username_0: Blender runs its own Python. When I write Python code like this
```py
import bpy
print(bpy.data.objects)
```
for Blender it runs fine with
```none
$ blender --background --python my_code.py
```
But when I want to document my code with Sphinx and the *autodoc* extension it cannot import `bpy` because it is unknown outside Blender.
```none
$ cd doc/
$ make html
[...]
ModuleNotFoundError: No module named 'bpy'
```
How can I create documentation output for my Blender code with Sphinx?<issue_comment>username_1: The solution involves modifying the Makefile generated by `sphinx-quickstart` as well as writing a little wrapper script around `sphinx.cmd.build`:
Replace the `SPHINXBUILD` variable in the Makefile:
```make
SPHINXBUILD = blender --background --python blender_sphinx.py --
```
and make sure that the actual recipe line at the end of the file contains the `-M` flag.
Then create a file `blender_sphinx.py` in the same directory as the Makefile with the following contents:
```py
import sys
from sphinx.cmd import build
first_sphinx_arg = sys.argv.index('-M')
build.make_main(sys.argv[first_sphinx_arg:])
```
Now if you run
```none
$ make html
```
from the `doc/` directory it will allow *autodoc* to find `bpy` and import all the modules.
Upvotes: 1 <issue_comment>username_2: There's another option: Using a fake (i.e. empty) bpy module: <https://github.com/nutti/fake-bpy-module>
You can install this simply via pip:
```
pip install fake-bpy-module-<version>
```
Afterwards, your Sphinx (i.e. the Python build used by Sphinx) will import these modules instead of the blender modules.
Note: I'm not the author and I don't know how this module works with Blender's GPL license.
Upvotes: 0 |
2018/03/22 | 1,286 | 4,323 | <issue_start>username_0: I'm new to UI. I do have confusion between `$scope`'s in AngularJS. Please refer below snippet.
```
var mainApp = angular.module("mainApp", []);
mainApp.controller(['$scope', function($scope) {
$scope.name = "John";
}]);
```
So, what's the difference between `$scope` and `function($scope)`? Also how can we relate both? Is it required to have `$scope` parameter? Please explain me with an example. I really appreciate that.
Thanks,
John<issue_comment>username_1: 1.When you apply Minification of Following Angular JS code:
```
var mainApp = angular.module("mainApp", []);
mainApp.controller(['$scope','$log', function($scope,$log) {
$scope.name = "John";
$log.log("John");
}]);
```
Minified Version :
```
var mainApp=angular.module("mainApp",
[]);mainApp.controller(["$scope","$log",function(n,o)
{n.name="John",o.log("John")}]);
```
2.When you apply Minification of Following Angular JS code:
```
var mainApp = angular.module("mainApp", []);
mainApp.controller(function($scope,$log) {
$scope.name = "John";
$log.log("John");
});
```
Minified Version :
```
var mainApp=angular.module("mainApp",[]);mainApp.controller(function(n,a)
{n.name="John",a.log("John")});
```
3.When you apply Minification of Following Angular JS code:
```
var mainApp = angular.module("mainApp", []);
mainApp.controller(function($log,$scope) {
$scope.name = "John";
$log.log("John");
});
```
Minified Version :
```
var mainApp=angular.module("mainApp",[]);mainApp.controller(function(n,a)
{n.name="John",a.log("John")});
```
You will notice in Ex-2 and Ex-3 that although the positions of the $scope and $log dependencies are interchanged, the minified versions are the same; this will give you a `dependency injection error`. That is why we pass the dependency names as string values, which cannot be minified, as you can see in Ex-1.
It is not required to have $scope each time you define your controller, but $scope provides important functionality such as `binding the HTML (view) and the JavaScript (controller)`.
<https://docs.angularjs.org/guide/scope>
Upvotes: 1 <issue_comment>username_2: >
> what's the difference between `$scope` and `function($scope)`?
>
>
>
When you do
```
mainApp
.controller(
['$scope', //line 1
function($scope) //line 2
{
}
]);
```
* In `line 1` it refers to [`$scope`](https://docs.angularjs.org/guide/scope), which is an object that refers to the application model
* In `line 2` it is the variable (conveniently called *$scope* too) in which the (mentioned above) *$scope* object is injected. This variable can have any other name, `$scope` is used as a way to keep a suggestive reference through the whole code.
For instance, your example would work too if I change its name to `myFunnyScope` like this:
```
var mainApp = angular.module("mainApp", []);
mainApp.controller(['$scope', function(myFunnyScope) {
myFunnyScope.name = "John";
}]);
```
>
> Also how can we relate both?
>
>
>
Taking as reference my previously posted snippet, you can tell the *$scope* object is being injected in the `myFunnyScope` variable, it means you use `myFunnyScope` as if it were `$scope` itself.
>
> Is it required to have `$scope` parameter?
>
>
>
As long as you need to get access to all benefits provided by the mentioned *$scope* object, when you do [minification](https://developers.google.com/speed/docs/insights/MinifyResources) it is required to inject the object (`[$scope, ...`) into the holder (`function($scope) { ...`) in order to not get the AngularJS application broken. Otherwise, no, you don't need to inject the object, but then you have to explicitly call it `$scope` in the function parameter so AngularJS knows it has to inject the `$scope` object inside it. This rule applies not only to `$scope`, but to all other AngularJS related services, factories, etc such as `$timeout`, `$window$`, `$location`, etc.
You might want to read about the [AngularJS injection mechanism](https://docs.angularjs.org/guide/di) and consider using the [controller as](https://johnpapa.net/angularjss-controller-as-and-the-vm-variable/) syntax for reasons explained [here](https://stackoverflow.com/questions/40952006/angularjs-scope-vs-this-what-is-the-use-of-scope/40952350#40952350) if you do not want to use `$scope` directly.
Upvotes: 0 |
2018/03/22 | 1,268 | 4,295 | <issue_start>username_0: I have a Cross Platform request call, and it gets the data inside the xhr.**onload** function, how do I return that data to the main function?
```
makeCorsRequest(email) {
var xhr = this.createCORSRequest(email);
xhr.send()
xhr.onload = function() {
var text = xhr.responseText;
return text //I want to return this on the makeCorsRequest function after the .send() is done
};
}
```
2018/03/22 | 463 | 1,765 | <issue_start>username_0: ```
ALTER PROCEDURE GetVendor_RMA_CreditMemo
(@HasCreditMemoNo INT)
AS
BEGIN
SELECT
*
FROM
(SELECT
CreditMemoNumber,
CASE WHEN CreditMemoNumber != ''
THEN 1
ELSE 0
END AS HasCreditMemoNo
FROM
XYZ) as C
WHERE
(C.HasCreditMemoNo = @HasCreditMemoNo OR @HasCreditMemoNo = -1)
END
```
`CreditMemoNumber` is a `varchar` column
I want to achieve this:
```
CASE
WHEN @HasCreditMemoNo = 0
THEN -- select all rows with no value in CreditMemoNumber Column,
WHEN @HasCreditMemoNo = 1
THEN -- all rows that has some data,
WHEN @HasCreditMemoNo = -1
THEN -- everything regardless..
```<issue_comment>username_1: Would this work for you? I'm not sure if it would improve your performance. You may be better off writing an `if else if else` statement and three separate select statements with an index on the CreditMemoNumber column.
```
ALTER PROCEDURE GetVendor_RMA_CreditMemo(@HasCreditMemoNo int)
AS
BEGIN
select
CreditMemoNumber,
case when CreditMemoNumber != '' then 1 else 0 end as HasCreditMemoNo
from XYZ
where
(@HasCreditMemoNo = 0 and (CreditMemoNumber is null or CreditMemoNumber = ''))
or (@HasCreditMemoNo = 1 and CreditMemoNumber != '')
or (@HasCreditMemoNo = -1)
END
```
Upvotes: 0 <issue_comment>username_2: You can't do this kind of thing with a CASE.
The correct way to do it is with OR:
```
WHERE (@HasCreditMemoNo = 0 AND {no value in CreditMemoNumber Column})
OR
(@HasCreditMemoNo = 1 AND {all rows that has some data})
OR
(@HasCreditMemoNo = -1)
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 768 | 2,707 | <issue_start>username_0: ```
function a() {
console.log('A!');
function b(){
console.log('B!');
}
return b;
}
```
When I do like this,
```
var s = a();
```
Here is the output:
```
A!
```
When I do like this,
```
a();
```
The output is as below:
```
A!
ƒ b(){
console.log('B!');
}
```
I wonder why there the outputs are different.<issue_comment>username_1: Because the function `a` returns a function `b`.
So, doing this `console.log(a())` will print the source code of function `b`.
If you want to execute the returned function, just call it:
```js
function a() {
console.log('A!');
function b() {
console.log('B!');
}
return b;
}
var s = a()
console.log(s);
console.log("--------------------------------------")
s();
```
Upvotes: 2 <issue_comment>username_2: That happens because you're returning `b` itself instead of `b`'s result (`b()`).
And, because you've ran the function in the console, the result of `a` (that is, the function `b`) will be displayed in it's "code form".
Upvotes: 0 <issue_comment>username_3: It took me a while to understand your question, but in my defense it looks like there are several questions rolled into one, dusted with conflicting statements, omissions and typos ("by()" instead of "b()"?). To put it simply, we can feel the confusion in your mind :-)
For example, you talk about parameters, but none of your functions takes parameters. Moreover, you seem to believe that you invoke `b` at some point, but there is nothing like `b();` in your code. Finally, you never tell us, but you are coding directly into the browser's console, right?
That being said, here is how I would summarize the main question :
>
> Why `var s = a();` and `a();` have different outputs ?
>
>
>
You said that `var s = a();` prints `A!`, while `a();` prints `A!` plus function `b`. Actually, `a();` does not print function `b`; this additional output comes from the browser's console. Keep in mind that there are two types of outputs: the ones that you ask for explicitly with `console.log(...)`, and the ones produced by the console itself. As [epascarello](https://stackoverflow.com/users/14104/epascarello) stated, "in the console [...] the last statement is outputted". If you write `a();` into the console and press `ENTER`, the last statement is `a()`, and since `a` returns `b`, the console echoes function `b`. If you write `var s = a();`, `a` still returns `b`, but the console doesn't show it, because a `var` statement has no value of its own; the console echoes `undefined` (or nothing) instead of the returned function.
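To make the console behaviour concrete, here is a compact sketch of what you would see when typing the statements one by one into, say, the Chrome console (the trailing comments describe the console's own echo, not `console.log` output):

```
function a() { console.log('A!'); function b() { console.log('B!'); } return b; }

a();          // logs "A!", then the console echoes the return value: ƒ b(){ console.log('B!'); }
var s = a();  // logs "A!", then the console echoes undefined (a var statement has no value of its own)
s();          // logs "B!", then echoes undefined (b returns nothing)
```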
Upvotes: 0 |
2018/03/22 | 696 | 2,332 | <issue_start>username_0: I launch a tensorflow task on ML Engine and after about 2 minutes I keep getting an error message "**The replica master 0 exited with a non-zero status of 1.**"
(The task incidentally runs fine with ml-engine local.)
Question: Is there any place or log file where I can see further information on what happened?
The logs viewer just gives the following:
```
{
insertId: "ibal72g1rxhr63"
logName: "projects/**-***-ml/logs/ml.googleapis.com%2Fcnn180322_170649"
receiveTimestamp: "2018-03-22T17:08:38.344282172Z"
resource: {
labels: {
job_id: "cnn180322_170649"
project_id: "**-***-ml"
task_name: "service"
}
type: "ml_job"
}
severity: "ERROR"
textPayload: "The replica master 0 exited with a non-zero status of 1."
timestamp: "2018-03-22T17:08:38.344282172Z"
}
```
Thanks in advance for any pointers!<issue_comment>username_1: Stackdriver agents can monitor many metrics and give details about ML engine. For more details, please refer [here](https://cloud.google.com/monitoring/api/metrics_gcp#gcp-ml). AFAIK, Normal event logging and Stackdriver agents are the only tools to monitor the ML jobs on GCP.
Please note that Python 2.7 which is used in Tensorflow works with relative imports. It is possible that you locally used Python 3.4 which worked with absolute imports. That is why it worked locally but not on Google Cloud. You can refer to [this post](http://stackoverflow.com/questions/30249119/difference-in-package-importing-between-python-2-7-and-3-4) to modify your import statement. So, if you include the line `“from __future__ import absolute_import”` at the top of your code, before the line “import tensorflow as tf” , your code may work.
Upvotes: 0 <issue_comment>username_2: The solution to the apparent lack of log files was missing permission to write to the logs.
Under IAM & admin, adding the **Logs Writer** role to the account `<EMAIL>` solved the problem and enables the master and workers to write log messages to Stackdriver as expected.
For a similar discussion and some additional information, see [Stackdriver logs not available for Cloud ML jobs since migration to V2](https://stackoverflow.com/questions/41168116)
Thanks to all for giving input!
Upvotes: 3 [selected_answer] |
2018/03/22 | 570 | 2,043 | <issue_start>username_0: I have a file like the one below
```
HTTP/1.1 401 Unauthorized
Server: WinREST HTTP Server/1.0
Connection: Keep-Alive
Content-Type: text/html
Content-Length: 89
WWW-Authenticate: ServiceAuth realm="WinREST", nonce="1828HvF7EfPnRtzSs/h10Q=="
UnauthorizedError 401:
Unauthorized
```
I need to get the nonce, that is, just the 1828HvF7EfPnRtzSs/h10Q== value that follows nonce=
I was using
```
grep -oP 'nonce="\K[^"]+' response.xml
```
but the P parameter no longer works.
How can I do the same with awk, or even grep with another parameter, maybe?
Thanks in advance
2018/03/22 | 596 | 2,212 | <issue_start>username_0: I open a project, but want to create a module to reference in WebStorm, (mostly so I can do work in 2 projects instead of having 2+ instance of WebStorm open).
IntelliJ Ultimate or whatnot has a modules button. It kinda looks like modules do not exist for me, or at least I have not seen anything.
I have 2 folders which are siblings to each other representing the 2 separate projects I wanted to open in 1 instance of WebStorm.
Where can I find this information for WebStorm?
A lot of the googling was talking about modules but I didn't see that option, and I believe I noticed some posts mentioning this is not a thing in WebStorm.
Thoughts? Guidance?<issue_comment>username_1: WebStorm/PhpStorm project consists of a **single module only** (`WEB_MODULE` type).
---
WebStorm cannot open more than one project in single frame. <https://youtrack.jetbrains.com/issue/WEB-7968> -- watch this ticket (star/vote/comment) to get notified on any progress.
At the same time it's possible in PhpStorm: <https://www.jetbrains.com/help/phpstorm/opening-multiple-projects.html#d197136e31>
But it still will not be a full "two *separate* projects with separate settings" setup AFAIK. It's more of "attaching a 2nd project so you can see and edit those files in the same frame".
---
The question is: why exactly do you need this? To access files of a second project? If so -- just add such folder(s) as an Additional Content Root -- it will be listed as another node in the Project View panel and the files will be treated as part of the project itself.
<https://www.jetbrains.com/help/phpstorm/configuring-content-roots.html>
Upvotes: 3 [selected_answer]<issue_comment>username_2: In WebStorm 2019.2 it is now possible to open an *attached project* alongside the current project.
>
> **Open multiple projects in one window**
> When you have a project opened in WebStorm and want to open another one, you can now attach this second project to the opened one, so that you can see both of them in the same IDE window. If you want to close the attached project, right-click its root in the Project view and select Remove from Project View.
> <https://www.jetbrains.com/webstorm/whatsnew/>
>
>
>
Upvotes: 0 |
2018/03/22 | 1,489 | 5,264 | <issue_start>username_0: I have a simple model:
```
export class Profile extends ServerData {
name: string;
email: string;
age: number;
}
```
When I am make a call to the server (Angular 4, $http) I often get the following response:
```
{
name: string;
email: string;
}
```
The `age` property is missing.
Is there any way to use my model and create a default age in case it is missing?
I would prefer not to have to create 2 separate models if possible.
I don't want to create the age as an optional property - I need it, even if it is just with an incorrect default.
UPDATE:
This is the call I making to the server:
results-manager.component.ts:
```
this.resultsManagerService.getResults(this.config.request.action, this.pagingAndSorting, this.args).subscribe(
results => {
this.results = this.results.concat(results as Profile[]);
```
results-manager.service.ts:
```
getResults(action: string, pagingAndSorting: PagingAndSorting, args: string[]): Observable {
return this.apiService.callServer(
pagingAndSorting,
action,
...args );
  }
```
The request works and I receive the response, but even if I define the default values (as suggested by @msanford's answer) they get removed when I receive the response back in the component. Likewise if I add a constructor to the model (as per Mike Tung's answer).
It seems like the backend response is completely overwriting the model - not just assigning the values.
How can I get it to just assign the values to the model and not remove the values it does not return?<issue_comment>username_1: You can't quite do this automatically, but you could set defaults for a function (possibly the class constructor):
```
function makeProfile({ name = '', email = '', age = 0 }) {
const person = new Profile;
person.name = name;
person.email = email;
person.age = age;
return person;
}
```
Now if you call `makeProfile({ name: 'name', email: 'email' })` it will return a `Profile` with `age = 0`.
Upvotes: 2 <issue_comment>username_2: You don't need to do much aside from use a constructor.
```
export class Profile extends ServerData {
name: string;
email: string;
age: number;
  constructor(name: string = '', email: string = '', age: number = null) {
    super(); // assumption: ServerData has a parameterless constructor; a derived class must call super() before using `this`
    this.name = name;
    this.email = email;
    this.age = age;
  }
}
```
This will create default params for things you are missing.
Upvotes: 2 <issue_comment>username_3: Yes, *easily*, and you don't need to add a class constructor.
```
export class Profile extends ServerData {
name: string;
email: string;
age: number = 0;
}
```
The ability to define default values is one of the main things that differentiates a [`class`](https://www.typescriptlang.org/docs/handbook/classes.html) from an [`interface`](https://www.typescriptlang.org/docs/handbook/interfaces.html).
**For this to work** you need to call `new Profile()` somewhere in your code, otherwise a class instance won't be created and you won't have defaults set, because the above TypeScript will compile to the following JavaScript:
```
var Profile = /** @class */ (function () {
function Profile() {
this.age = 0;
}
return Profile;
}());
```
So just using it for type *assertion at compile-time* isn't sufficient to set a default at run-time.
See it in action in the [TypeScript Playground](https://www.typescriptlang.org/play/index.html#src=class%20Profile%20%7B%0D%0A%20%20name%3A%20string%3B%0D%0A%20%20email%3A%20string%3B%0D%0A%20%20age%3A%20number%20%3D%200%3B%0D%0A%7D%0D%0A%0D%0Alet%20p%20%3D%20new%20Profile()%3B%0D%0Aalert(p.age)%3B).
Upvotes: 6 <issue_comment>username_4: The reason you're seeing ***overwriting*** properties is due to [type erasure](https://github.com/Microsoft/TypeScript/wiki/FAQ#what-is-type-erasure) in TypeScript. TypeScript has no idea what types of objects are assigned to its variables during runtime. This would seem somewhat strange to you if you're coming not from a java / c# background.
Because in the end , it's just JavaScript. And JavaScript doesn't enforce strict type checking.
In order to ensure that your profile objects always have an age property, you could create your own objects then copy over the values you get from the response. This is the usual approach when it comes to wire format to domain object mapping.
To do this first create your domain model, a Profile class with a default age property in this case.
```
export class Profile {
constructor(
public name: string,
public email: string,
public age: number = 0) { }
}
```
Then map your response to the domain model.
```
this.resultsManagerService.getResults(this.config.request.action, this.pagingAndSorting, this.args).subscribe(
results => {
let res = (results as Profile[]).map((profile) => new Profile(profile.name, profile.email, profile.age));
this.results = this.results.concat(res);
});
```
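One detail worth spelling out (this sketch is an addition, not part of the original answer): default parameter values only kick in when the argument is `undefined`, so a field that is simply missing from the response still picks up the default during this mapping. The `raw` object below is a hypothetical response:

```
const raw: { name: string; email: string; age?: number } = {
  name: 'Ada',
  email: 'ada@example.com'
};
const p = new Profile(raw.name, raw.email, raw.age); // raw.age is undefined here
console.log(p.age); // 0 -- the constructor default fills the gap
```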
Upvotes: 6 [selected_answer]<issue_comment>username_5: ```
export class Cell {
constructor(
public name: string = "",
public rowspan: number = 1,
public colspan: number = 1
){ }
}
export class Table {
public c1 = new Cell()
public c2 = new Cell()
public c3 = new Cell()
}
```
Upvotes: 0 |
2018/03/22 | 1,220 | 4,216 | <issue_start>username_0: I have a set of black png icons. When a user hover over them, I want the icons to become blue. How can I convert them to blue using css? I tried css filters but its not changing colors for me.
Here is the black icon
[![enter image description here](https://i.stack.imgur.com/ElzgZ.png)](https://i.stack.imgur.com/ElzgZ.png)
and here is how I want it to be on hover
[![enter image description here](https://i.stack.imgur.com/DXbOQ.png)](https://i.stack.imgur.com/DXbOQ.png)
2018/03/22 | 1,016 | 3,812 | <issue_start>username_0: I have a tableview that contains 4 sections. In sections 2,3,and 4 I want to have a + button to add information to a "Saved" array. I have the logic setup for adding information, but I'm having issues with the tableview cells.
I don't want the + button to appear in section 0, since that's where we're adding the data. Here's my cellForRowAt method...
```
let cell = tableView.dequeueReusableCell(withIdentifier: "reuseIdentifier", for: indexPath) as! SchoolTableViewCell
// Configure the cell...
if indexPath.section == 0 {
cell.textLabel?.text = "Test"
cell.addFavoritesButton.removeFromSuperview()
} else if indexPath.section == 1 {
cell.textLabel?.text = Items.sharedInstance.elementaryList[indexPath.row]
} else if indexPath.section == 2 {
cell.textLabel?.text = Items.sharedInstance.intermediateList[indexPath.row]
} else if indexPath.section == 3 {
cell.textLabel?.text = Items.sharedInstance.highschoolList[indexPath.row]
}
return cell
```
This works great at first! But if I scroll down, more and more cells will remove the button. It's not limiting it to section 0 because of reusable cells.
Can anyone think of a better way to remove this button for the first section only?
[Screenshot of section 0](https://i.stack.imgur.com/zwjTe.png)
[Screenshot of section 1](https://i.stack.imgur.com/ksJ42.png)<issue_comment>username_1: The first run shows the cells correctly because all of the cells are new instances of the cell class (nothing is reused yet). After scrolling, the cells shown may be reused, and a reused cell could be the one from section zero that you removed the button from. Instead of removing the button, show/hide it:
```
if indexPath.section == 0 {
cell.textLabel?.text = "Test"
cell.addFavoritesButton.isHidden = true
}
else
{
cell.addFavoritesButton.isHidden = false
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You are forgetting that cells are reused. You need to deal, every time thru `cellForRowAt`, with the possibility that this cell already has the button from a previous use and should not have it in this use, or with the possibility that it lacks the button and needs it in this use.
For example, you cannot assume that just because the section is `1`, the cell has the button, because it might have been used in section `0` earlier and lacks the button now. You need, in that case, to *add* it. But you are not doing that.
Thus, for *every* branch of your logic, you must be explicit about whether to add or remove the button. If you are really going to add and remove it, that can get complicated. You would need to keep a copy of the button somewhere, so you can add it. You'd make sure you don't add it twice to the same cell. You'd make sure you don't try to remove it if it is already removed.
As has been suggested in another answer, the simpler way to deal with this is not to add and remove at all, but to make *visibility* of the button dependent on whether this section is `0`:
```
// do this in _every_ case
cell.addFavoritesButton.isHidden = (indexPath.section == 0)
```
That's a single line of code that does, much better, the thing you are trying to do.
Upvotes: 2 <issue_comment>username_3: Once you remove the button from the cell by calling `cell.addFavoritesButton.removeFromSuperview()`, it would not be added back again for you when the cell is reused. You should keep the button on the cell, but hide it with
```
cell.addFavoritesButton.isHidden = indexPath.section == 0
```
or add a new feature that lets end-users remove items from section zero, and change the picture on the button from `+` to `-`:
[![enter image description here](https://i.stack.imgur.com/nOPAR.png)](https://i.stack.imgur.com/nOPAR.png)
Upvotes: 1 |
2018/03/22 | 596 | 2,183 | <issue_start>username_0: ```
SELECT property.paon, property.saon, property.street, property.postcode, property.lastSalePrice, property.lastTransferDate,
epc.ADDRESS1, epc.POSTCODE, epc.TOTAL_FLOOR_AREA,
(
3959 * acos (
cos (radians(54.6921))
* cos(radians(property.latitude))
* cos(radians(property.longitude) - radians(-1.2175))
+ sin(radians(54.6921))
* sin(radians(property.latitude))
)
) AS distance
FROM property
RIGHT JOIN epc ON property.postcode = epc.POSTCODE AND CONCAT(property.paon, ', ', property.street) = epc.ADDRESS1
WHERE property.paon IS NOT NULL AND epc.TOTAL_FLOOR_AREA > 0
GROUP BY CONCAT(property.paon, ', ', property.street)
HAVING distance < 1.4
ORDER BY property.lastTransferDate DESC
LIMIT 10
```
table property has 22 million rows, table epc has 14 million rows
Without the GROUP BY and ORDER BY, it runs fast.
Property table often has the same property multiple times, but I need to select the one with the most current lastTransferDate.
If there is a better approach I'm open to it
Here is the explain of query:
[Query-Explain-Image](https://i.stack.imgur.com/OtG5N.png)<issue_comment>username_1: Do you control the database? If you do, you could try adding indexes on the address and postcode columns (anything in the join clause). That should probably speed up the query.
Edit: reread your question.
If the GROUP BY and ORDER BY clauses are slowing it down, I would try adding indexes on the columns referenced in those clauses.
Upvotes: 0 <issue_comment>username_2: You can do a few things:
* Create a new column so you don't need to use CONCAT `CONCAT(property.paon, ', ', property.street)` in the `GROUP BY` and the `JOIN` (this will speed it up a lot!)
* As username_1 says you need to create indexes at the right spot. (property postcode and the newly created column, and epc postcode and address)
* Remove the `HAVING` with `epc.TOTAL_FLOOR_AREA > 0` and add it to the `WHERE`
If you need more help, share an EXPLAIN of your query with us.
Upvotes: 1 |
2018/03/22 | 856 | 3,104 | <issue_start>username_0: I am working on Aspect Based Sentiment Analysis.In this project we collected data from twitter. After collecting data we performed text cleaning methods and create a corpus. After that we used this corpus to find the aspects using noun\_phrases in python.It gives me the list of noun phrases. From this list i want to select only those aspects which contain only two words. How can i do that?
Here is my code and generated output:
```
from textblob import Word
comments = TextBlob(' '.join(corpus))
comments.noun_phrases
cleaned = list()
for phrase in comments.noun_phrases:
count = 0
for w in phrase.split():
# Count the number of small words and words without an English definition
if len(w) <= 2 or (not Word(w).definitions):
count += 1
# Only if the 'nonsensical' or short words DO NOT make up more than 40% (arbitrary) of the phrase add
# it to the cleaned list, effectively pruning the ones not added.
if count < len(phrase.split())*0.4:
cleaned.append(phrase)
print("After compactness pruning:\nFeature Size:")
print(cleaned)
```
Output:
['worth free food k retweet pleas', 'specif waiter job', 'red blend', 'old idea suddenli', 'global focus', 'local issu lot', 'africa food', 'food truck', 'space avail netbal woman footbal amp squash', 'week world cup', 'minor sign confess', 'french fri coupl day', 'great stuff ban plastic straw serv local produc ta xe x xa b differ food home food school home', 'stale croissant', 'thing time', 'great time saver bc', 'clean chop alreadi', 'fake news unit alreadi', 'sure food amp cosmet', 'long food', 'dog china american', 'trade china till', 'warm color', 'yellow orang', 'fast food restaur', 'yellow orang', 'emerg food parcel', 'junk food label parti size', 'share water check systemsthink', 'earth food', 'care chihuahua yappi requir food sleep', 'new cloth', 'dose moron', 'afraid poor rise peopl', 'friend feed', 'wrong shit', 'good guy', 'good bad guy', 'food pension livelihood', 'food fur babi fun stay']
From this we want to select only those noun phrases which contain only two words such as 'red blend', 'food truck', 'stale croissant', etc. How can I do that?
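A minimal sketch of one way to do that final filtering, assuming `cleaned` is the list printed above:

```
two_word_aspects = [phrase for phrase in cleaned if len(phrase.split()) == 2]
print(two_word_aspects)  # e.g. ['red blend', 'food truck', 'stale croissant', ...]
```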
2018/03/22 | 281 | 1,089 | <issue_start>username_0: As I asked in the title, can I be added as a team member to an Enterprise Apple developer team without having a paid Apple developer account? I can't find anything on that.
2018/03/22 | 4,948 | 14,596 | <issue_start>username_0: Hey guys I am trying to build a setup for my application but it does not seem to want to work.
I ran it with the -l switch to get the install log; it looks like it is looking for a folder called "_8D5342EDA5924BABBF19E5682639F820", but that folder doesn't exist in the development environment. Where is it getting this from?
here is the log file: <https://pastebin.com/Y8BRAFjA>
```
=== Logging started: 3/22/2018 12:26:49 ===
Action 12:26:49: INSTALL.
Action start 12:26:49: INSTALL.
Action 12:26:49: DIRCA_CheckFX.
Action start 12:26:49: DIRCA_CheckFX.
INFO : [03/22/2018 12:26:50:124] [CheckFX ]: Custom Action is starting...
INFO : [03/22/2018 12:26:50:124] [CheckFX ]: CoInitializeEx - COM initialization Apartment Threaded...
INFO : [03/22/2018 12:26:50:124] [CheckFX ]: MsiGetPropertyW - Determine size of property 'VSDFrameworkVersion'
INFO : [03/22/2018 12:26:50:124] [CheckFX ]: Allocating space...
INFO : [03/22/2018 12:26:50:124] [CheckFX ]: MsiGetPropertyW - Getting Property 'VSDFrameworkVersion'...
INFO : [03/22/2018 12:26:50:124] [CheckFX ]: Property 'VSDFrameworkVersion' retrieved with value 'v4.6.1'.
INFO : [03/22/2018 12:26:50:124] [CheckFX ]: MsiGetPropertyW - Determine size of property 'VSDFrameworkProfile'
INFO : [03/22/2018 12:26:50:124] [CheckFX ]: Property 'VSDFrameworkProfile' retrieved with value ''.
INFO : [03/22/2018 12:26:50:124] [CheckFX ]: Set VSDNETMSG with the FrameworkVersion.
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: MsiGetPropertyW - Determine size of property 'VSDNETMSG'
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: Allocating space...
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: MsiGetPropertyW - Getting Property 'VSDNETMSG'...
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: Property 'VSDNETMSG' retrieved with value 'This setup requires the .NET Framework version [1]. Please install the .NET Framework and run this setup again.'.
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: MsiSetPropertyW - Setting Property Value...
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: MsiSetPropertyW - Setting property 'VSDNETMSG' to 'This setup requires the .NET Framework version v4.6.1. Please install the .NET Framework and run this setup again.'.
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: MsiGetPropertyW - Determine size of property 'VSDNETURLMSG'
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: Allocating space...
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: MsiGetPropertyW - Getting Property 'VSDNETURLMSG'...
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: Property 'VSDNETURLMSG' retrieved with value 'This setup requires the .NET Framework version [1]. Please install the .NET Framework and run this setup again. The .NET Framework can be obtained from the web. Would you like to do this now?'.
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: MsiSetPropertyW - Setting Property Value...
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: MsiSetPropertyW - Setting property 'VSDNETURLMSG' to 'This setup requires the .NET Framework version v4.6.1. Please install the .NET Framework and run this setup again. The .NET Framework can be obtained from the web. Would you like to do this now?'.
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: MsiSetPropertyW - Setting Property Value...
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: MsiSetPropertyW - Setting property 'VSDFXAvailable' to 'TRUE'.
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: Writing config file with version: '4.0'...
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: Creating Config File...
DEBUG : [03/22/2018 12:26:50:134] [CheckFX ]: Calling MsiGetActiveDatabase...
DEBUG : [03/22/2018 12:26:50:134] [CheckFX ]: Calling MsiDatabaseOpenView...
DEBUG : [03/22/2018 12:26:50:134] [CheckFX ]: Calling MsiViewExecute...
DEBUG : [03/22/2018 12:26:50:134] [CheckFX ]: Calling MsiViewFetch...
DEBUG : [03/22/2018 12:26:50:134] [CheckFX ]: Calling MsiRecordDataSize...
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: MsiSetPropertyW - Setting Property Value...
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: MsiSetPropertyW - Setting property 'VSDFxConfigFile' to 'C:\Users\EZEKIE~1\AppData\Local\Temp\CFG81DE.tmp'.
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: Custom Action succeeded.
INFO : [03/22/2018 12:26:50:134] [CheckFX ]: Custom Action completed with return code: '0'
Action ended 12:26:50: DIRCA_CheckFX. Return value 1.
Action 12:26:50: AppSearch. Searching for installed applications Property: [1], Signature: [2]
Action start 12:26:50: AppSearch.
Action ended 12:26:50: AppSearch. Return value 1.
Action 12:26:50: VSDCA_VsdLaunchConditions.
Action start 12:26:50: VSDCA_VsdLaunchConditions.
INFO : [03/22/2018 12:26:50:164] [VsdLaunchConditions ]: Custom Action is starting...
INFO : [03/22/2018 12:26:50:174] [VsdLaunchConditions ]: CoInitializeEx - COM initialization Apartment Threaded...
INFO : [03/22/2018 12:26:50:174] [VsdLaunchConditions ]: Enumerating table using SQL statement: 'SELECT * FROM `_VsdLaunchCondition`'
INFO : [03/22/2018 12:26:50:174] [VsdLaunchConditions ]: Calling MsiGetActiveDatabase...
INFO : [03/22/2018 12:26:50:174] [VsdLaunchConditions ]: MsiDatabaseOpenViewW - Prepare Database to view table...
INFO : [03/22/2018 12:26:50:174] [VsdLaunchConditions ]: TMsiViewExecute - Open Database view on table...
INFO : [03/22/2018 12:26:50:174] [VsdLaunchConditions ]: Checking a launch condition...
INFO : [03/22/2018 12:26:50:174] [VsdLaunchConditions ]: Getting the condition to evaluate...
INFO : [03/22/2018 12:26:50:174] [VsdLaunchConditions ]: MsiRecordGetStringW - Fetching value...
INFO : [03/22/2018 12:26:50:174] [VsdLaunchConditions ]: MsiRecordGetStringW - Getting value from column '1'...
INFO : [03/22/2018 12:26:50:174] [VsdLaunchConditions ]: Evaluating condition 'VSDFXAvailable'...
INFO : [03/22/2018 12:26:50:174] [VsdLaunchConditions ]: RESULT: Condition is true. Nothing more to do.
INFO : [03/22/2018 12:26:50:174] [VsdLaunchConditions ]: Custom Action succeeded.
INFO : [03/22/2018 12:26:50:174] [VsdLaunchConditions ]: Custom Action completed with return code: '0'
Action ended 12:26:50: VSDCA_VsdLaunchConditions. Return value 1.
Action 12:26:50: LaunchConditions. Evaluating launch conditions
Action start 12:26:50: LaunchConditions.
Action ended 12:26:50: LaunchConditions. Return value 1.
Action 12:26:50: CCPSearch. Searching for qualifying products
Action start 12:26:50: CCPSearch.
Action ended 12:26:50: CCPSearch. Return value 1.
Action 12:26:50: RMCCPSearch. Searching for qualifying products
Action start 12:26:50: RMCCPSearch.
Action ended 12:26:50: RMCCPSearch. Return value 0.
Action 12:26:50: ValidateProductID.
Action start 12:26:50: ValidateProductID.
Action ended 12:26:50: ValidateProductID. Return value 1.
Action 12:26:50: DIRCA_TARGETDIR.
Action start 12:26:50: DIRCA_TARGETDIR.
Action ended 12:26:50: DIRCA_TARGETDIR. Return value 1.
Action 12:26:50: CostInitialize. Computing space requirements
Action start 12:26:50: CostInitialize.
Action ended 12:26:50: CostInitialize. Return value 1.
Action 12:26:50: FileCost. Computing space requirements
Action start 12:26:50: FileCost.
Action ended 12:26:50: FileCost. Return value 1.
Action 12:26:50: IsolateComponents.
Action start 12:26:50: IsolateComponents.
Action ended 12:26:50: IsolateComponents. Return value 1.
Action 12:26:50: CostFinalize. Computing space requirements
Action start 12:26:50: CostFinalize.
DEBUG: Error 2727: The directory entry '_8D5342EDA5924BABBF19E5682639F820' does not exist in the Directory table
Info 2898.For VSI_MS_Sans_Serif13.0_0_0 textstyle, the system created a 'MS Sans Serif' font, in 0 character set, of 24 pixels height.
DEBUG: Error 2835: The control ErrorIcon was not found on dialog ErrorDialog
The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2835. The arguments are: ErrorIcon, ErrorDialog,
The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2727. The arguments are: _8D5342EDA5924BABBF19E5682639F820, ,
MSI (c) (18:78) [12:26:51:254]: Product: Setup1 -- The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2727. The arguments are: _8D5342EDA5924BABBF19E5682639F820, ,
Action ended 12:26:51: CostFinalize. Return value 3.
Action 12:26:51: FatalErrorForm.
Action start 12:26:51: FatalErrorForm.
DEBUG: Error 2826: Control Line1 on dialog FatalErrorForm extends beyond the boundaries of the dialog to the right by 4 pixels
The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2826. The arguments are: FatalErrorForm, Line1, to the right
DEBUG: Error 2826: Control Line2 on dialog FatalErrorForm extends beyond the boundaries of the dialog to the right by 4 pixels
The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2826. The arguments are: FatalErrorForm, Line2, to the right
DEBUG: Error 2826: Control BannerBmp on dialog FatalErrorForm extends beyond the boundaries of the dialog to the right by 4 pixels
The installer has encountered an unexpected error installing this package. This may indicate a problem with this package. The error code is 2826. The arguments are: FatalErrorForm, BannerBmp, to the right
Info 2898.For VsdDefaultUIFont.524F4245_5254_5341_4C45_534153783400 textstyle, the system created a 'MS Sans Serif' font, in 0 character set, of 24 pixels height.
Info 2898.For VSI_MS_Sans_Serif16.0_1_0 textstyle, the system created a 'MS Sans Serif' font, in 0 character set, of 29 pixels height.
Action 12:26:51: FatalErrorForm. Dialog created
Action ended 12:26:52: FatalErrorForm. Return value 1.
Action ended 12:26:52: INSTALL. Return value 3.
=== Logging stopped: 3/22/2018 12:26:52 ===
MSI (c) (18:78) [12:26:52:777]: Windows Installer installed the product. Product Name: Setup1. Product Version: 1.0.0. Product Language: 1033. Manufacturer: Default Company Name. Installation success or error status: 1603.
```<issue_comment>username_1: I had a similar problem. I create configuration projects (created as an "empty" project type in VS) and include the "Content" of that project in the installer. The binaries get included as a merge module.
In my configuration project, there was a subdirectory named "Policies" with several files in it. All of my "content" files in this directory showed relative path "Policies\*filename*" except for one... it showed "**p**olicies\*filename*". Note the lower-case "p". I removed and re-added the file, which then showed up with relative path "**P**olicies\*filename*" and this fixed the 2727 error.
Weird, but it worked.
Upvotes: 1 <issue_comment>username_2: I had exactly the same problem. In the end, I discovered that it was caused by satellite assemblies of my project's dependencies. No idea why :-(
The problem was solved after I added the following line to the **.csproj** file of the project whose published files were used by the installer:
```
<SatelliteResourceLanguages>en</SatelliteResourceLanguages>
```
After this is added, there will be no language-specific folders in **bin** with names like **fr-FR**, **zh-Hans** etc. and setup project will generate MSI that no longer throws 2727 error.
It would be great if anyone knows explanation of this and what would you do if you wanted satellite assemblies to be part of the installer.
Upvotes: 2 <issue_comment>username_3: I've just had the same issue so - for the future generations - I will leave the resolution here:
Probably the file definitions that come from VS are wrong. So first of all you have to check the output files by going to:
1. Properties of the output group
2. Output field and clicking "..."
3. It will show you the result, files that will be copied
4. So you have to validate if the list is OK and there's no duple slash like `\\`
[![VDProj editing the output](https://i.stack.imgur.com/qCyz4.png)](https://i.stack.imgur.com/qCyz4.png)
That was the issue for me. I edited the visual studio project to force it to copy web files from parent directory to the project by:
```
PreserveNewest
wwwroot%(RecursiveDir)\%(Filename)%(Extension)
```
And there was `\` that was resolved to `\\` and caused this issue.
Upvotes: 1 <issue_comment>username_4: I was also having the exact same error, but a slightly different cause than the other answers. I have a setup project which just has my main project's publish items. [This is what I saw when I inspected my publish items output in the setup project](https://i.stack.imgur.com/JR6fR.png).
The issue was that the installer was trying to copy all files related to one nuget package (CefGlue.Common) to a path starting with `\`. I looked at the package's repo and found that it was trying to copy these files to a path like `$(OutputDirectory)\`. My fix was just to define the property in my main project's .csproj like so: `.`
After making that change, the publish item output now looked like [this](https://i.stack.imgur.com/VyUq3.png).
Upvotes: 0 |
2018/03/22 | 1,638 | 4,633 | <issue_start>username_0: I'm trying to host an Orchard CMS web application. After following the [build and deploy](http://docs.orchardproject.net/en/latest/Documentation/Building-and-deploying-Orchard-from-a-source-code-drop/) instructions and filling out the setup page the application hangs on the message 'This tenant is currently initializing. Please try again later.'.
Two messages are being logged, but I don't have a clue where to start to investigate this problem.
**orchard-recipes-2018.03.22.log**
```
2018-03-22 17:45:38,727 Orchard.Recipes.Services.RecipeManager - Default - INFO Executing recipe 'Default'. [ExecutionId=cd0f58a966e04b4386bf9c03ed507f99]
2018-03-22 17:45:38,773 Orchard.Recipes.Services.RecipeStepQueue - Default - INFO Enqueuing recipe step 'Feature'. [ExecutionId=cd0f58a966e04b4386bf9c03ed507f99]
2018-03-22 17:45:38,825 Orchard.Recipes.Services.RecipeStepQueue - Default - INFO Enqueuing recipe step 'ContentDefinition'. [ExecutionId=cd0f58a966e04b4386bf9c03ed507f99]
2018-03-22 17:45:38,843 Orchard.Recipes.Services.RecipeStepQueue - Default - INFO Enqueuing recipe step 'Settings'. [ExecutionId=cd0f58a966e04b4386bf9c03ed507f99]
2018-03-22 17:45:38,857 Orchard.Recipes.Services.RecipeStepQueue - Default - INFO Enqueuing recipe step 'Migration'. [ExecutionId=cd0f58a966e04b4386bf9c03ed507f99]
2018-03-22 17:45:38,873 Orchard.Recipes.Services.RecipeStepQueue - Default - INFO Enqueuing recipe step 'Command'. [ExecutionId=cd0f58a966e04b4386bf9c03ed507f99]
2018-03-22 17:45:38,907 Orchard.Recipes.Services.RecipeStepQueue - Default - INFO Enqueuing recipe step 'ActivateShell'. [ExecutionId=(null)]
2018-03-22 17:45:39,268 Orchard.Recipes.Services.RecipeStepQueue - Default - INFO Dequeuing recipe steps. [ExecutionId=cd0f58a966e04b4386bf9c03ed507f99]
2018-03-22 17:45:39,273 Orchard.Recipes.Services.RecipeStepQueue - Default - INFO Dequeuing recipe step 'Feature'. [ExecutionId=cd0f58a966e04b4386bf9c03ed507f99]
```
**orchard-error-2018.03.22.log**
```
2018-03-22 17:45:39,574 [6] Orchard.Exceptions.DefaultExceptionPolicy - Default - An unexpected exception was caught [http://.../]
Orchard.OrchardCoreException: Unable to make room for file "D:\Inetpub\vhosts\...\httpdocs\App_Data\RecipeQueue\cd0f58a966e04b4386bf9c03ed507f99\0" in "App_Data" folder ---> System.UnauthorizedAccessException: Access to the path 'D:\Inetpub\vhosts\...\httpdocs\App_Data\RecipeQueue\cd0f58a966e04b4386bf9c03ed507f99\0' is denied.
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.File.InternalDelete(String path, Boolean checkHost)
at System.IO.File.Delete(String path)
at Orchard.FileSystems.AppData.AppDataFolder.MakeDestinationFileNameAvailable(String destinationFileName) in C:\...\Orchard\src\Orchard\FileSystems\AppData\AppDataFolder.cs:line 80
--- End of inner exception stack trace ---
at Orchard.FileSystems.AppData.AppDataFolder.MakeDestinationFileNameAvailable(String destinationFileName) in C:\...\Orchard\src\Orchard\FileSystems\AppData\AppDataFolder.cs:line 86
at Orchard.FileSystems.AppData.AppDataFolder.DeleteFile(String path) in C:\...\Orchard\src\Orchard\FileSystems\AppData\AppDataFolder.cs:line 150
at Orchard.Recipes.Services.RecipeStepQueue.Dequeue(String executionId)
at Orchard.Recipes.Services.RecipeStepExecutor.ExecuteNextStep(String executionId)
at Orchard.Recipes.Services.RecipeScheduler.ExecuteWork(String executionId)
at Orchard.Events.DelegateHelper.<>c__DisplayClass14_0`2.b\_\_0(Object target, Object[] p) in C:\...\Orchard\src\Orchard\Events\DelegateHelper.cs:line 116
at Orchard.Events.DefaultOrchardEventBus.TryInvokeMethod(IEventHandler eventHandler, Type interfaceType, String messageName, String interfaceName, String methodName, IDictionary`2 arguments, IEnumerable& returnValue) in C:\...\Orchard\src\Orchard\Events\DefaultOrchardEventBus.cs:line 83
at Orchard.Events.DefaultOrchardEventBus.TryNotifyHandler(IEventHandler eventHandler, String messageName, String interfaceName, String methodName, IDictionary`2 eventData, IEnumerable& returnValue) in C:\...\Orchard\src\Orchard\Events\DefaultOrchardEventBus.cs:line 53
```<issue_comment>username_1: Turns out, I had to give the App\_Data folder "Full-control" permissions. And not only "Write" as it says in the [docs](http://docs.orchardproject.net/en/latest/Documentation/Building-and-deploying-Orchard-from-a-source-code-drop/)
Upvotes: 2 [selected_answer]<issue_comment>username_2: Change to "modify". It's sufficient. Look also [here](http://docs.orchardproject.net/en/latest/Documentation/Manually-installing-Orchard-zip-file/)
Upvotes: 0 |
2018/03/22 | 292 | 1,118 | <issue_start>username_0: I have a small BLE beacon which is configured to send iBeacon packets every 1000ms.
In my use case I want to detect the signal on multiple receivers every time it is sent. However, the detection is not reliable no matter which receiving device and software I use (phone, computer, Raspberry Pi). The signal is sometimes detected after 2 seconds, another time after 5, 6 or whatever. It seems like there is no pattern behind it.
Also it seems that sometimes the signal is received on one receiver but not on the others while definitely being in range! Also, the area I am testing in is not "problematic".
What could be the problem?
2018/03/22 | 865 | 3,141 | <issue_start>username_0: We have a fairly simple function called `alert` which basically creates an alert card (HTML element) anytime it is triggered. For reference we are using [Eel](https://github.com/ChrisKnott/Eel) to pass variables from Python and run this in a chrome wrapper.
```
eel.expose(alert);
function alert(serial, time_key, card_color, screen_msg, ping) {
    //clone card_template for each new alert
    var clone = $("#card_template").clone();
    clone.attr("id", serial);
    clone.find("#message-card").attr("id", "message-card-" + serial + "-" + time_key);
    clone.find("#python-data").attr("id", "python-data-" + serial + "-" + time_key);
    //append clone on the end
    $("#message-zone").prepend(clone);
    document.getElementById("message-card-" + serial + "-" + time_key).classList.remove('bg-info');
    document.getElementById("message-card-" + serial + "-" + time_key).className += card_color;
    document.getElementById("python-data-" + serial + "-" + time_key).innerHTML = screen_msg;
    var button_template = '<button type="button" class="btn btn-sm btn-danger">Clear</button>';
    var selector_id = 'python-data-' + serial + '-' + time_key;
    // $('#python-data-'+ serial + '-' + time_key).append(button_template);
    $('#'+ selector_id).append(button_template);
    $('#python-data').append(button_template);
    if (ping === true)
        document.getElementById('alert').play();
}
```
It clones and alters this element based on the type of alert that is received.
```
No messages
```
So this is where we are losing it. We want to append a HTML block to the cards after they are cloned and given unique ids. We have created a variable `button_template` that contains that code block. We can insert this code block easily into an element with a hardcoded id.
For example:
```
$('#python-data').append(button_template);
```
Will append the code block to the `#python-data` div in the original (clone source) card.
But we can't seem to get it to work when our selector is assembled from variables (necessary to address the cloned alert cards with unique ids).
Neither:
```
var selector_id = 'python-data-' + serial + '-' + time_key;
$('#'+ selector_id).append(button_template);
```
or
```
$('#'+ 'python-data-' + serial + '-' + time_key).append(button_template);
```
Works on the newly cloned cards.
**TLDR** Everything else on this function works. It clones our starting element, gives it, and its children, a unique id (assembled from variable), removes a class, adds a class and writes a message to the innermost div. This all works. All we need help with is appending an HTML block to a div with a unique, variable-based id.
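A minimal debugging sketch (it reuses the names from the question and assumes it runs inside `alert()` after the ids have been assigned); logging the assembled id and appending via the element reference side-steps any doubt about the selector string itself:

```
var selector_id = 'python-data-' + serial + '-' + time_key;
var target = document.getElementById(selector_id); // the same element the innerHTML line wrote to
console.log(selector_id, target);                  // if target is null, the assembled id matches no node
if (target) {
    $(target).append(button_template);             // append via the element reference instead of a re-query
}
```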
2018/03/22 | 531 | 1,969 | <issue_start>username_0: Upon trying to use a jar on the local linux machine, I am getting the following error:
library initialization failed - unable to allocate file descriptor table - out of memory
The machine has 32G RAM
I can provide additional information, if needed.
Any help would be appreciated.<issue_comment>username_1: In recent versions of Linux the default limit for the number of open files has been increased significantly. Java 8 makes the mistake of trying to allocate memory upfront for this number of file descriptors (see <https://bugs.openjdk.java.net/browse/JDK-8150460>). Previously this worked, when the default limit was much lower, but now it tries to allocate too much and fails. The workaround is to set a lower limit on the number of open files (or use a newer Java):
```
$ mvn
library initialization failed - unable to allocate file descriptor table - out of memoryAborted
$ ulimit -n 10000
$ mvn
[INFO] Scanning for projects...
...
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Had this happen to me on some Java applications and curiously all electron-based apps (such as Spotify) after upgrading my Manjaro Linux about a week ago (today is November 19, 2019).
What fixed it was this command (as root; sudo didn't do it):
```
echo >/etc/security/limits.d/systemd.conf "* hard nofile 1048576"
```
Then **reboot**
Hope this helps someone.
Upvotes: 1 <issue_comment>username_3: None of the other fixes I found online worked for me, however I noticed that the [bug responsible for this defect](https://bugs.openjdk.java.net/browse/JDK-8150460) is in Java 9, and has been resolved since.
I'm on ArchLinux so I notice that when I tried to start the `elasticsearch.service` in `journalctl -xe` it showed that for some reason JRE8 was running it, and indeed `archlinux-java status` showed that Java 8 was the default. Setting to Java 11 fixed the problem for me:
```
# archlinux-java set java-11-openjdk
```
Upvotes: 1 |
2018/03/22 | 979 | 2,438 | <issue_start>username_0: I'm moving from floats to flexbox I have some issues.
I have the following code:
```css
.card {
display: flex;
flex-direction: row;
border: 1px solid green;
margin-bottom: 0.8rem;
}
.card .image {
border-right: 1px solid #E3E3E3;
padding: 1.2rem;
vertical-align: middle;
}
.card .body {
padding: 1.2rem;
display: flex;
flex-direction: column;
}
.card .logo {
margin-bottom: 0.8rem;
}
title { padding: 0 0 0.8rem 0;}
text { padding: 0}
```
```html
![](../img/product_image_sample1.png)
![](logo_sample.png)
```
I want the `.image` to be vertically aligned in the middle, and to have bottom padding between `logo, title and text`;
I need to work with IE10, IE11 also.<issue_comment>username_1: I think you should use
```
.card {
display: flex;
flex-direction: row;
border: 1px solid green;
margin-bottom: 0.8rem;
  justify-content: center;
  align-items: center;
}
```
Also I found these videos very helpful
<https://www.youtube.com/watch?v=siKKg8Y_tQY>
<https://www.youtube.com/watch?v=RdlEEfx912M>
Upvotes: 0 <issue_comment>username_2: Issue 1: Flexbox support in IE10/IE11 is only partial and buggy (IE10 needs the old `-ms-` prefixed syntax), so you may still want floats as a fallback. (If this is workable for you add a comment below or ask a new question)
On centering check the css I edited. Add flexbox to item and use `align-items:center` to make it centered vertically.
In your markup you forgot to add the class for logo. That's why it doesn't work.
```css
.card {
display: flex;
flex-direction: row;
border: 1px solid green;
margin-bottom: 0.8rem;
}
.card .image {
border-right: 1px solid #E3E3E3;
padding: 1.2rem;
/*Make the item a flexbox item and center the item vertically*/
display: flex;
align-items: center;
}
.card .body {
padding: 1.2rem;
display: flex;
flex-direction: column;
}
.card .logo {
padding: 0 0 0.8rem 0;
}
title { padding: 0 0 0.8rem 0;}
text { padding: 0}
```
```html
![](https://images.unsplash.com/photo-1492257027549-2617847dc868?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=400&fit=max&ixid=eyJhcHBfaWQiOjE0NTg5fQ&s=494b1d11506d9e6babb320454e8a5798)
![](https://images.unsplash.com/photo-1492257027549-2617847dc868?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=400&fit=max&ixid=eyJhcHBfaWQiOjE0NTg5fQ&s=494b1d11506d9e6babb320454e8a5798)
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 605 | 2,265 | <issue_start>username_0: So I got an array with a ton of fields that are bound to
Now I also have a button in here. What I want is that, on click, it goes to a function onSubmit which also takes the value `session.gameId` and puts it in AsyncStorage.
```
{this.state.sessions.map((session, index) => {
return (
{session.title}
Organisator: {session.organizer}
Hoofd Thema: {session.mainTheme}
Aantal Deelnemers: {session.numberParticipants}
Game Id: {session.gameId}
Selecteer
```
But I have no idea how my onSubmit can both store the value in AsyncStorage and then change the page.
Please help<issue_comment>username_1: Not sure if I'm understanding your question correctly, but if you're trying to store the value of session.gameId into AsyncStorage and then change the page, your function may look something like this:
```
changePage() {// functionality to navigate to another page}
/*
* have onSubmit be an async function so you use the 'await' keyword
* await forces an asynchronous line of code to wait until the operations is done before moving forward
*/
async onSubmit(gameId) {
try {
await AsyncStorage.setItem('@store:key', gameId)
} catch(error) {
// handle error
}
// however you are changing page, handle it here
// this code wont run until gameId has been stored in async storage
this.changePage()
}
```
You would also need to pass the gameId to the function to actually call it now:
`onPress={() => this.onSubmit(session.gameId)}`
Take a look at how [async](https://tutorialzine.com/2017/07/javascript-async-await-explained) functions can make your life easier :)
Upvotes: 1 <issue_comment>username_2: I'm answering this assuming that when you say your `onSubmit` triggers "an event to change page", you mean that it navigates to another screen.
If so, you seem to be asking for something like this:
```js
onSubmit = async gameId => {
try {
await AsyncStorage.setItem('gameId', gameId)
// Success: navigate away here
this.props.goToMyOtherScreen()
} catch {
// Handle error
}
}
```
To get `gameId` into your submit handler, you could use an inline anonymous function:
```js
this.onSubmit(session.gameId)}>
```
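For completeness, a minimal sketch (not from the original answers) of reading the stored id back on the destination screen; `AsyncStorage.getItem` returns a promise just like `setItem` and resolves to `null` when nothing has been stored under that key:

```js
import { AsyncStorage } from 'react-native'; // same AsyncStorage the answers above assume

async function loadGameId() {
  try {
    return await AsyncStorage.getItem('gameId');
  } catch (error) {
    // handle the read error however fits your app
    return null;
  }
}
```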
Upvotes: 0 |
2018/03/22 | 1,526 | 4,739 | <issue_start>username_0: I am looking at setting up some unit tests for a NodeJS project, but I am wondering how to mock up my usage of AWS services. I am using a wide variety: SNS, SQS, DynamoDB, S3, ECS, EC2, Autoscaling, etc. Does anybody have any good leads on how I might mock these up?<issue_comment>username_1: I just spent hours trying to get AWS SQS mocking working, *without* resorting to the `aws-sdk-mock` requirement of importing `aws-sdk` clients inside a function.
The mocking for `AWS.DynamoDB.DocumentClient` was pretty easy, but the `AWS.SQS` mocking had me stumped until I came across the suggestion to use [rewire](https://github.com/jhnns/rewire).
My lambda moves bad messages to a SQS FailQueue (rather than letting the Lambda fail and return the message to the regular Queue for retries, and then DeadLetterQueue after maxRetries). The unit tests needed to mock the following SQS methods:
* `SQS.getQueueUrl`
* `SQS.sendMessage`
* `SQS.deleteMessage`
I'll try to keep this example code as concise as I can while still including *all* the relevant parts:
Snippet of my AWS Lambda (index.js):
```
const AWS = require('aws-sdk');
AWS.config.update({region:'eu-west-1'});
const docClient = new AWS.DynamoDB.DocumentClient();
const sqs = new AWS.SQS({ apiVersion: '2012-11-05' });
// ...snip
```
Abridged Lambda event records (event.json)
```
{
"valid": {
"Records": [{
"messageId": "c292410d-3b27-49ae-8e1f-0eb155f0710b",
"receiptHandle": "AQEBz5JUoLYsn4dstTAxP7/IF9+T1S994n3FLkMvMmAh1Ut/Elpc0tbNZSaCPYDvP+mBBecVWmAM88SgW7iI<KEY>
"body": "{ \"key1\": \"value 1\", \"key2\": \"value 2\", \"key3\": \"value 3\", \"key4\": \"value 4\", \"key5\": \"value 5\" }",
"attributes": {
"ApproximateReceiveCount": "1",
"SentTimestamp": "1536763724607",
"SenderId": "AROAJAAXYIAN46PWMV46S:<EMAIL>",
"ApproximateFirstReceiveTimestamp": "1536763724618"
},
"messageAttributes": {},
"md5OfBody": "e5b16f3a468e6547785a3454cfb33293",
"eventSource": "aws:sqs",
"eventSourceARN": "arn:aws:sqs:eu-west-1:123456789012:sqs-queue-name",
"awsRegion": "eu-west-1"
}]
}
}
```
Abridged unit test file (test/index.test.js):
```
const AWS = require('aws-sdk');
const expect = require('chai').expect;
const LamdbaTester = require('lambda-tester');
const rewire = require('rewire');
const sinon = require('sinon');
const event = require('./event');
const lambda = rewire('../index');
let sinonSandbox;
function mockGoodSqsMove() {
const promiseStubSqs = sinonSandbox.stub().resolves({});
const sqsMock = {
getQueueUrl: () => ({ promise: sinonSandbox.stub().resolves({ QueueUrl: 'queue-url' }) }),
sendMessage: () => ({ promise: promiseStubSqs }),
deleteMessage: () => ({ promise: promiseStubSqs })
}
lambda.__set__('sqs', sqsMock);
}
describe('handler', function () {
beforeEach(() => {
sinonSandbox = sinon.createSandbox();
});
afterEach(() => {
sinonSandbox.restore();
});
describe('when SQS message is in dedupe cache', function () {
beforeEach(() => {
// mock SQS
mockGoodSqsMove();
// mock DynamoDBClient
const promiseStub = sinonSandbox.stub().resolves({'Item': 'something'});
sinonSandbox.stub(AWS.DynamoDB.DocumentClient.prototype, 'get').returns({ promise: promiseStub });
});
it('should return an error for a duplicate message', function () {
return LamdbaTester(lambda.handler)
.event(event.valid)
.expectReject((err, additional) => {
expect(err).to.have.property('message', 'Duplicate message: {"Item":"something"}');
});
});
});
});
```
Upvotes: 1 <issue_comment>username_2: Take a look at [LocalStack](https://github.com/localstack/localstack). It provides an easy-to-use test/mocking framework for developing AWS-related applications by spinning up AWS-compatible APIs on your local machine or in Docker. It supports about two dozen AWS APIs, and SQS is among them. It is really a great tool for functional testing without needing a separate environment in AWS.
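To make that concrete, here is a minimal sketch (my own, not taken from the LocalStack docs) of pointing the Node.js AWS SDK at a locally running LocalStack instead of real AWS; the endpoint port and dummy credentials are assumptions and depend on your LocalStack version/configuration:

```
// Assumes LocalStack is already running locally.
const AWS = require('aws-sdk');

const sqs = new AWS.SQS({
  region: 'eu-west-1',
  endpoint: 'http://localhost:4576', // assumed LocalStack SQS endpoint - adjust to your setup
  accessKeyId: 'test',               // LocalStack accepts dummy credentials
  secretAccessKey: 'test'
});

// Any SQS call now hits the local emulator instead of AWS
sqs.listQueues({}, (err, data) => console.log(err || data.QueueUrls));
```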
Upvotes: 1 [selected_answer] |
2018/03/22 | 746 | 2,676 | <issue_start>username_0: In the below code, I'm using vert.x to create a route
```
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.ext.web.Router;
import java.util.function.Consumer;
public class VerticleMain extends AbstractVerticle {
@Override
public void start() throws Exception {
Router router = Router.router(vertx);
router.route().handler(routingContext -> {
routingContext.response()
.putHeader("content-type","text/html;charset=UTF-8")
.end("people");
});
vertx.createHttpServer().requestHandler(router::accept).listen(8181);
}
public static void deployVertx() {
String verticleId = VerticleMain.class.getName();
VertxOptions options = new VertxOptions();
Consumer runner = vertxStart -> {
vertxStart.deployVerticle(verticleId);
};
Vertx vertx = Vertx.vertx(options);
runner.accept(vertx);
}
public static void main(String[] args) {
VerticleMain.deployVertx();
}
}
```
However, when I tried executing the code again, the log is
```
java.net.BindException: Address already in use
```
If this port is used, I want to stop the process which occupied the port, and then execute the code. Is there any way to accomplish this goal?
I hope you can provide a simple example<issue_comment>username_1: Your code is absolutely fine. Please kill all the Java processes, or restart your machine, and try again; it should work fine. Only one import was missing, and I added that.
```
package com.americanexpress.digitalpayments.pipe;
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.ext.web.Router;
import java.util.function.Consumer;
public class VerticleMain extends AbstractVerticle {
public static void deployVertx() {
String verticleId = VerticleMain.class.getName();
VertxOptions options = new VertxOptions();
Consumer runner = vertxStart -> {
vertxStart.deployVerticle(verticleId);
};
Vertx vertx = Vertx.vertx(options);
runner.accept(vertx);
}
public static void main(String[] args) {
VerticleMain.deployVertx();
}
@Override
public void start() throws Exception {
Router router = Router.router(vertx);
router.route().handler(routingContext -> {
routingContext.response()
.putHeader("content-type", "text/html;charset=UTF-8")
.end("people");
});
vertx.createHttpServer().requestHandler(router::accept).listen(8181);
}
}
```
Upvotes: 1 <issue_comment>username_2: you need to kill the port:
for ubuntu:
```
sudo kill $(sudo lsof -t -i:8181)
```
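Alternatively, instead of killing the other process, you can let Vert.x report the failure and react to it yourself. A small sketch (my own, assuming Vert.x 3.x; what you do in the failure branch is up to you):

```
vertx.createHttpServer()
     .requestHandler(router::accept)
     .listen(8181, result -> {
         if (result.failed()) {
             // Port 8181 is already taken (java.net.BindException)
             System.err.println("Could not bind to 8181: " + result.cause());
             // e.g. retry on another port, or stop the other process first
         }
     });
```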
Upvotes: 0 |
2018/03/22 | 648 | 2,265 | <issue_start>username_0: I want to fetch data from database without reloading page. I am working with laravel. I have fetched those data using jquery.
Here is my jquery code.
```
function loadData() {
var id=$('#projectId').val();
$.ajax({
type: 'GET',
url: '/teammessage/'+id,
success: function(value){
console.log(value);
$(value.success).each(function(e,k){
$('#test').append(value.success[e].team_message);
});
}
});
}
```
Now I want to show that data without page reload if any new data inserted into the database.
I called this `loadData()` function in `setInterval`function like this one.
```
window.setInterval(function(){
loadData();
}, 1000);
```
But this `setInterval` function keeps the previous data and loads the data again. I want to show just the updated data. For example, if my view has `A`, `B` and a new `C` is inserted into the database by another user, it should show `A`, `B`, `C` without reloading the page.
2018/03/22 | 434 | 1,468 | <issue_start>username_0: >
> can we *modify* a private variable through **child class**.
> We can use it by set and get methods but how can we modify them in child class.
>
>
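For reference, a minimal Java sketch (mine, not from the original post) of the "set and get methods" approach the question mentions: the child class cannot touch the private field directly, but it can change it through the inherited setter.

```
class Parent {
    private int value;                          // not visible to subclasses

    public int getValue() { return value; }
    public void setValue(int value) { this.value = value; }
}

class Child extends Parent {
    void update() {
        setValue(42);                           // modifies Parent's private field via the setter
        System.out.println(getValue());         // prints 42
    }
}
```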
2018/03/22 | 679 | 2,003 | <issue_start>username_0: I need to access `TR` elements through `firstChild` or `lastChild` object, however using `ID` or `Name` property doesn't seem to be working:
<https://jsfiddle.net/7594v0kg/13/>
```
| | |
| --- | --- |
| 123 | 456 |
| 789 | 987 |
```
Javascript code:
```
var elt=document.getElementById("tbl")
var out=document.getElementById("out")
out.innerHTML="name="+ +elt.firstChild.name+",id="+elt.firstChild.id
```
the output is:
```
123 456
789 987
name=NaN,id=undefined
```
How should I modify my code to access the elements by `id` or `name`? This is needed to implement a deleting process but only children labeled with some values should remain.
JSFildle is provided
<https://jsfiddle.net/7594v0kg/13/><issue_comment>username_1: It is not actually valid to have `<tr>` elements as direct children of `<table>`. The browser silently corrects your invalid HTML by wrapping the rows in a `<tbody>`, so the table's first element child is that `<tbody>` rather than your row. You will notice this if you try `elt.children[0].nodeName`.
Note also that the whitespace between elements in your code counts as the `firstChild` - this is why I used `children[0]` above.
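As a side note (my own sketch, assuming the `name`/`id` attributes sit on the row elements), the table's `rows` collection skips over the implicit `<tbody>` and the whitespace nodes entirely:

```
var tbl = document.getElementById("tbl");
var firstRow = tbl.rows[0]; // always the first <tr>, regardless of the inserted <tbody>
console.log(firstRow.getAttribute("name"), firstRow.id);
```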
Upvotes: 2 <issue_comment>username_2: Hello friend, please try to do it with the jQuery plugin.
```
out.innerHTML="name="+$(elt).find('tr:first td:first').text()+",id="+$(elt).find('tr:first td:last').text();
```
This is the solution I found. Try it once in your given fiddle.
```js
var elt=document.getElementById("tbl")
var out=document.getElementById("out")
out.innerHTML="name="+$(elt).find('tr:first td:first').text()+",id="+$(elt).find('tr:first td:last').text();
```
```html
| | |
| --- | --- |
| 123 | 456 |
| 789 | 987 |
```
Upvotes: 0 <issue_comment>username_3: ```js
var elt=document.getElementById("tbl")
var out=document.getElementById("out")
out.innerHTML="name="+elt.firstElementChild.firstElementChild.getAttribute("name")+
",id="+elt.firstElementChild.firstElementChild.getAttribute("id")
```
```html
| | |
| --- | --- |
| 123 | 456 |
| 789 | 987 |
```
Hope this helps you. :)
Upvotes: 3 [selected_answer] |
2018/03/22 | 925 | 3,155 | <issue_start>username_0: This is the HTML for a form that takes input; on clicking Submit, all of the input should be displayed in a popup window. Currently I am only able to display the plain text fields in the popup. I am not able to show the text of the selected checkbox and dropdown options - only the numeric value for the dropdown and "on" for the checkbox, whether or not it is selected.
```
Registration Form
-----------------
Name
Address:
Gender:
Select
Male
Female
Other
Category:
Select
Open
OBC
SC/ST
Other
State:
Select
Pune
Chennai
Bangalore
District:
Select
A
A
A
B
B
B
C
C
C
Education Qualification
BE
MCA
Newsletter
[Submit](#popup_dialog)
Confirmation
------------
Name:
Address:
Gender:
Category:
State:
District:
Qualification:
[OK](#)
[Back](#)
```
This is the jQuery i have used for displaying the form input on the popup.
Also tell me if there is better way to write the entire jQuery.
```
< script >
$(document).ready(function() {
$("#select2").change(function() {
if ($(this).data('options') === undefined) {
$(this).data('options', $('#select3 option').clone());
}
var id = $(this).val();
var options = $(this).data('options').filter('[value=' + id + ']');
$('#select3').html(options);
});
});
function showMessage() {
var name = $("#fname").val();
var address = $("#textarea").val();
var gender = $("#select").val();
var category = $("#select1 option:selected").html();
var state = $("#select2").val();
var district = $("#select3").val();
var qualification = $("#checkbox1").val();
$("#display_fname").html(name);
$("#display_textarea").html(address);
$("#display_select").html(gender);
$("#display_select1").html(category);
$("#display_select2").html(state);
$("#display_select3").html(district);
$("#display_checkbox").html(qualification);
}
```
2018/03/22 | 557 | 1,928 | <issue_start>username_0: I am using `materialize.css` and have been trying to programmatically close a modal and then immediately open another.
This is the code I am currently using to `open/close`:
```
$('.modal').modal('close', "#modal1");
$('.modal').modal('open', "#modal2");
```
What actually happens is that when I call these functions, `#modal1` closes successfully, but its backdrop remains there as it is.
At exactly the same time, `#modal2` opens up and then *immediately* (in about 200 ms or so) closes back. Again, the backdrop stays unchanged.
Can I please get help to solve this?
The documentation calls `jQuery` *old*. So, am I using some deprecated method, and should I use a vanilla JavaScript equivalent instead? If yes, what would that be?<issue_comment>username_1: You can try:
`$('#modal1').modal('close');`
`$('#modal2').modal('open');`
Upvotes: 0 <issue_comment>username_2: For Materialize.js version v0.100 and above.
You can call an onClick function to close/open multiple modals like below:
```
[Button](#!)
```
In the script below, when the closeOpenModal() function is executed, it first closes the modal (#modal1) and then opens the modal (#modal2).
```
function closeOpenModal() {
$('#modal1').modal('close');
$('#modal2').modal('open');
}
```
Upvotes: 2 <issue_comment>username_3: ```
function close_modal() {
var elem = document.getElementById("element-id");
var instance = M.Modal.getInstance(elem);
instance.close();
}
```
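A short usage sketch for the snippet above (my assumption: Materialize 1.x, with the modals initialized before `getInstance` is called):

```
document.addEventListener('DOMContentLoaded', function () {
  M.Modal.init(document.querySelectorAll('.modal'));   // init once
});

function switchModals() {
  M.Modal.getInstance(document.getElementById('modal1')).close();
  M.Modal.getInstance(document.getElementById('modal2')).open();
}
```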
Upvotes: -1 <issue_comment>username_4: No-JQuery way. First you should get an instance of modal that you intend to close. If e.g. on init you write:
```
var elems = document.querySelectorAll(".modal");
M.Modal.init(elems);
```
so in `elems` we have *NodeList* of all modals, for example we have three modals, to close second one you can say:
```
elems[1].M_Modal.close();
```
to open third:
```
elems[2].M_Modal.open();
```
Upvotes: 0 |
2018/03/22 | 992 | 1,879 | <issue_start>username_0: If I had a function like this:
```
foo <- function(var) {
if(length(var) > 5) stop("can't be greater than 5")
data.frame(var = var)
}
```
Where this worked:
```
df <- 1:20
foo(var = df[1:5])
```
But this didn't:
```
foo(var = df)
```
The desired output is:
```
var
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
11 11
12 12
13 13
14 14
15 15
16 16
17 17
18 18
19 19
20 20
```
If I know that I can only run this function in chunk of 5 rows, what would be the best approach if I wanted to evaluate all 20 rows? Can I use `purrr::map()` for this? Assume that the 5 row constraint is rigid.
Thanks in advance.<issue_comment>username_1: We `split` `df` into chunks of 5 each, then use `purrr::map_dfr` to apply the `foo` function to every chunk and bind the results together by rows.
```
library(tidyverse)
foo <- function(var) {
if(length(var) > 5) stop("can't be greater than 5")
data.frame(var = var)
}
df <- 1:20
df_split <- split(df, (seq(length(df))-1) %/% 5)
df_split
map_dfr(df_split, ~ foo(.x))
var
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
11 11
12 12
13 13
14 14
15 15
16 16
17 17
18 18
19 19
20 20
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can use `dplyr::group_by` or `tapply` :
```
data.frame(df) %>%
mutate(grp = (row_number()-1) %/% 5) %>%
group_by(grp) %>%
mutate(var = foo(df)$var) %>%
ungroup %>%
select(var)
# # A tibble: 20 x 1
# var
#
# 1 1
# 2 2
# 3 3
# 4 4
# 5 5
# 6 6
# 7 7
# 8 8
# 9 9
# 10 10
# 11 11
# 12 12
# 13 13
# 14 14
# 15 15
# 16 16
# 17 17
# 18 18
# 19 19
# 20 20
data.frame(var=unlist(tapply(df,(df-1) %/% 5,foo)))
# var
# 01 1
# 02 2
# 03 3
# 04 4
# 05 5
# 11 6
# 12 7
# 13 8
# 14 9
# 15 10
# 21 11
# 22 12
# 23 13
# 24 14
# 25 15
# 31 16
# 32 17
# 33 18
# 34 19
# 35 20
```
Upvotes: 0 |
2018/03/22 | 547 | 1,956 | <issue_start>username_0: I want to render 7 buttons in a row with equal width to fill up all the available space horizontally.
```
render() {
return (
}
keyExtractor={item => item.getDate().toString()}
horizontal={true}
style={styles.list}
/>
);
}
const styles = StyleSheet.create({
list: {
display: 'flex'
},
button: {
flex: 1
}
});
```
They are inside a flatlist, wrapped in a view because I cannot style buttons directly.
In regular flexbox in an HTML page this approach works.
This is what I get in React Native:
[![enter image description here](https://i.stack.imgur.com/YMjFb.png)](https://i.stack.imgur.com/YMjFb.png)
Maybe there are some flatlist behaviour I'm not familiar with?<issue_comment>username_1: Have you tried adding `flex: 1` to the list so that it takes up the entire horizontal space?
Upvotes: 0 <issue_comment>username_2: in my point of view, try adding `justifyContent` as `space-between` in the style object **list**
Upvotes: 0 <issue_comment>username_3: First of all when you are styling in react native everything is already in flex so you don't have to do `display : flex`
If you want to render 7 buttons with the same width in a row make sure that all of them have the same parent. Then use `flex:1` in each one of them.
What happens when you put a number after `flex` in your styles is that, in this situation, the parent's space is divided into that many proportional parts and each child is given its share.
So all your button will have 1/7th of the width. If you put `flex:2` in one of the styles then that button will have 2/7th of the width and the rest will have 1/7th.
Please feel free to ask for more clarity on the above said.
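A minimal sketch of that idea (the `items` array and its fields are hypothetical; the point is the `flexDirection: 'row'` parent and `flex: 1` on every wrapper):

```
<View style={{ flexDirection: 'row' }}>
  {items.map(item => (
    <View key={item.key} style={{ flex: 1 }}>
      {/* each wrapper gets an equal 1/7th share when there are 7 items */}
      <Button title={item.label} onPress={() => {}} />
    </View>
  ))}
</View>
```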
Upvotes: 3 <issue_comment>username_4: in each item ,set this style:
```
width:width/7
```
Upvotes: 0 <issue_comment>username_5: I got the same issue, I have added View component cover for each Button like the code below:
```
//....
```
Upvotes: 3 |
2018/03/22 | 600 | 2,312 | <issue_start>username_0: I am trying to export Groupmembers list from azuread, my whole script works fairly well, but I need each new line in the final file to have the name of the file it is importing from (as well as the content it is importing)
the part of the script i am using to do this is as follows
(found this code here
[Merging multiple CSV files into one using PowerShell](https://stackoverflow.com/questions/27892957/merging-multiple-csv-files-into-one-using-powershell))
----------------------------------------------------------------------------------------------------------------------------------------------------------
```
get-childItem "C:\Users\user\Documents\Azure\Intune\management\*.csv" | foreach {
$filePath = $_
$lines = $lines = Get-Content $filePath | Select -Skip 1
$linesToWrite = switch($getFirstLine) {
$true {$lines}
$false {$lines | Select -Skip 1}
}
$getFirstLine = $false
Add-Content "C:\Users\user\Documents\Azure\Intune\management\master_list.csv" $linesToWrite
}
```
2018/03/22 | 794 | 2,286 | <issue_start>username_0: I'm used to typing `!!` in bash when I want to reference the last command executed in that shell.
```
$ ls -la
drwxr-xr-x 4 me wheel 136 Jan 19 2013 wireshark_stuff
... (etc) ...
-rw-r--r-- 1 me wheel 11 Mar 13 13:51 old_PS1
$ !! |grep for_something_in_those_results
ls -la |grep for_something_in_those_results
/grep_results
```
Is there a way to do this in python?
```
>>> complicated_dict.['long_key_name'][0]
(response)
>>> my_func(!!)
```
This would get really handy as the interpreter commands become increasingly complicated. Sure, I could just use a plethora of local variables - but sometimes it's handy to just invoke the last thing run...<issue_comment>username_1: The value of the last expression evaluated in the Python shell is available as `_`, ie the single underscore.
Upvotes: 2 <issue_comment>username_2: You can use the `_` character to reference the last calculated value, and use it in other calculations:
```
>>> x = 5
>>> x + 10
15
>>> _
15
>>> _ + 2
17
```
Upvotes: 2 <issue_comment>username_3: Using default Readline bindings, `Control`-`P` + `Enter` is probably the closest exact equivalent to `!!`; the first key fetches the previous command; the second executes it. You can probably add a custom binding to `.inputrc` to execute both functions with one keystroke. Note, though, this is entirely line-oriented; if you try to use this following a multi-line `for` statement, for example, you'll only get the very last line of the body, not the entire `for` statement.
The `_` variable stores the result of the last evaluated expression; it doesn't reevaluate, though. This can be seen most clearly with something like `datetime.datetime.now`:
```
>>> datetime.datetime.now()
datetime.datetime(2018, 3, 22, 14, 14, 50, 360944)
>>> datetime.datetime.now()
datetime.datetime(2018, 3, 22, 14, 14, 51, 665947)
>>> _
datetime.datetime(2018, 3, 22, 14, 14, 51, 665947)
>>> _
datetime.datetime(2018, 3, 22, 14, 14, 51, 665947)
>>> _
datetime.datetime(2018, 3, 22, 14, 14, 51, 665947)
>>> datetime.datetime.now()
datetime.datetime(2018, 3, 22, 14, 14, 58, 404816)
```
Upvotes: 1 <issue_comment>username_4: Up-arrow / return! As long as your interpreter was compiled with `readline` support.
Upvotes: 0 |
2018/03/22 | 806 | 2,889 | <issue_start>username_0: I noticed that when an error is thrown in a try/catch block within a Google Apps Script, `Logger.log(e instanceof Error)` returns true. But, when that same object is passed back to the client in the `catch` statement, it logs `false`.
**GAS sample**
```js
function isValid() {
return false
}
function testing() {
try {
if(!isValid()) { throw new Error("failure") }
return "success"
} catch(e) {
Logger.log(e instanceof Error) // true
return e // false in the client when tested with console.log(e instanceof Error)
}
}
```
**client**
```js
function foo() {
google.script.run.withSuccessHandler(onSuccess).withFailureHandler(onFailure).testing();
}
function onSuccess(e) {
console.log(e instanceof Error) // false
console.log(e) // null
}
function onFailure(e) {
console.log(e instanceof Error) // no result
}
```
Is there a better way to test for an error returned from the script file?<issue_comment>username_1: From the client side, you use a `.withFailureHandler(...)` when calling the [`.run.myFunction()`](https://developers.google.com/apps-script/guides/html/reference/run) code. The failure handler is a function in your client side code that will be called if an exception (i.e. error) is thrown in your server-side code *and not handled*.
Failure handlers will only be called if an exception is thrown. Otherwise, the success handler receives the server-side function's `return` value.
.gs
```
function myFn() {
try {
throw new Error("failure");
}
catch (e) {
Logger.log(e);
// Must rethrow to activate the client's FailureHandler function.
throw e;
}
return "success"
}
```
.html
```
function foo() {
google.script.run.withFailureHandler(logError).withSuccessHandler(useReturnValue).myFn();
}
function logError(error) {
console.log(error);
}
function useReturnValue(value) {
// do stuff
}
```
In the client side console, you will see the error get logged.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The problem is that your `e` variable never has an object assigned to it. Please note that the official guides use `error` as the error-handling function argument, but your code uses `data`, and in the console statement you reference `e` instead of using it.
In your client-side code, replace `data` with `e` or vice versa.
From <https://developers.google.com/apps-script/guides/html/reference/run#withFailureHandler(Function)>
### Code.gs
```
function doGet() {
return HtmlService.createHtmlOutputFromFile('Index');
}
function getUnreadEmails() {
// 'got' instead of 'get' will throw an error.
return GmailApp.gotInboxUnreadCount();
}
```
### index.html
```
function onFailure(error) {
var div = document.getElementById('output');
div.innerHTML = "ERROR: " + error.message;
}
google.script.run.withFailureHandler(onFailure)
.getUnreadEmails();
```
Upvotes: 0 |
2018/03/22 | 439 | 1,651 | <issue_start>username_0: I´ve cloned this sample app: <https://github.com/codekerala/laravel-and-vue.js-spa-Recipe-Box>
In RecipeController there's an index function that returns all posts (or recipes in this case). I need to get only recipes added by the current logged in user.
```
public function index()
{
$currentuser = Auth::id();
$recipes = Recipe::where('user_id', '=', $currentuser)
->get(['id', 'name', 'image']);
return response()
->json([
'recipes' => $recipes
]);
}
```
When trying this, my `recipes` array is empty, but no other errors from what I can see. I can hardcode value `1` instead of `$currentuser` and it returns all recipes that are made by user # 1.
I am declaring `use Auth;`, but am new to the Laravel framework, maybe someone could give any assistance?
Using Laravel 5.4.15<issue_comment>username_1: There might be a problem with `Auth::user()`; please check it.
Add the following in your controller
```
use Illuminate\Support\Facades\Auth;
```
And add a constructor
```
public function __construct()
{
$this->middleware('auth');
}
public function index()
{
$currentuser = Auth::user();
dump($currentuser);
$recipes = Recipe::where('user_id', '=', $currentuser->id)
->get(['id', 'name', 'image']);
return response()
->json([
'recipes' => $recipes
]);
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: I think there is a problem with your current user: you are not able to retrieve the user id.
Just use this statement.
$current\_user = \Auth::user()->id;
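A slightly more defensive sketch (my own, not part of the answer): check that someone is actually logged in before dereferencing the user, otherwise `Auth::user()->id` will fail for guests.

```
// Assumes the Auth facade is imported as in the accepted answer
if (Auth::check()) {
    $recipes = Recipe::where('user_id', Auth::id())->get(['id', 'name', 'image']);
    return response()->json(['recipes' => $recipes]);
}

return response()->json(['recipes' => []], 401); // no authenticated user
```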
Upvotes: 0 |
2018/03/22 | 2,076 | 5,870 | <issue_start>username_0: I have an interesting problem with one of my RoR application pages that is a bit hard to explain.
Basically, I have a page ***vendor.html.erb*** which allows the user to input data for vendors, including selecting their location (country, state, city) from dropdowns that appear depending on the country and states selected. This seems to work just fine when I refresh the web page but when I access the page through the navigation bar *(\_header.html.erb)*, the dropdowns fail to appear. The input placeholders also fail to appear when accessing the page through links.
Below is my **/app/views/static\_pages/vendor.html.erb**
```
.hidden {
display: none;
}
var data = {
countries: [{
name: 'China',
childs: [{
name: 'Beijing',
childs: [{name: 'Beijing'}, {name: 'Dongcheng'}]
}, {
name: 'Tianjin',
childs: [{name: 'Guangzhou'}, {name: 'Shanghai'}]
}]
}, {
name: 'India',
childs: [{
name: 'Uttar',
childs: [{name: 'Kanpur'}, {name: 'Ghaziabad'}]
}, {
name: 'Maharashtra',
childs: [{name: 'Mumbai'}, {name: 'Pune'}]
}]
}, {
name: 'USA',
childs: [{
name: 'Washington',
childs: [{name: 'Washington'}, {name: 'Seatle'}]
}, {
name: 'Florida',
childs: [{name: 'Orlando'}, {name: 'Miami'}]
}]
}]
};
function buildSelect(name, data, childs) {
var div = $('<div>');
div.addClass('hidden autoSelect ' + data.name + ' ' + name);
var label = $('<label>');
label.text(name);
var select = $('<select>');
var option = $('<option>');
option.text('--');
select.append(option);
data.childs.forEach(function (child) {
option = $('<option>');
option.val(child.name);
option.text(child.name);
select.append(option);
});
if (childs) select.on('change', updateCities);
label.append(select);
div.append(label);
$('.country').append(div);
}
function buildForms(data) {
data.countries.forEach(function (country) {
buildSelect('State', country, true);
country.childs.forEach(function (state) {
buildSelect('City', state);
});
});
}
function hideAutoSelect (name) {
$('div.autoSelect.'+name).addClass('hidden');
}
function updateStates() {
var v = this.value;
if (v) {
hideAutoSelect('State');
hideAutoSelect('City');
var div = $('div.autoSelect.'+v);
div.removeClass('hidden');
var select = $('select', div);
if (select.val()) $('div.autoSelect.'+select.val()).removeClass('hidden');
}
}
function updateCities() {
var v = $(this).val();
if (v) {
hideAutoSelect('City');
$('div.autoSelect.'+v).removeClass('hidden');
}
}
$(document).on('ready',function () {
buildForms(data);
$('[name=country]').on('change', updateStates);
});
### Add Vendor
Create vendor ID\*
Vendor name\*
Vendor type\*
--
Vendor origin\*
--
China
India
USA
```
Below is my **/app/views/layouts/\_header.html.erb**
```
\* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
html{
min-width: 100%;
min-height: 100%;
width: auto;
height: auto;
}
body{
padding: 80px;
}
header {
padding: 10px;
top: 0px;
left: 0px;
margin: 0;
background: #fff;
min-width: 100%;
z-index: 1;
justify-content: center;
position: fixed;
display: flex;
}
.nav {
background: #232323;
height: 60px;
\*display:inline;
\*zoom:1;
width: 60%;
margin: 0;
padding: 0;
text-align: center;
vertical-align: top;
}
.nav li {
display: inline;
float: left;
list-style-type: none;
position: relative;
}
.nav li a {
font-size: 14px;
color: white;
display: block;
line-height: 60px;
padding: 0 26px;
text-decoration: none;
border-left: 1px solid #2e2e2e;
font-family: Arial;
text-shadow: 0 0 1px rgba(255, 255, 255, 0.5);
}
.nav li a:hover {
background-color: #2e2e2e;
}
#search {
width: 357px;
margin: 4px;
}
#search\_text{
width: 297px;
padding: 15px 0 15px 20px;
font-size: 16px;
font-family: Arial;
border: 0 none;
height: 52px;
margin-right: 0;
color: white;
outline: none;
background: #494949;
float: left;
box-sizing: border-box;
transition: all 0.15s;
}
::-webkit-input-placeholder { /\* WebKit browsers \*/
color: white;
}
:-moz-placeholder { /\* Mozilla Firefox 4 to 18 \*/
color: white;
}
::-moz-placeholder { /\* Mozilla Firefox 19+ \*/
color: white;
}
:-ms-input-placeholder { /\* Internet Explorer 10+ \*/
color: white;
}
#search\_text:focus {
background: #5a5a5a;
}
#options a{
border-left: 0 none;
}
.subnav {
visibility: hidden;
position: absolute;
top: 110%;
right: 0;
width: 200px;
height: auto;
opacity: 0;
z-index: 1;
transition: all 0.1s;
background: #232323;
}
.subnav li {
float: none;
}
.subnav li a {
border-bottom: 1px solid #2e2e2e;
}
#options:hover .subnav {
visibility: visible;
top: 100%;
opacity: 1;
}
button {
display: inline-block;
padding: 10px;
}
* [Home](/home)
<% if logged\_in? %>
* <%= link\_to "Profile", edit\_user\_path(current\_user.id) %>
<%end%>
* [Add](#)
+ <%= link\_to "Part", add\_parts\_path%>
+ <%= link\_to "Project", add\_projects\_path%>
+ <%= link\_to "Vendor", add\_vendors\_path%>
* <%= link\_to "Contact", inquires\_path, method: :get %>
* <%= link\_to "Log Out", logout\_path, method: :delete %>
*
```
2018/03/22 | 1,490 | 5,473 | <issue_start>username_0: My code begins by signing me into PayPal, then signing into eBay and navigating to the pay fees page, then checking out with PayPal. The final "Continue" button I can't click/submit. I've tried by xpath, id and class. I even tried sending TAB 7x until the Continue button and then sending Enter but that didn't work.
I have found this discussion but I'm not sure how to make it work for me.
[PayPal Sandbox checkout 'continue button' - Unable to locate element: - C# WebDriver](https://stackoverflow.com/questions/39492424/paypal-sandbox-checkout-continue-button-unable-to-locate-element-c-sharp)
Here's a screenshot of the PayPal code and page I'm trying to do.
[![screenshot](https://i.stack.imgur.com/ySeDv.png)](https://i.stack.imgur.com/ySeDv.png)
```
//Chrome WebDriver specific
System.setProperty("webdriver.chrome.driver", "C:\\automation\\drivers\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
driver.manage().window().maximize(); //maximise webpage
WebDriverWait wait = new WebDriverWait(driver, 20);
//navigate to Paypal
driver.get("https://www.paypal.com/uk/signin");
//wait 2.5s for the page to load
try {
Thread.sleep(2500);
}
catch (Exception e) {
e.printStackTrace();
}
WebElement paypalEmail = driver.findElement(By.id("email"));
paypalEmail.sendKeys("******");
//wait 2.5s for the page to load
try {
Thread.sleep(2500);
}
catch (Exception e) {
e.printStackTrace();
}
WebElement paypalSubmit = driver.findElement(By.id("btnNext"));
paypalSubmit.click();
String URL = ("https://www.paypal.com/uk/signin");
driver.get(URL);
WebElement form2 = driver.findElement(By.cssSelector(".main form"));
WebElement username = form2.findElement(By.id("password"));
username.sendKeys("******");
WebElement paypalSubmit2 = driver.findElement(By.id("btnLogin"));
paypalSubmit2.click();
//navigate to Ebay
driver.get("https://signin.ebay.co.uk/ws/eBayISAPI.dll?SignIn&ru=https%3A%2F%2Fwww.ebay.com%2F");
// Enter user name , password and click on Signin button
WebElement form = wait.until(ExpectedConditions.presenceOfElementLocated(By.cssSelector("#mainCnt #SignInForm")));
form.findElement(By.cssSelector("input[type=text][placeholder='Email or username']")).sendKeys("******");
form.findElement(By.cssSelector("input[type=password]")).sendKeys("******");
form.findElement(By.id("sgnBt")).click();
driver.get("http://cgi3.ebay.co.uk/ws/eBayISAPI.dll?OneTimePayPalPayment");
//WebElement Pay =
driver.findElement(By.xpath("//input[@value='Pay']")).click();
WebDriverWait wait2 = new WebDriverWait(driver, 15);
wait2.until(ExpectedConditions.elementToBeClickable(By.xpath("//*[@id=\"confirmButtonTop\"]")));
driver.findElement(By.xpath("//*[contains(@id,'confirmButtonTop')]")).click();
}
}
```<issue_comment>username_1: Based on your screenshot, one of the following should work to click the Continue button:
**Method 1 :**
```
WebElement paypalSubmit = driver.findElement(By.xpath("//input[@data-test-id='continueButton']"));
paypalSubmit.click();
```
**Method 2:**
```
By paypalButton=By.xpath("//input[@data-test-id='continueButton']"));
WebElement element=driver.findElement(paypalButton);
JavascriptExecutor js = (JavascriptExecutor) driver;
js.executeScript("arguments[0].scrollIntoView(true);",element);
js.executeScript("arguments[0].click();", element);
```
Try the 2nd method if you feel your button needs a bit of scrolling to the bottom before it becomes clickable.
One more XPath you can use for the button if the above don't work:
```
//input[@value='Continue' and @id='confirmButtonTop']
```
Upvotes: 2 <issue_comment>username_2: Sometimes the conventional `click()` doesn't work. In that case, try using the Javascript Executor Click as below.
Make sure you import this class
```
org.openqa.selenium.JavascriptExecutor
```
And use this instead of `click();`
```
JavascriptExecutor executor = (JavascriptExecutor) driver;
executor.executeScript("arguments[0].click();", driver.findElement(By.xpath(“//input[@data-test-id='continueButton']”)));
```
Try this and let me know if this works for you.
Upvotes: 2 [selected_answer]<issue_comment>username_3: In my experience, paypal likes to use iFrames. If that's true in your case, that means unless you tell webdriver to switch frame contexts, that paypal form will be unavailable to you regardless of your xpath/css selectors.
You can get a list of all available frames currently loaded with this code:
```
String[] handles = driver.getWindowHandles()
```
Your actual page will always be the 0th index in that returned array. If paypal is your only iFrame, then you can target the 1th index. Here's a possible solution to that:
```
String mainPageHandle = handles[0];
String paypalHandle = handles[1];
driver.switchTo().window(paypalHandle);
// Do paypal interactions
driver.switchTo().window(mainPageHandle);
// Back to the main page
```
There are definitely more robust ways to handle this, and if your page unfortunately has more than one iFrame, then you may need to do more to verify which handle is which, such as test the presence of an element you know is contained within. In general, the frames will load in the same order every time. As a golden path to this problem, this will get you in and out of that iFrame to perform work.
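If the Continue button really does live inside an iframe, the more direct WebDriver route is `switchTo().frame(...)` rather than window handles; a hedged sketch (the frame locator below is a guess, not taken from the actual PayPal page):

```
// Hypothetical locator - inspect the page to find the real iframe
WebElement paypalFrame = driver.findElement(By.cssSelector("iframe[name*='paypal']"));
driver.switchTo().frame(paypalFrame);
driver.findElement(By.xpath("//input[@data-test-id='continueButton']")).click();
driver.switchTo().defaultContent(); // back to the main page
```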
Upvotes: 2 |
2018/03/22 | 518 | 1,568 | <issue_start>username_0: Is there a way how to write a JavaScript regular expression that would recognize **.ts** file extension for this:
```
"file.ts"
```
but would fail on this:
```
"file.vue.ts"
```
What I need is, if the file name ends with **.vue.ts**, it shouldn't be handled as a **.ts** file.
I've tried a lot of things with no success.
Update: It needs to be a regular expression, because that's what I'm passing to a parameter of a function.<issue_comment>username_1: You could check whether any other dot comes earlier in the name and, if not, return `true`.
```js
console.log(["file.ts", "file.vue.ts"].map(s => /^[^.]+\.ts$/.test(s)));
```
Upvotes: 2 <issue_comment>username_2: Regex for that is `^[^.]+.ts$`
```js
var x=/^[^.]+.ts$/;
console.log(x.test("file.ts"));
console.log(x.test("file.vue.ts"));
console.log(x.test("file.vue.ts1"));
```
Explanation:-
```
^[^.]+.ts$
^ ---> start of line
[^.]+ ---> match anything which is not '.' (match atleast one character)
^[^.]+ ---> match character until first '.' encounter
.ts ---> match '.ts'
$ ---> end of line
.ts$ ---> string end with '.ts'
```
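If file names may legitimately contain more than one dot (e.g. `my.file.ts`), a variant of this idea (my own, not from the answer above) is to exclude only the `.vue.ts` suffix with a negative lookahead:

```
var isPlainTs = /^(?!.*\.vue\.ts$).+\.ts$/;
console.log(isPlainTs.test("file.ts"));     // true
console.log(isPlainTs.test("my.file.ts"));  // true
console.log(isPlainTs.test("file.vue.ts")); // false
```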
Upvotes: 3 [selected_answer]<issue_comment>username_3: This will work except for special characters. Will allow for uppercase letters, lowercase letters, numbers, underscores, and dashes:
```
^[a-zA-Z0-9\_\-]+(\.ts)$
```
Upvotes: 1 <issue_comment>username_4: ```
const regex = /(.*[^.vue]).ts/g;
abc.ts.ts Matches
abc.xyx.htm.ts Matches
abc.vue.ts Fails
xyz.abx.sxc.vue.ts Fails
```
Javascript regex should be this one.
Upvotes: 1 |
2018/03/22 | 628 | 2,529 | <issue_start>username_0: I was monitoring my application with the task manager and saw that the RAM usage is constantly going up.
I quickly noticed that I initialized a new variable in every iteration of the for loop:
```
for (int i = 0; i < List.size(); i++)
{
data = List.get(i); //This is already declared outside
CustomThread thread2= new CustomThread(data);
executor.execute(thread2);
}
```
Now, would declaring `CustomThread thread2` outside of the loop be of any benefit?
Would the `execute(thread2)` be still connected to the original object when accessing it later? Or maybe, the object is overwritten, and gets overwritten for every `execute`, causing it to do the same exact thing (the behaviour is based on the data) ?
EDIT:
I roughly need 200 threads (they are not too heavy but they do connect to a website), also, the snippet and the code before it, is a `runnable` that gets called every 20 seconds (so yeah, I'd create roughly 600 threads per minute)<issue_comment>username_1: Where you *declare the variable* doesn't matter except for scoping; where you *create the object* matters, because presumably you need to call `executor.execute` with *different* `CustomThread` objects on each call. (If you don't, then yes, obviously, create a single object outside the loop and reuse it.)
The way you have it is how I'd write it, except:
1. You might just do away with the variable entirely if you don't need it for anything else:
```
executor.execute(new CustomThread(data));
```
2. I might declare `data` within the loop body unless you have a good reason for declaring it outside.
...since there's no benefit to broadening the scope of that `thread2` variable if you don't need it outside the loop.
Upvotes: 2 [selected_answer]<issue_comment>username_2: No.
When you use the keyword **new** you create a new memory address, so you will be always creating a new thread for each iteration.
Upvotes: 0 <issue_comment>username_3: It is rather clear that the code shown doesn't directly cause the memory-consumption issue.
`data` is an element of the `List`. Assigning it to a temporary variable doesn't create a new object; it just creates a "link" (a reference) to the existing object.
Your problem is probably related to what you perform inside your threads.
Creating and running hundreds of threads concurrently, each of which creates objects, can end up producing a very large number of objects.
Rather than guessing, use a monitoring tool such as JVisualVM and you should easily find the culprit.
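If the sheer number of threads turns out to be the problem, one way to keep it bounded, sketched under the assumption that `CustomThread` implements `Runnable` and that `executor` can be created once up front (the pool size below is arbitrary):

```
// java.util.concurrent.ExecutorService / Executors
ExecutorService executor = Executors.newFixedThreadPool(16); // create once, reuse for every 20-second run
for (int i = 0; i < List.size(); i++) {
    executor.execute(new CustomThread(List.get(i))); // tasks are queued; pool threads get recycled
}
```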
Upvotes: 0 |
2018/03/22 | 365 | 1,391 | <issue_start>username_0: I am storing my data in S3 bucket provided from Amazon. I am performing analytics in R studio by creating instance.
I am storing my desired result back in my S3 bucket.
I would like to create a front end in Visual Studio for my results in R. Could anyone guide me on how I could proceed with this?
I have created the AWS explorer in visual studio.
Any lead and guidance for this idea would be helpful.<issue_comment>username_1: [![Shiny](https://i.stack.imgur.com/IjbsK.gif)](https://i.stack.imgur.com/IjbsK.gif)
But seriously, take a look at [shiny](https://shiny.rstudio.com/)
Plagiarizing from that link: "Shiny is an R package that makes it easy to build interactive web apps straight from R. You can host standalone apps on a webpage or embed them in R Markdown documents or build dashboards. You can also extend your Shiny apps with CSS themes, htmlwidgets, and JavaScript actions."
Upvotes: 2 <issue_comment>username_2: Options (not in any order):
1. [Shiny](https://shiny.rstudio.com/) is an R package that makes it easy to build interactive web apps straight from R
2. If front-end is all about reports/dashboard, you can push your results in [ElasticSearch](https://aws.amazon.com/elasticsearch-service/) and then use [Kibana](https://aws.amazon.com/elasticsearch-service/kibana/) on top to present the results as a dashboard.
HIH
Upvotes: 0 |
2018/03/22 | 646 | 2,332 | <issue_start>username_0: I'm working with [React Native elements searchbar](https://react-native-training.github.io/react-native-elements/docs/0.19.0/searchbar.html) and am struggling to get these two little lines on the top and bottom to go away - I can't figure out what they are:
[Weirdly formatted Searchbar image here](https://i.stack.imgur.com/SzZCt.png)
This is my searchbar code:
```
this.searchBar = ref}
style= {styles.searchbar}
lightTheme round
containerStyle={styles.searchcontainer}
/>
```
And here are my two style snippets:
```
searchcontainer: {
backgroundColor: 'white',
borderWidth: 0, //no effect
shadowColor: 'white', //no effect
},
searchbar: {
width: "100%",
backgroundColor: 'red', //no effect
borderWidth:0, //no effect
shadowColor: 'white', //no effect
},
```
If I change the theme from `lightTheme` to the default, the lines become darker grey, so I know it's related to the `SearchBar` element itself, but I haven't been able to get rid of them by changing the border or shadow.
Wondering if anyone has experienced anything like this before, thanks in advance!<issue_comment>username_1: Use `borderBottomColor` and `borderTopColor` as `transparent` with `searchcontainer`
```
searchcontainer: {
backgroundColor: 'white',
borderWidth: 0, //no effect
shadowColor: 'white', //no effect
borderBottomColor: 'transparent',
borderTopColor: 'transparent'
}
```
Hope this will help
Upvotes: 7 [selected_answer]<issue_comment>username_2: In newer versions of React Native Elements:
```
containerStyle={{
backgroundColor:"#FBFBFB",
borderBottomColor: 'transparent',
borderTopColor: 'transparent'
}}
```
Upvotes: 2 <issue_comment>username_3: Full Code :
```
import {SearchBar} from 'react-native-elements';
}
/>
searchBarContainer: {
backgroundColor: COLORS.SEARCHBAR,
alignSelf: 'center',
flexDirection: 'row',
flex: 1,
justifyContent: 'center',
alignItems: 'center',
alignContent: 'center',
borderBottomColor: 'transparent',
borderTopColor: 'transparent',
},
```
Upvotes: 0 <issue_comment>username_4: For anyone else looking to remove those borders try setting the width of every border separately:
```
containerStyle={{
borderWidth: 0, //no effect
borderTopWidth: 0, //works
borderBottomWidth: 0, //works
}}
```
Upvotes: 3 |
2018/03/22 | 816 | 2,442 | <issue_start>username_0: I'm trying to test if one of my variables is pd.NaT. I know it is NaT, and still it won't pass the test. As an example, the following code prints nothing :
```
a=pd.NaT
if a == pd.NaT:
print("a not NaT")
```
Does anyone have a clue ? Is there a way to effectively test if `a` is NaT?<issue_comment>username_1: Pandas `NaT` behaves like a floating-point `NaN`, in that it's not equal to itself. Instead, you can use `pandas.isnull`:
```
In [21]: pandas.isnull(pandas.NaT)
Out[21]: True
```
This also returns `True` for None and NaN.
Technically, you could also check for Pandas `NaT` with `x != x`, following a common pattern used for floating-point NaN. However, this is likely to cause issues with NumPy NaTs, which look very similar and represent the same concept, but are actually a different type with different behavior:
```
In [29]: x = pandas.NaT
In [30]: y = numpy.datetime64('NaT')
In [31]: x != x
Out[31]: True
In [32]: y != y
/home/i850228/.local/lib/python3.6/site-packages/IPython/__main__.py:1: FutureWarning: In the future, NAT != NAT will be True rather than False.
# encoding: utf-8
Out[32]: False
```
`numpy.isnat`, the function to check for NumPy `NaT`, also fails with a Pandas `NaT`:
```
In [33]: numpy.isnat(pandas.NaT)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
in ()
----> 1 numpy.isnat(pandas.NaT)
TypeError: ufunc 'isnat' is only defined for datetime and timedelta.
```
`pandas.isnull` works for both Pandas and NumPy NaTs, so it's probably the way to go:
```
In [34]: pandas.isnull(pandas.NaT)
Out[34]: True
In [35]: pandas.isnull(numpy.datetime64('NaT'))
Out[35]: True
```
Upvotes: 9 [selected_answer]<issue_comment>username_2: You can also use pandas.isna() for pandas.NaT, numpy.nan or None:
```
import pandas as pd
import numpy as np
x = (pd.NaT, np.nan, None)
[pd.isna(i) for i in x]
Output:
[True, True, True]
```
Upvotes: 4 <issue_comment>username_3: ```
pd.NaT is pd.NaT
```
True
this works for me.
Upvotes: 5 <issue_comment>username_4: If it's in a `Series` (e.g. `DataFrame` column) you can also use `.isna()`:
```
pd.Series(pd.NaT).isna()
# 0 True
# dtype: bool
```
Upvotes: 3 <issue_comment>username_5: This is what works for me
```
>>> a = pandas.NaT
>>> type(a) == pandas._libs.tslibs.nattype.NaTType
>>> True
```
Upvotes: 1 |
2018/03/22 | 849 | 3,475 | <issue_start>username_0: In what way(s), if any, does the C++ standard limit the effect of undefined behavior? For instance, in the code below, from the first `if` inspecting `undefined` is control flow constrained to follow either the then path or the else path? Is it allowed to skip both paths? Execute both paths (potentially in parallel)? Take a wild jump into the middle of the second `if`?
```
void f(int undefined) {
bool startNuclearWar = true;
if (undefined > 0) {
printf("True path\n");
startNuclearWar = false;
} else {
printf("False path\n");
startNuclearWar = false;
}
if (startNuclearWar) {
lauchMissles();
}
}
```<issue_comment>username_1: The standard has no constraints on UB. The moment you do *anything* which invokes UB, the standard guarantees nothing about what happens.
Upvotes: 2 <issue_comment>username_2: While there are many situations where it would be useful to be able to "compartmentalize" undefined behavior, and while doing so would be inexpensive on many platforms, the people writing Standards have not shown any significant interest in doing so. The C11 Standard offers Annex L about "analyzability" but it fails to describe anything meaningful that an implementation would have to guarantee if defines `__STDC_ANALYZABLE__`.
The fact that integer overflow is Bounded Undefined Behavior, for example, would be of limited use without clean a way to ensure that code like the following:
```
int index = computeSomething();
if (index < 0 || index >= ARRAYSIZE) FatalError();
myArray[index]++;
```
will use the same value for `index` in the comparison and in the array lookup.
Many implementations could cheaply offer many useful guarantees beyond those required by the Standard, especially in application fields where it would be acceptable for a program to abnormally terminate when given invalid input, but not acceptable for it to let maliciously-constructed input take control of the machine. Unfortunately, the Standard fails to provide the necessary hooks to efficiently take advantage of that (e.g. providing an intrinsic which would take a value that may be Indeterminate and produce a value which is at worst Unspecified). Applying such an intrinsic to the value in `index` in the code above before performing the comparison would ensure that even if an overflow occurs in `computeSomething`, code would be guaranteed to either increment a value within the array, or notice that `index` was invalid. Since neither operation would results in Critical Undefined Behavior, execution would thus stay on the rails.
Upvotes: 0 <issue_comment>username_3: >
> In what way(s), if any, does the C++ standard limit the effect of undefined behavior?
>
>
>
None whatsoever. Undefined behavior is **undefined** by its nature, so absolutely *anything* can happen. And once something *undefined* has happened, the state of the program is *unknown* from that point on.
That being said, there is no undefined behavior in the code you have showed. Everything in the code has defined behavior.
>
> For instance, in the code below, from the first `if` inspecting `undefined` is control flow constrained to follow either the then path or the else path?
>
>
>
Yes.
>
> Is it allowed to skip both paths?
>
>
>
No.
>
> Execute both paths (potentially in parallel)?
>
>
>
No.
>
> Take a wild jump into the middle of the second if?
>
>
>
No.
Upvotes: 0 |
2018/03/22 | 827 | 3,377 | <issue_start>username_0: I created a loading page for my website but I need some help.
The loading page is visible until the full HTML page is loaded and then it fades out. My problem is, I have a video as a background and I would like to make the loading page visible until my video in the background is loaded.
Is that possible? If you can help me or give any advice, I will be grateful.
Question
JS Script for fadeout
```
$(window).on('load', function(){
$('.loading').fadeOut(500);
});
```
With .loading my css of my div with the loading page content.
The video comes afterwards, in the HTML body.
2018/03/22 | 797 | 2,228 | <issue_start>username_0: So i'm trying to use i in the name of my struct entries so I can increment it everytime I make a new entry. However, it says I must use a constant value.
But obviously if I use #define i (2) for example, I wouldn't be able to increment it since it's constant.
```
struct order {
char orderName[15];
int orderQuantity;
int orderUnderTen;
int orderUnderSixteen;
int orderStudent;
int orderOverSixty;
int orderNA;
double orderTotal;
};
int i;
struct order s[i] = { "John", 5, 0, 0, 0, 0, 5, 25.00 };
```
Am I going about this in the wrong way?
Is there a better way to increment so I can store new entries?
Thank you.<issue_comment>username_1: You can perform an assignment to an entire structure if you use a **compound literal**:
```
struct order s[i] = (struct order){ "John", 5, 0, 0, 0, 0, 5, 25.00 };
```
Upvotes: 0 <issue_comment>username_2: Value `i` has an unknown, potentially random value, because you have not set it yet!
Is this line trying to set the first element of the array, or the 10th element?
```
struct order s[i] = { "John", 5, 0, 0, 0, 0, 5, 25.00 };
```
With no valid value for `i`, it could be anything!
You must first decide how big you want your array to be.
```
int i = 0;
struct order s[10] = { { "John", 5, 0, 0, 0, 0, 5, 25.00 },
{ "Mark", 6, 0, 0, 0, 0, 6, 30.00 },
{ "Luke", 7, 0, 0, 0, 0, 7, 35.00 } };
s[3] = (struct order){ "Matt", 8, 0, 0, 0, 0, 8, 40.00 };
i = 4;
s[i] = (struct order){ "Mike", 9, 0, 0, 0, 0, 9, 45.00 };
i++;
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: So you want an array. If you are defining everything up front in your program code, you can let it create an array with automatic size, like this:
```
#include <stdio.h>

struct data {
    int id;
    char name[256];
};

struct data data[] = {
    {1, "Jon"},
    {2, "Fred"},
};

const size_t data_len = sizeof(data) / sizeof(data[0]);
```
Now you can add more lines to "data" and it automatically figures out how big to make the array when it compiles the code.
In a realistic program you would load these from a data file or database and you would need to use `malloc` and `realloc` to allocate and grow a dynamic array.
Upvotes: 1 |
2018/03/22 | 775 | 2,828 | <issue_start>username_0: I am new to Marklogic and My Requirement is, I have XML document in Marklogic Database containing multiple elements.
```
**Example:**
<tXML>
  <Header>
    <Source>Source1</Source>
    <Action_Type>Action_Type1</Action_Type>
    <Sequence_Number>Sequence_Number1</Sequence_Number>
  </Header>
</tXML>
```
When I pass "**Sequence_Number**" it should return **/tXML/Header/Sequence_Number**,
and it is possible to have the "**Sequence_Number**" element multiple times.
Please tell me whether this is possible using the MarkLogic XQuery/Java API or whether I need to use a third-party API to obtain this result.<issue_comment>username_1: If you pass the `String` "Sequence_Number", then how could it know which of the `Sequence_Number` siblings to get the XPath of? You could be talking about the first occurrence or maybe the second or third; it won't know. Given just a tagName, it can't know which occurrence to get the XPath for.
That said, the below method will give you the general XPath to where an element lives. You just need to get the `org.w3c.dom.Element` first, which can be gotten from the `org.w3c.dom.Document`.
```
public static String getXPathOfElement(org.w3c.dom.Element el) {
    Objects.requireNonNull(el);
    LinkedList<String> list = new LinkedList<>();
    for (Node n = el; n != null; n = n.getParentNode()) {
        if (n.getNodeType() == Node.ELEMENT_NODE)
            list.push(n.getNodeName());
        else
            list.push("");
    }
    return String.join("/", list);
}

public static void main(String[] args)
        throws ParserConfigurationException, SAXException, IOException {
    // Get an input stream of your Xml somehow
    String xml =
        "<tXML>"
        + "<Header>"
        + "<Source>Source1</Source>"
        + "<Action_Type>Action_Type1</Action_Type>"
        + "<Sequence_Number>Sequence_Number1</Sequence_Number>"
        + "</Header>"
        + "</tXML>";
    java.io.InputStream xmlInputStream = new java.io.ByteArrayInputStream(xml.getBytes());
    // Get the Document from the xml InputStream
    javax.xml.parsers.DocumentBuilderFactory docBuilderFact =
        javax.xml.parsers.DocumentBuilderFactory.newInstance();
    javax.xml.parsers.DocumentBuilder docBuilder = docBuilderFact.newDocumentBuilder();
    org.w3c.dom.Document doc = docBuilder.parse(xmlInputStream);
    // Get the Element you want the general XPath of
    // In this case just find the first one with a certain tag in the Document
    org.w3c.dom.Element el =
        (org.w3c.dom.Element) doc.getElementsByTagName("Sequence_Number").item(0);
    System.out.println(getXPathOfElement(el));
}
```
Upvotes: 0 <issue_comment>username_2: It probably won't perform well on large documents, but you could use string matching on element names to find the elements, and then use [`xdmp:path`](http://docs.marklogic.com/xdmp:path) to get the corresponding XPath for that document. Something like:
```
xquery version "1.0-ml";
let $xml := document {
  <tXML>
    <Header>
      <Source>Source1</Source>
      <Action_Type>Action_Type1</Action_Type>
      <Sequence_Number>Sequence_Number1</Sequence_Number>
    </Header>
  </tXML>
}
let $elem-name := "Sequence_Number"
let $elems := $xml//*[local-name() eq $elem-name]
return $elems ! xdmp:path(.)
```
HTH!
Upvotes: 3 [selected_answer] |
2018/03/22 | 490 | 1,494 | <issue_start>username_0: I thought I understood how this would work but apparently I don't. Take this example:
File 1 (sandbox.py):
```
from module1 import Testy

class Sandy(Testy):
    def run(self):
        print("This is value of x in Sandy: %d" % x)
        super(Sandy, self).mymet()

if __name__ == "__main__":
    x = 4
    test = Sandy()
    test.run()
```
File 2 (module1.py):
```
class Testy(object):
    def mymet(self):
        print("This is the value of x in Testy %d: " % x)
```
This is what I receive back in the console when running sandbox.py:
>
> This is value of x in Sandy: 4
>
> NameError: global name 'x' is not defined
>
>
>
Is the best/only way of doing this to pass the x argument explicitly to mymet() in the parent class?<issue_comment>username_1: The issue is your base class `Testy`
the variable `x` is not defined in class scope and so it doesn't exist in your subclasses.
```
class Testy(object):
    x = 4
    def mymet(self):
        print("This is the value of x in Testy %d: " % self.x)
```
Upvotes: -1 <issue_comment>username_2: You need to pass it to both functions as a parameter:
```
class Testy(object):
    def mymet(self, x):
        print("This is the value of x in Testy %d: " % x)

class Sandy(Testy):
    def run(self, x):
        print("This is value of x in Sandy: %d" % x)
        super(Sandy, self).mymet(x)

if __name__ == "__main__":
    x = 4
    test = Sandy()
    test.run(x)
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 982 | 2,404 | <issue_start>username_0: I want to multiply an array along it's first axis by some vector.
For instance, if a is 2D, b is 1D, and a.shape[0] == b.shape[0], we can do:
```
a *= b[:, np.newaxis]
```
What if a has an arbitrary shape? In numpy, the ellipsis "..." can be interpreted as "fill the remaining indices with ':'". Is there an equivalent for filling the remaining axes with None/np.newaxis?
The code below generates the desired result, but I would prefer a general vectorized way to accomplish this without falling back to a for loop.
```
from __future__ import print_function
import numpy as np
def foo(a, b):
    """
    Multiply a along its first axis by b
    """
    if len(a.shape) == 1:
        a *= b
    elif len(a.shape) == 2:
        a *= b[:, np.newaxis]
    elif len(a.shape) == 3:
        a *= b[:, np.newaxis, np.newaxis]
    else:
        n = a.shape[0]
        for i in range(n):
            a[i, ...] *= b[i]
n = 10
b = np.arange(n)
a = np.ones((n, 3))
foo(a, b)
print(a)
a = np.ones((n, 3, 3))
foo(a, b)
print(a)
```<issue_comment>username_1: Just reverse the order of the axes:
```
transpose = a.T
transpose *= b
```
`a.T` is a transposed view of `a`, where "transposed" means reversing the order of the dimensions for arbitrary-dimensional `a`. We assign `a.T` to a separate variable so the `*=` doesn't try to set the `a.T` attribute; the results still apply to `a`, since the transpose is a view.
Demo:
```
In [55]: a = numpy.ones((2, 2, 3))
In [56]: a
Out[56]:
array([[[1., 1., 1.],
[1., 1., 1.]],
[[1., 1., 1.],
[1., 1., 1.]]])
In [57]: transpose = a.T
In [58]: transpose *= [2, 3]
In [59]: a
Out[59]:
array([[[2., 2., 2.],
[2., 2., 2.]],
[[3., 3., 3.],
[3., 3., 3.]]])
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Following the idea of the accepted answer, you could skip the variable assignment to the transpose as follows:
```py
arr = np.tile(np.arange(10, dtype=float), 3).reshape(3, 10)
print(arr)
factors = np.array([0.1, 1, 10])
arr.T[:, :] *= factors
print(arr)
```
Which would print
```py
[[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]]
[[ 0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9]
[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. ]
[ 0. 10. 20. 30. 40. 50. 60. 70. 80. 90. ]]
```
Upvotes: 1 |
2018/03/22 | 1,873 | 6,820 | <issue_start>username_0: How can I load images in a component in Next.js? Do I have to build the project first? If yes, is there a way to load the images without building first? I cannot get this to work, no matter what I try.<issue_comment>username_1: [from the docs:](https://nextjs.org/docs/basic-features/static-file-serving)
>
> Next.js can serve static files, like images, under a folder called
> public in the root directory. Files inside public can then be
> referenced by your code starting from the base URL (/).
>
>
>
So, first add an image to `public/my-image.png` and then you can reference it:
```
<img src="/my-image.png" />
```
I think Next.js will have a watch on this directory, so you won't need to restart your server every time you put something in there.
Upvotes: 7 [selected_answer]<issue_comment>username_2: Another way I find out [Next Images](https://www.npmjs.com/package/next-images)
**installation**:
`npm install --save next-images`
or
`yarn add next-images`
**Usage**:
Create a `next.config.js` in your project
```
// next.config.js
const withImages = require('next-images')
module.exports = withImages()
```
Optionally you can add your custom Next.js configuration as parameter
```
// next.config.js
const withImages = require('next-images')
module.exports = withImages({
webpack(config, options) {
return config
}
})
```
And in your components or pages simply import your images:
```
export default () => (
  <img src={require('./my-image.jpg')} />
)
```
or
```
import myImg from './my-image.jpg'
export default () => (
  <img src={myImg} />
)
```
Upvotes: 4 <issue_comment>username_3: The static directory has been deprecated. Place files in `public/static` directory
Upvotes: 4 <issue_comment>username_4: With Next 10+
To serve an optimized image:
```
import Image from 'next/image'
```
Place the image in the public folder. All the referenced images must be present in the public folder at build time. Image hot deployment will not work for images that reside in the public folder.
You can also refer to cross-domain images with the `<Image>` tag.
```
<Image src="https://www.example.com/image.png" alt="example" width={500} height={300} />
```
To allow cross-domain images, ensure to add the below entry to your `next.config.js`
```
module.exports = {
  images: {
    domains: ['www.example.com'],
  },
}
```
Upvotes: 2 <issue_comment>username_5: What I like to do for pointing to images is using `environment variables`. In Next.js they are easily set in the `next.config.js` file like below:
```
// next.config.js
module.exports = {
env: {
PUBLIC_URL: '/',
}
};
```
Then you can point to your public path, wherever it is, by using `process.env.PUBLIC_URL` like below:
```
<img src={`${process.env.PUBLIC_URL}/my-image.jpg`} />
```
The advantage of using the PUBLIC_URL environment variable over hard-coding the path is that you can use another path when the file arrangement changes (like on a server). You could then conditionally set which PUBLIC_URL value to use in production and in development.
---
update
------
Sometimes the problem of images not showing when used with next/Image is because the right `layout` value is not set, or because the image lacks `width` and `height` attributes when used with a `layout` other than `fill`.
Using the Image component in Next.js version 13 is a little bit different from its previous versions. It's actually easier, and you can use the optimization features with less effort and fewer workarounds. In this version:
* you're not obligated to set `domains` in `next.config.js`.
* you can either set the image's width and height, or set `fill` and handle its sizing with styles or classNames, which means you can set max-height or max-width. So in cases where you don't know the image's width and height, it will still be shown properly (a minimal sketch follows below).
* as in previous versions, you can use `priority` and ...
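To make that concrete, here is a minimal sketch of the `fill` approach. The file name, image path and wrapper sizing are my own assumptions, not something from the question:
```
// Sketch only: Next 13 <Image> with `fill` inside a relatively positioned,
// size-constrained wrapper, so an image with unknown intrinsic size still renders.
import Image from 'next/image'

export default function Logo() {
  return (
    <div style={{ position: 'relative', maxWidth: 400, aspectRatio: '16 / 9' }}>
      <Image src="/my-image.jpg" alt="logo" fill style={{ objectFit: 'contain' }} priority />
    </div>
  )
}
```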
Upvotes: 2 <issue_comment>username_6: From Next.js v11 onwards, one can now directly `import` images without any additional config or dependencies. Official example (comment mine):
```js
import Image from 'next/image'
import profilePic from '../public/me.png'
function Home() {
return (
<>
<h1>My Homepage</h1>
{/* <img src={profilePic.src} alt="Picture of the author" /> */}
<p>Welcome to my homepage!</p>
</>
)
}
export default Home
```
Docs: [`next/image`](https://nextjs.org/docs/api-reference/next/image)
Upvotes: 4 <issue_comment>username_7: I will add one obvious case that is easily forgotten. It keeps appearing when one restructures a site page and the IDE "silently fails" to update the paths of a related file/component, or simply when one is tired or distracted.
If you are using a page inside a folder
ex: `mysiteDomain/pagefolder/page`
You should be careful when using relative paths.
Something like `<img src="logo.png" />` should be changed to `<img src="../logo.png" />` since the compiled page will also be inside a `pagefolder` folder.
The path in the `src` attribute will be relative to the compiled page.
As an alternative, you could simply use an absolute path like, for example, `<img src="/logo.png" />`. The path in the `src` attribute will then be relative to the compiled root of the site.
Upvotes: 0 <issue_comment>username_8: Do NOT put `public` into `/src`!
In my case, I had a `src` dir into which I put my `pages` etc., which is an option described [here](https://nextjs.org/docs/advanced-features/src-directory). But I ALSO accidentally moved the `public` dir there. This will mess nextjs up -- you need to keep `public` in the root dir.
Upvotes: 0 <issue_comment>username_9: If your images are not visible after running `next build && next export`, do this:
```
// next.config.js
/** @type {import('next').NextConfig} */
module.exports = {
reactStrictMode: true,
images: {
loader: "custom",
loaderFile: "./imageLoader.js",
},
assetPrefix: "./",
env: {
// dev does not need static path
ROOTDIR: process.env.NODE_ENV === "development" ? "" : "file:///C:/Users/.../out",
},
};
```
Create an imageLoader.js too in the root project
```
export default function imageLoader({ src, width, quality }) {
  return process.env.NODE_ENV === "development"
    ? `${src}?w=${width}&q=${quality || 75}`
    : `file:///C:/Users/.../out${src}?w=${width}&q=${quality || 75}`;
}
```
Where *file:///C:/Users/.../out* refers to the full path to the root of your build.
Now you can prepend `process.env.ROOTDIR` before "/*".
Upvotes: -1 <issue_comment>username_10: Using the Image tag like below worked for me
```
import Image from 'next/image'
```
Upvotes: 0 <issue_comment>username_11: I have faced the same problem the point which we should check while using Next js.
1. Image should be available
2. Image should be present in public directory.
3. Public directory should be present in those folder in which node module is present
The thing which I did wrong is I placed the folder in public due to which my image was not showing
Upvotes: -1 |
2018/03/22 | 1,009 | 3,388 | <issue_start>username_0: I'm using the official stable ZooKeeper Helm chart for Kubernetes [which pulls a ZooKeeper Docker image](https://github.com/kubernetes/charts/blob/b40c8c395d8acfa428a865d8aeb9c607e0cce69c/incubator/zookeeper/templates/statefulset.yaml#L39) from Google's sample images on Google Container Registry.
That ZooKeeper image is available [here](https://console.cloud.google.com/gcr/images/google-samples/GLOBAL/k8szk@sha256:32212dd754b6280ac6c96b615605300f1f060baad1fdf68abd370d2ffb07ae47/details/info?tag=v2), however, I can't seem to find any reference to the Dockerfile for how it is built or if its Dockerfile is generated from some other representation (e.g., [via Bazel](https://github.com/bazelbuild/rules_docker)). I'd like to know info like what else is installed on the image, what OS it's based on, etc.
In general are Dockerfiles for the Google sample images publicly hosted on GCR available?
For the ZooKeeper image specifically, I'd like to determine how it compares to [Confluent's ZooKeeper image](https://hub.docker.com/r/confluentinc/cp-zookeeper/): is it similar? Does it bundle something extra for running ZooKeeper on top of Kubernetes? etc
So far I've done quite a bit of Googling, read through the [Google Container Registry docs](https://cloud.google.com/container-registry/docs/), poked around the [Google org on GitHub](https://github.com/google), and [searched Stack Overflow](https://stackoverflow.com/search?q=google%20container%20registry%20dockerfile) but haven't been able to locate this info.<issue_comment>username_1: For actually seeing the difference between the images, I wouldn't trust the Dockerfile. There's no way to guarantee that a given image was produced by a given Dockerfile, since they're not reproducible.
Have you looked at [container-diff](https://github.com/GoogleCloudPlatform/container-diff)?
```
$ container-diff diff confluentinc/cp-zookeeper gcr.io/google-samples/k8szk:v2
```
If you want something more lightweight (and you trust the image producer) you can glean some information from the config file `"history"`, which has entries that roughly map to the original Dockerfile.
For [gcr.io/google-samples/k8szk:v2](https://gcr.io/google-samples/k8szk:v2), you can do this:
```
$ curl -L https://gcr.io/v2/google-samples/k8szk/blobs/sha256:2fd25e05d6e2046dc454f57e444214756b3ae459909d27d40a70258c98161737 | jq .
```
(That just downloads the config blob. You can find the config digest in the manifest file.)
For images produced by [bazelbuild/rules\_docker](https://github.com/bazelbuild/rules_docker), it will just have "bazel build ..." for each entry, which isn't very useful to you :)
If you want to find the base image, I've had a surprising amount of success just Googling the sha256 digest of the first entry in the manifests's `"layers"`.
For the zookeeper image in particular, it looks like it might be based on `ubuntu:xenial-20161213`.
Upvotes: 0 <issue_comment>username_2: Please do not use images from `gcr.io/google-samples` for production use.
These images are used solely for GKE tutorials on cloud.google.com and they are not actively maintained, in the sense that we don't rebuild them for security vulnerabilities for the components on the images etc.
Source codes for some of the images are at <https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/>.
Upvotes: 2 |
2018/03/22 | 669 | 2,601 | <issue_start>username_0: Hi I'm looking at ways to specify the widths of a large number of objects on a page AND have each object's width displayed within it as text. The main aim is to avoid having a reference to the width anywhere (whether in the HTML, CSS or JS) more than once but I need potentially thousands of these objects on one page (currently I specify the width of the div and a text within it - too inefficient!).
So far I have this: <https://jsfiddle.net/ghostfood/d6acdhq6/17/>
```
This one is
This one is
This one is
function myFunction() {
var x1 = document.getElementById("object1").style.width;
var x2 = document.getElementById("object2").style.width;
var x3 = document.getElementById("object3").style.width;
document.getElementById("percentage1").innerHTML = x1;
document.getElementById("percentage2").innerHTML = x2;
document.getElementById("percentage3").innerHTML = x3;
}
```
The width must be a percentage but ideally would not include the percentage symbol in the displayed text (not sure how it's doing that as this is an example I found online then modified a bit - I do not know JS very well).
I've looked at D3 and amcharts for this briefly but I'm not sure they're best for handling hundreds of small stacked bar charts on one page and with lots of CSS control which is what I need. I may well be wrong!
Summary: Help me figure out a more efficient way of getting and displaying the (percentage) width (as set manually in HTML or JS and within a range of 10% to 100%) of an object within it as text (the caveat being that I need to do this for thousands of small objects on one page).<issue_comment>username_1: The problem with the JS is that you were referencing to `object1` but the name of the is `object`.
When a browser encounters an error, the execution of the script stops. That means that none of your code was running because the error was on the first line (of the function code).
Upvotes: 0 <issue_comment>username_2: Set a common class to all divs that you want to get the width.
Select all of them with `getElementsByClassName()`.
Loop through each one getting its width.
find the `children` `span` and add the string to it.
See below
```js
function myFunction() {
var elements = document.getElementsByClassName("voteshare");
for (var i = 0; i < elements.length; i++){
var thisElement = elements[i];
var thisWidth = thisElement.style.width.toString();
thisElement.children[0].textContent += thisWidth;
}
}
```
```html
This one is
This one is
This one is
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 600 | 2,031 | <issue_start>username_0: I'm using Visual Studio 2017 Professional. When I build, I am getting a lot of TypeScript-related errors even though I don't have any typescript in my solution. The main one is:
`The target "TypeScriptClean" does not exist in the project.`
However, there are no references to anything called "TypeScriptClean" - or typescript at all - in my solution.
I also get:
`The target "CompileTypeScriptWithTSConfig" does not exist in the project.`
Both these errors are occurring in the file Microsoft.Common.CurrentVersion.targets.
Anyone know what is going on here? I basically can't build anything in Visual Studio.<issue_comment>username_1: in visual studio 2017
changing the typescript version to `Use latest available` on the project properties fixed the issue
[![use latest](https://i.stack.imgur.com/R1I7i.png)](https://i.stack.imgur.com/R1I7i.png)
Upvotes: 2 <issue_comment>username_2: This is how i solve the problem.
* >
> 1) Create a new cordova project.
>
>
>
* >
> 2) Install all of the plugins that are installed in your current app.
>
>
>
* >
> 3) Delete the **www** folder from your NEW project.
>
>
>
* >
> 4) Copy the **www** folder from your OLD project and place it in the NEW project.
>
>
>
* >
> 5) Clean the solution (from the build menu)
>
>
>
* >
> 6) Now finally... build your app!
>
>
>
This always works for me when ever i run into random system issues.
Remember, if you installed any 3rd party plugins you may need to re-install them before your app is fully functional again.
Good Luck!
Upvotes: 0 <issue_comment>username_3: I added this to my .jsproj file and it worked. at least it gets past this issue and my project builds again. It's just an empty target, since I do not use typescript.
```
```
I do not see where the typescript version is located for my older cordova project.
Upvotes: 3 <issue_comment>username_4: In my case, I had to make sure I'm debugging/running the project as "Debug x64" and not "Debug Mixed Platforms".
Upvotes: 0 |
2018/03/22 | 645 | 2,042 | <issue_start>username_0: Is there any nice way to validate that all items in a dataframe's column have a valid date format?
My date format is `11-Aug-2010`.
I saw this generic answer, where:
```
try:
    datetime.datetime.strptime(date_text, '%Y-%m-%d')
except ValueError:
    raise ValueError("Incorrect data format, should be YYYY-MM-DD")
```
source: <https://stackoverflow.com/a/16870699/1374488>
But I assume that's not good (efficient) in my case.
I assume I have to modify the strings to be pandas dates first as mentioned here:
[Convert string date time to pandas datetime](https://stackoverflow.com/questions/41501726/convert-string-date-time-to-pandas-datetime)
I am new to the Python world, any ideas appreciated.<issue_comment>username_1: (format borrowed from username_2's answer)
```
if pd.to_datetime(df['date'], format='%d-%b-%Y', errors='coerce').notnull().all():
    # do something
```
This is the LBYL—"Look Before You Leap" approach. This will return `True` assuming all your date strings are valid - meaning they are all converted into actual `pd.Timestamp` objects. Invalid date strings are coerced to `NaT`, which is the datetime equivalent of `NaN`.
Alternatively,
```
try:
    pd.to_datetime(df['date'], format='%d-%b-%Y', errors='raise')
    # do something
except ValueError:
    pass
```
This is the EAFP—"Easier to Ask Forgiveness than Permission" approach; a `ValueError` is raised when invalid date strings are encountered.
Upvotes: 6 [selected_answer]<issue_comment>username_2: If you know your format, you can use boolean slicing
```
mask = pd.to_datetime(df.columns, format='%d-%b-%Y', errors='coerce').notna()
df.loc[:, mask]
```
Consider the dataframe `df`
```
df = pd.DataFrame(1, range(1), ['11-Aug-2010', 'August2010, I think', 1])
df
11-Aug-2010 August2010, I think 1
0 1 1 1
```
I can filter with
```
mask = pd.to_datetime(df.columns, format='%d-%b-%Y', errors='coerce').notna()
df.loc[:, mask]
11-Aug-2010
0 1
```
Upvotes: 3 |
2018/03/22 | 684 | 2,493 | <issue_start>username_0: I have a React / Mobex application written in TypeScript, built by Webpack 1. After updating TypeScript version from 2.3.4 to 2.4.2 i get an error
```
ERROR in C:\myproject\tsconfig.json
error TS2688: Cannot find type definition file for 'reflect-metadata'.
```
I also tried Typescript 2.7.2, same error. I've tried explicitly providing the path to its typings in `"paths"`, tried installing the latest version of `'reflect-metadata'`, including globally - still same error.
**tsconfig.json**
```
{
"compilerOptions": {
"emitDecoratorMetadata": true,
"experimentalDecorators": true,
"jsx": "react",
"lib": ["dom", "es2015.promise", "es6"],
"module": "commonjs",
"moduleResolution": "node",
"noImplicitAny": true,
//"noUnusedLocals": true,
"noUnusedParameters": true,
"outDir": "./dist/",
"sourceMap": true,
"suppressImplicitAnyIndexErrors": true,
"target": "es5",
"types": ["reflect-metadata"],
"baseUrl": ".",
"paths": {
"react-split-pane": ["./type_fixes/react-split-pane/index.d.ts"],
"react-dropzone": ["./type_fixes/react-dropzone/index.d.ts"],
"react-bootstrap-toggle": ["./type_fixes/react-bootstrap-toggle/index.d.ts"]
}
},
"include": [
"./src/**/*",
"./index.tsx",
"./declarations.d.ts"
],
"exclude": [
"node_modules"
]
}
```
**package.json**
```
"dependencies": {
//...
"reflect-metadata": "0.1.10",
//...
},
"devDependencies": {
//...
"typescript": "2.4.2",
//...
},
```<issue_comment>username_1: Try to remove `reflect-metadata` from `types` section. You can import it as usual package:
```
import 'reflect-metadata'; // Just import to make visible Reflect
Reflect.defineMetadata(...)
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Just for reference, for anyone trying to solve the same error message when wanting **reflect-metadata** without an import :)
```
npm i --save-dev @types/reflect-metadata
```
Solved it even though the message from installation says types are included with the **reflect-metadata** package.
Upvotes: 2 <issue_comment>username_3: In my case, this happened when I upgraded TypeScript to version 5.
I had **reflect-metadata** inside tsconfig.json
```
"types": ["reflect-metadata"]
```
Just removing it solved my problem.
Upvotes: 0 |
2018/03/22 | 438 | 1,667 | <issue_start>username_0: i want to make a function that returns a token from a server but `http.post()` gets a response after my function returned the token.
How do I wait for `http.post()` to complete before returning the token?
My code:
```
import { Injectable } from '@angular/core';
import { Http } from '@angular/http';
@Injectable()
export class ServerConnectionService{
constructor( private http : Http) { }
token : string;
Login(Password : string, Username : string, ServerRootURL : string) : string
{
let url = ServerRootURL + "api/AdminApp/RegisterToken";
this.http.post(url, { "Username": Username, "Password": <PASSWORD> }).toPromise()
.then(res => this.token = res.json())
.catch(msg => console.log('Error: ' + msg.status + ' ' + msg.statusText))
return this.token;
}
}
```
Thank you in advance.
2018/03/22 | 645 | 2,718 | <issue_start>username_0: I've splitted my app into two modules: one with main basic functionality and other with less-used features like account settings, faq pages and more.
What I'm trying to accomplish is to lazy load the second module for some root route paths, like `/account` or `/settings` without having to create many different modules. As far as I know Angular lazy load only works with one root route, and the routes configured in the lazy loaded module are set as children of that route.
```
{
path: 'account',
loadChildren: './modules/settings/settings.module#SettingsModule',
},
{
path: 'settings',
loadChildren: './modules/settings/settings.module#SettingsModule',
},
```<issue_comment>username_1: To create an instance of a component in a lazy loaded module without the router, this snippet could help:
```
class LazyLoader {
constructor(private _injector: Injector,
private _moduleLoader: NgModuleFactoryLoader) {
}
public loadLazyModule() {
this._moduleLoader.load('./modules/settings/settings.module#SettingsModule')
.then((moduleFactory: NgModuleFactory<any>) => {
const moduleRef = moduleFactory.create(this._injector);
// Here you need a way to reference the class of the component you want to lazy load
const componentType = (moduleFactory.moduleType as any).COMPONENT_CLASS;
const compFactory = moduleRef.componentFactoryResolver.resolveComponentFactory(componentType);
const componentRef = container.createComponent(compFactory);
// Instance of the lazy loaded component
componentRef.instance
})
}
}
```
Upvotes: 0 <issue_comment>username_2: The truth is that, unless you do something dynamically loaded, like in the answer above, you cannot do this. Angular is not intended to work, out of the box, with this type of structure. This is a common problem and is already being addressed for future releases, with the use of standalone components (see <https://netbasal.com/aim-to-future-proof-your-standalone-angular-components-accb574d273f>)
So, for your case, I think the best approach would be to break your main module into core and shared modules, and create a separate module for each feature (account settings, FAQ pages, ...).
Upvotes: 0 <issue_comment>username_3: The Angular team is working on standalone components, that could potentially make loading some rarely used parts less painful:
<https://blog.angular.io/an-update-on-standalone-components-ea53b4d55214>
One easy (although not 100% optimal) way that is good enough for a lot of requirements probably is using a single shared module - that way your extra overhead is "just" to create another feature module
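For what it's worth, here is a hedged sketch of that single-module idea; the component and file names are assumptions, not taken from the question. The app keeps a single lazy route at the empty path, and the module owns both top-level paths via `RouterModule.forChild`:
```
// settings-routing.module.ts (sketch). The app-level config would then have just one entry:
//   { path: '', loadChildren: './modules/settings/settings.module#SettingsModule' }
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';
import { AccountComponent } from './account.component';   // assumed component
import { SettingsComponent } from './settings.component'; // assumed component

const routes: Routes = [
  { path: 'account', component: AccountComponent },
  { path: 'settings', component: SettingsComponent },
];

@NgModule({
  imports: [RouterModule.forChild(routes)],
  exports: [RouterModule],
})
export class SettingsRoutingModule {}
```
The trade-off is that navigating to either path loads the whole module, which is usually acceptable for rarely used features like settings or FAQ pages.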
Upvotes: 1 |
2018/03/22 | 1,113 | 4,336 | <issue_start>username_0: I'm trying to get into Angular4/5. I have a connection to a CMS service called Sanity (<https://www.sanity.io/>). The issue I am having is that this service is injected to a ProductsComponent class and I call methods in that service at the component's ngOnInit method. However I am getting the following error when I try to recompile using ng serve command:
```
ERROR in src/app/sanity.service.ts(21,14): error TS2339: Property 'fetch' does not exist on type 'object'.
```
Here is the component where I inject the service:
```
import { Component, OnInit } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Query } from '@angular/core/src/metadata/di';
import { SanityService } from '../sanity.service';
@Component({
selector: 'app-product',
templateUrl: './products.component.html',
styleUrls: ['./products.component.css']
})
export class ProductsComponent implements OnInit {
products: object[];
constructor(private http: HttpClient, private sanityService: SanityService) { }
ngOnInit() {
this.sanityService.getProducts((res) => {
this.products = res;
});
}
}
```
here is the service code (sanity.service.ts):
```
import { Injectable } from '@angular/core';
//const sanityClient = require('@sanity/client');
import * as SanityClient from '@sanity/client';
@Injectable()
export class SanityService {
sanityClient: object;
constructor() {
this.sanityClient = SanityClient({
projectId: 'someidhere',
dataset: 'somedatabase',
useCdn: true
});
}
getProducts(callback) {
let query = "*[_type == 'product']{ name, _id, description, price, 'imageUrl': image.asset->url }";
this.sanityClient
.fetch(
query, // Query
{} // Params (optional)
).then(res => {
//console.log('5 movies: ', res)
//this.products = res;
callback(res);
}).catch(err => {
console.error('Oh no, error occured: ', err)
});
}
getProductById(id, callback) {
var query = `*[_id == '${id}']{ name, _id, description, price, 'imageUrl': image.asset->url }`;
this.sanityClient
.fetch(
query, // Query
{} // Params (optional)
).then(res => {
//console.log('PRODUCT: ', res)
//this.products = res;
callback(res);
}).catch(err => {
console.error('Oh no, error occured: ', err)
});
}
}
```
My suspicion is that the object being created/set in the service's constructor is yet to finish 'loading' when the 'getProducts()' function is called and thus, the 'fetch' property does not exist error.
I'm really stumped here...can someone help me out?
Also here is my app.component.ts just in case:
```
import { Component } from '@angular/core';
import { SanityService } from './sanity.service';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css'],
providers: [SanityService]
})
export class AppComponent {
title = 'app';
constructor(private sanityService: SanityService) {}
}
```<issue_comment>username_1: Try to type your SanityClient as `any` or, if you have typings included, as `SanityClient`. Typing as an object tells TypeScript to treat it as a plain object without any additional function, therefore this error is emitted!
So changing `sanityClient: object` to `sanityClient: any` should allow your application to compile.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can create Interface or a class Model which contains `fetch` method.
With interface:
```
export interface Fetch {
fetch
}
```
Or with model:
```
export class Fetch {
fetch
}
```
So after creating a class model for example, import it and do this stuff:
```
sanityClient: Fetch;
```
the difference between the two can be found [here](https://angular.io/guide/styleguide#interfaces).
>
> Consider using a class instead of an interface for services and
> declarables (components, directives, and pipes).
>
>
> Consider using an interface for data models.
>
>
>
Upvotes: 0 |
2018/03/22 | 659 | 2,194 | <issue_start>username_0: I have a code:
```
$orders = Order::all();
$allorders = $orders->count();
$deliveryQuery = $orders->where('status', '=', '8')->select(DB::raw('AVG(updated_at) as order_average'))->first();
```
I get error:
`Method select does not exist.`
How I can fix it? My code is working, if I do:
```
$deliveryQuery = Order::where('status', '=', '8')->select(DB::raw('AVG(updated_at) as order_average'))->first();
```
But this not good, I want 1 query, but not 2..
**UPDATE:**
```
$orders->where('status', '=', '8')->avg('updated_at')->first();
```
I can use this code? But it not working..Get error:
`Object of class Illuminate\Support\Carbon could not be converted to int`<issue_comment>username_1: The all() method return a [Collection](https://laravel.com/docs/5.6/eloquent-collections). You can then use Collection [methods](https://laravel.com/docs/5.6/collections#available-methods) on the result, but where() and select() are methods of the [QueryBuilder](https://laravel.com/docs/5.6/queries) class.
```
$query = Order::query(); // You can get the query builder this way.
$orders = Order::all(); // equivalent to $query->get(); => return a Collection
$orderCount = Order::count(); // equivalent to $query->count();
$orderCount = $orders->count(); // here $orders is a Collection and the count() method is from the Collection class
```
When you call avg(), you're calling it from the Collection class. But it can only work on number, and updated\_at attribute is parsed to a Carbon date by Laravel.
Your code could be :
```
$query = Order::query();
$orders = $query->get();
$allorders = $query->count();
$deliveryQuery = $query->where('status', '=', '8')->select(DB::raw('AVG(updated_at) as order_average'))->first();
```
Upvotes: 1 <issue_comment>username_2: You can try this..
```
$query = Order::query();
$orders = clone $query;
$orders = $orders->all();
$allorders = $orders->count();
$deliveryQuery = $query->where('status', '=', '8')->select(DB::raw('AVG(updated_at) as order_average'))->first();
```
similar question can be found [here](https://stackoverflow.com/questions/27625411/laravel-cloning-query-string)
Upvotes: 1 [selected_answer] |
2018/03/22 | 677 | 2,851 | <issue_start>username_0: I am struggle for a stable answer for this question and not getting any. My doubts are
1. Do we need to store the user name and password in the token, and if yes, then how? I.e. where is this data stored in the payload part; is it in the `sub`?
2. Do we need to store the token in the DB while registering?
3. Are JWT tokens unique for the same set of data? (I think not, because of the different timestamps.)
4. How do we verify the user? That is, by first creating a token from the input data and then verifying it against the token in the DB?
5. How do we log out?
6. Is it better than session<issue_comment>username_1: 1) You need to store some user identification in JWT. Usually it makes sense to list her granted rights verified during authentication and something like display name. Definitely do not store password.
2) No, token is not stored in the database. Tokens are short lived and need to be re-issued every few minutes transparently to user.
3) Every time JWT is re-issued it is unique because one of the things encoded in it is the timestamp for when it expires.
4) First token is created during authentication. Then each request validates the token by decoding it using the private key you used to encode it. If the token is expiring soon you issue the new one using the same data + updated expiration timestamp.
5) Log out is now a front end's job. You need to stop sending requests with the token. Perhaps delete the cookie if you are sending JWT as a cookie.
6) This is better than using session because it is stateless. First obvious win is that you no longer need to store session info in database/maintain client ip address affinity if you are running a cluster of multiple web servers.
Upvotes: 4 [selected_answer]<issue_comment>username_2: In addition to MKs answer (to which I agree) and specifically to your questions 1 and 4:
The password is only used in the first request to obtain the accesss token and of course never part of the token itself.
When you request the access token for the first time, you usually start by sending a token request to the token endpoint; in the case of the so-called [`Resource Owner Password Credentials Grant`](https://auth0.com/docs/api-auth/tutorials/password-grant), the user credentials go in the request body, e.g.
```
grant_type=password&username=user1&password=<PASSWORD>
```
The authorization server (which might be a different endpoint on your resource server) will check the credentials and create an access token which will be used on all subsequent calls to the resource server. The resource server just checks the validity of the token, i.e checks if the signature matches the content.
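As a rough sketch of that flow in Node with the `jsonwebtoken` package (the handler names, secret handling and payload shape below are my assumptions, not something mandated by the spec):
```
// Sketch only: issue a short-lived token after credentials were checked, verify it on later calls.
import jwt from 'jsonwebtoken';

const SECRET = process.env.JWT_SECRET ?? 'change-me'; // assumed HMAC secret from the environment

// Called once, after the username/password have been validated against the DB.
function issueToken(userId: string, roles: string[]): string {
  // No password in the payload, only identification and granted rights.
  return jwt.sign({ sub: userId, roles }, SECRET, { expiresIn: '15m' });
}

// Called on every subsequent request; nothing needs to be looked up in the DB.
function verifyToken(token: string): { sub: string; roles: string[] } {
  return jwt.verify(token, SECRET) as { sub: string; roles: string[] }; // throws if invalid or expired
}
```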
For reference : JWT is described in [RFC 7519](https://www.rfc-editor.org/rfc/rfc7519)
General introduction can be found on <https://jwt.io>
Upvotes: 1 |
2018/03/22 | 1,289 | 3,484 | <issue_start>username_0: [![enter image description here](https://i.stack.imgur.com/vfkgu.png)](https://i.stack.imgur.com/vfkgu.png)My code is displaying two errors. As I'm learning, I've tried several ways to fix it. Please, can anyone help me ?
I need to insert a number and a string into my stack, but there is an error in my structure.
As I'm still a beginner, I'm having trouble seeing the whole picture, but I think the approach is right; maybe it's a syntax error.
>
> Ligacacao *topo - [Error] unknown type name 'Ligacacao'
>
>
> Ligacacao *pnovo = (Ligacacao*)malloc(sizeof(Ligacacao)); - [Error] 'Ligacacao' undeclared (first use in this function)
>
>
>
```
#include
#include
#include
#include
#include
#define MAX 5
typedef struct Ligacacao {
char hora[MAX];
int numero;
struct Ligacacao *prox;
};
char hora[MAX];
int numero;
Ligacacao *topo; //ERRRRRRO
void dados_ligacao() {
printf("\nEntre com a hora da chamada: ");
fflush(stdin);
fgets(hora, MAX, stdin);
printf("Entre com o numero do telefone: ");
fflush(stdin);
scanf("%d", №);
}
void push_ligacao() {
dados_ligacao();
Ligacacao *pnovo = (Ligacacao*)malloc(sizeof(Ligacacao));//ERRRRRRO
strcpy(pnovo->hora, hora);
pnovo->numero = numero;
pnovo->prox = NULL;
if (topo == NULL)//se a pilha estiver vazia
topo = pnovo; //topo recebe o novo elemento
else {
pnovo->prox = topo;
topo = pnovo;
}
}
void pop_ligacao() {
Ligacacao *aux;
if (topo == NULL) {
printf("\n\nErro, Sem ligacoes.\n\n");
return;
}
else {
aux = topo;
topo = topo->prox;
free(aux);
}
}
void listar_ligacao() {
Ligacacao *aux;
aux = topo;
while (aux != NULL) {
printf("\t\t\tDados Ligacao\n\n");
printf("Numero: %d", aux->numero);
aux = aux->prox;
}
}
int main() {
char op;
topo = NULL;
do {
system("cls");
printf("\t\t\Ligacoes");
printf("\n\n(E)mpilhar Ligacacao\n");
printf("(L)istar Estoque Ligacacaos\n");
printf("(D)esempilhar Ligacacao\n");
printf("(S)air do Programa\n\n");
printf("Digite a opcao: ");
op = toupper(getche());
switch (op) {
case'E': push_ligacao();
break;
case'L': listar_ligacao();
break;
case'D': pop_ligacao();
break;
case'S': exit(0);
default: printf("\n\nOpcao invalida, digite novamente.\n\n");
}
system("PAUSE");
} while (op != 'S');
return (0);
}
```<issue_comment>username_1: I think you are typedef'ing your struct incorrectly. Maybe try this:
```
typedef struct Ligacacao {
char hora[MAX];
int numero;
struct Ligacacao *prox;
} Ligacacao;
```
Also, you might find a problem with this: `printf("\t\t\Ligacoes");`
That last backslash probably shouldn't be there, or perhaps you meant to put another 't' afterwards for another tab character.
Upvotes: 2 <issue_comment>username_2: **Error 1 (line 21):**
```
typedef struct Ligacacao {
char hora[MAX];
int numero;
struct Ligacacao *prox;
};
```
Should be:
```
typedef struct {
char hora[MAX];
int numero;
struct Ligacacao *prox;
}Ligacacao;
```
**Error 2 (line 39):**
```
pnovo->prox = topo;
```
should probably be:
```
pnovo->prox = topo->prox;
```
**Error 3 (line 52):**
```
topo = topo->prox;
```
Is wrong but can't be assigned to itself either:
```
topo->prox= topo->prox;
```
**Error 4 (line 63):**
```
aux = aux->prox;
```
Same here.
**Error 5 (line 72):**
```
printf("\t\t\Ligacoes");
```
Should either be:
```
printf("\t\t\tLigacoes");
```
or:
```
printf("\t\tLigacoes");
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 1,050 | 3,637 | <issue_start>username_0: i am trying to trigger on submit event on a form and validate it. two things are not working in my code the first one is that the onsubmit event is not triggered, the second one is that input field of type email is not validated i used html5 input type email and required but still to no a vale. i do not know what am i doing wrong. here is the code:
```
Learn PHP CodeIgniter Framework with AJAX and Bootstrap
### Contacts
Add Contact
| | first name | last name | phone | email | Action |
| --- | --- | --- | --- | --- | --- |
php foreach($contacts as $contact){?| php echo $contact-id;?> | php echo $contact-first\_name;?> | php echo $contact-last\_name;?> | php echo $contact-phone;?> | php echo $contact-email;?> |
|
php }?| ID | first name | last name | phone | email | Action |
| --- | --- | --- | --- | --- | --- |
$(document).ready( function () {
$('#table\_id').DataTable();
$('#form').submit(function(event) {
save();
event.preventDefault();
});
} );
var save\_method; //for save method string
var table;
function add\_contact()
{
save\_method = 'add';
$('#form')[0].reset(); // reset form on modals
$('#modal\_form').modal('show'); // show bootstrap modal
//$('.modal-title').text('Add Person'); // Set Title to Bootstrap modal title
}
function edit\_contact(id)
{
save\_method = 'update';
$('#form')[0].reset(); // reset form on modals
//Ajax Load data from ajax
$.ajax({
url : "<?php echo site\_url('contacts/ajax\_edit/')?>/" + id,
type: "GET",
dataType: "JSON",
success: function(data)
{
$('[name="id"]').val(data.id);
$('[name="first\_name"]').val(data.first\_name);
$('[name="last\_name"]').val(data.last\_name);
$('[name="phone"]').val(data.phone);
$('[name="email"]').val(data.email);
$('#modal\_form').modal('show'); // show bootstrap modal when complete loaded
$('.modal-title').text('Edit Contact'); // Set title to Bootstrap modal title
},
error: function (jqXHR, textStatus, errorThrown)
{
alert('Error get data from ajax');
}
});
}
function save()
{
console.log("submitting");
var url;
if(save\_method == 'add')
{
url = "<?php echo site\_url('contacts/contact\_add')?>";
}
else
{
url = "<?php echo site\_url('contacts/contact\_update')?>";
}
if($('#form')[0].checkValidity())
// ajax adding data to database
$.ajax({
url : url,
type: "POST",
data: $("#form").serialize(),
dataType: "JSON",
//contentType: 'application/json; charset=utf-8',
success: function(data)
{
//console.log(url);
//if success close modal and reload ajax table
$('#modal\_form').modal('hide');
location.reload();// for reload a page
},
error: function (jqXHR, textStatus, errorThrown)
{
alert('Error adding / update data');
}
});
else
{
return false;
}
}
function delete\_contact(id)
{
if(confirm('Are you sure delete this data?'))
{
// ajax delete data from database
$.ajax({
url : "<?php echo site\_url('contacts/contact\_delete')?>/"+id,
type: "POST",
dataType: "JSON",
success: function(data)
{
location.reload();
},
error: function (jqXHR, textStatus, errorThrown)
{
console.log(errorThrown);
alert('Error deleting data');
}
});
}
}
×
### Contact Form
first name
last name
phone
email
Cancel
```<issue_comment>username_1: you have a form in modal box which you want to submit.
Upvotes: 0 <issue_comment>username_2: You need your submit button to be within the form element. Right now it is placed outside of the form.
```
Cancel
```
[Example](https://jsfiddle.net/867frp7m/11/). Note that the PHP won't work, but you'll see the form submit.
Upvotes: 2 [selected_answer] |
2018/03/22 | 323 | 1,174 | <issue_start>username_0: So I have an init function defined something like this:
```
init: function() {
    $("#editRow").click(function() {
    });
    $(".removeRow").click(function() {
    });
}
```
1. So, I was wondering if there was any way for me to call the class method `removeRow` in the onclick event itself? Essentially what I want to do is call the class method `removeRow` when my button gets clicked.
```
var d = '[*delete*](javascript:void(0))';
```
And whenever I click the button I get an error saying that `removeRow()` is not defined. So I am trying to figure out a way to call removeRow from pressing the button.
2. If I wanted to call `editRow` in a different onclick event, would I do it the same way as `removeRow` or would that require a different approach?
Thanks for the help!
2018/03/22 | 209 | 773 | <issue_start>username_0: I'm running the latest public Corda demobench on linux(CentOS7) and when I attempt to start the first node, I get the following:
>
> terminate called after throwing an instance of 'std::bad\_cast'
>
> what(): std::bad\_cast
>
>
>
I was not getting this before I ran a "yum upgrade".
Any help is appreciated.
Regards,
Rob<issue_comment>username_1: Looks like your distro has broken backwards compatibility. I suggest attaching gdb to the java process and waiting for the exception, then obtaining a backtrace to figure out where the issue is. DemoBench is written in Java/Kotlin so the fault lies outside that code.
Upvotes: 1 <issue_comment>username_2: I upgraded to Oracle jdk8u162 and the issue went away.
Upvotes: 1 [selected_answer] |
2018/03/22 | 250 | 915 | <issue_start>username_0: Okay so I'm new to C++ and I just wanted to ask why you shouldn't pass the char array by reference with the "&" sign but you should with strings since both arguments are pointers. Example code I have written:
```
void changeChar(char* buffer, int bSize) {
strcpy_s(buffer, bSize, "test123");
}
void changeString(string* buffer) {
*buffer = "test321";
}
char mychar[10] = "hello world";
string mystr;
changeChar(mychar, sizeof(mychar));
changeString(&mystr);
```
2018/03/22 | 1,092 | 3,840 | <issue_start>username_0: So here is my page(PlayGame). Generated from my controller action as:
```
// GET: Games/PlayGame/Id
[AllowAnonymous]
public ActionResult PlayGame(Game game)
{
return View(game);
}
```
On the PlayGame page I would like to do a action defined in my javascript. Here is the the .cshtml page for PlayGame:
```
@model CapstoneApplication.Models.Game
@Scripts.Render("~/Scripts/RouletteWheel.js")
```
This action is supposed to reference a javascript file action:
```
function play(gameId) {
window.location.href = '@Url.Action("PlayGameRound", "Games")';
}
```
When this action is invoked I want to navigate to the PlayGameRound page and pass a gameId and category from the .js file.
```
//POST: Games/PlayGameRound/GameId/Category
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult PlayGameRound(int gameId, string category)
{
//TODO: Get the selected category and shoot a random question from the category
var random = new Random();
var game = this.db.Games.Find(gameId);
var questionIndex = random.Next(0, game.Round.Questions.Count);
var question = game.Round.Questions.ElementAt(questionIndex);
if (category.Equals("Crown"))
{
}
while (question.Category.CategoryName != category)
{
questionIndex = random.Next(0, game.Round.Questions.Count);
question = game.Round.Questions.ElementAt(questionIndex);
}
return View(game);
}
```
For some reason when this function in the .js gets called:
```
function play(gameId) {
window.location.href = '@Url.Action("PlayGameRound", "Games")';
}
```
It returns me this page:
[404 not found](https://i.stack.imgur.com/PTHrX.png)
and the Url returned is /Games/PlayGame/@Url.Action(%22PlayGameRound%22,%20%22Games%22) but
I am trying to request something like: /Games/PlayGameRound/GameId/Category
Please Help.<issue_comment>username_1: As I stated, you can't use Razor syntax in the javascript file, it can only be used in the View files, so change your javascript line of code from this:
```
function play(gameId) {
window.location.href = '@Url.Action("PlayGameRound", "Games")';
}
```
To this:
```
function play(gameId) {
window.location.href = '/Games/PlayGameRound';
}
```
Upvotes: 0 <issue_comment>username_2: The `Url.Action` method is a c# method call and you cannot use c# in a seperate js file, but you can if it is in the razor view.
What you can do is add another parameter that can take the url as well and use it to redirect like:
```
onclick="play(@Model.Id,'@Url.Action("PlayGameRound", "Games")')"
```
and then in your function use it:
```
function play(gameId,url) {
window.location.href = url;
}
```
or another alternative is to use `data-` attributes like:
```
<button onclick="play(this)" data-id="@Model.Id" data-url='@Url.Action("PlayGameRound", "Games")'>Play</button>
```
and in the function in js:
```
function play(element) {
var ModelID = $(element).data("id"); // ModelId
window.location.href = $(element).data("url"); // url
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: You can't put razor syntax inside a js file. Another way around this is to create global object in the view and reference the same object inside your js file.
@model CapstoneApplication.Models.Game
```
var globalObj = {
myRoute: '@Url.Action("PlayGameRound", "Games")';
}
```
And inside your js file
```
function play(gameId) {
window.location.href = window.myRoute;
}
```
Upvotes: 0 <issue_comment>username_4: You can’t use Razor syntax in JavaScript file as it was said. You have to add the url as a parameter of your JavaScript function so you can use the Razor syntax to get it in your HTML.
AND
You put `[HttpPost]` on your controller method and you want to get it in « Get », it’s not possible.
Upvotes: 0 |
2018/03/22 | 1,481 | 4,315 | <issue_start>username_0: I have the following nested dict:
```
ex_dict = {'path1':
{'$variable1': '2018-01-01',
'$variable2': '2020-01-01',
'$variable3': '$variable1',
'$variable4': '$variable3'},
'path2':
{'$variable1': '2018-01-01',
'$variable2': '2020-01-01',
'$variable3': '$variable1',
'$variable4': '$variable1 + $variable2'}
}
```
I want to replace any $variableX specified for a dict key with the dict value from another key if the key from the other dict value **if** found in the value of the original dict key. See example output below:
```
{'path1':
{'$variable1': '2018-01-01',
'$variable2': '2020-01-01',
'$variable3': '2018-01-01', # Substituted with value from key:variable1
'$variable4': '2018-01-01'}, # Substituted with value from key:variable3 (after variable3 was substituted with variable1)
'path2':
{'$variable1': '2018-01-01',
'$variable2': '2020-01-01',
'$variable3': '2018-01-01', # Substituted with value from key:variable1
'$variable4': '2018-01-01 + 2020-01-01'} # Substituted with value from key:variable3 (after variable3 was substituted with variable1) and key:variable2
}
```
Does anyone have any suggestions?<issue_comment>username_1: You could do replacement by recursively walking through the dict and using the `re` library to do the replacements
```
import re
def process_dict(d):
    reprocess = []
    keys = d.keys()
    while keys:
        for k in keys:
            v = d[k]
            if isinstance(v, dict):
                process_dict(v)
            elif '$' in v:
                d[k] = re.sub(r'\$\w+', lambda m: d[m.group(0)] if m.group(0) in d else m.group(0), v)
                if '$' in d[k] and d[k] != v:
                    reprocess.append(k)
        keys = reprocess
        reprocess = []
```
Edit:
I added a reprocessing step to handle the cases where the references are chained and require multiple passes through some keys in the dictionary to fully process them.
Upvotes: 2 <issue_comment>username_2: Not a very *Pythonic* solution, but, does the trick:
```
from pprint import pprint
ex_dict = {'path1':
{'$variable1': '2018-01-01',
'$variable2': '2020-01-01',
'$variable3': '$variable1',
'$variable4': '$variable3'},
'path2':
{'$variable1': '2018-01-01',
'$variable2': '2020-01-01',
'$variable3': '$variable1',
'$variable4': '$variable1 + $variable2'}
}
for path, d in ex_dict.items():
    for k, v in d.items():
        if v.startswith('$variable'):
            try:
                if '+' in v:
                    ex_dict[path][k] = ' + '.join(ex_dict[path][x.strip()] for x in v.split('+'))
                else:
                    ex_dict[path][k] = ex_dict[path][v]
            except KeyError:
                pass
pprint(ex_dict)
```
Output:
```
{'path1': {'$variable1': '2018-01-01',
'$variable2': '2020-01-01',
'$variable3': '2018-01-01',
'$variable4': '2018-01-01'},
'path2': {'$variable1': '2018-01-01',
'$variable2': '2020-01-01',
'$variable3': '2018-01-01',
'$variable4': '2018-01-01 + 2020-01-01'}}
```
Upvotes: 1 <issue_comment>username_3: You can use a dictionary comprehension:
```
import re
dict = {'path1':
{'$variable1': '2018-01-01',
'$variable2': '2020-01-01',
'$variable3': '$variable1',
'$variable4': '$variable3'},
'path2':
{'$variable1': '2018-01-01',
'$variable2': '2020-01-01',
'$variable3': '$variable1',
'$variable4': '$variable1 + $variable2'}
}
final_data = {a:{c:d if re.findall('\d+-\d+-\d+', d) else \
re.sub('\$\w+', '{}', d).format(*[b[i] for i in re.findall('\$\w+', d)]) \
for c, d in b.items()} for a, b in dict.items()}
```
Output:
```
{'path2': {'$variable4': '2018-01-01 + 2020-01-01', '$variable2': '2020-01-01', '$variable3': '2018-01-01', '$variable1': '2018-01-01'}, 'path1': {'$variable4': '$variable1', '$variable2': '2020-01-01', '$variable3': '2018-01-01', '$variable1': '2018-01-01'}}
```
Upvotes: 0 |
2018/03/22 | 538 | 1,719 | <issue_start>username_0: I'm using flexbox to display a row of blocks. Each block is using flexbox to display a column of elements. One of the elements is an image that needs to maintain its aspect ratio. The code below is a simplified version of my code that illustrates what I'm trying to do.
In Chrome the images scale to 186px x 186px. In IE11 they display 186px x 500px. I've tried wrapping the images in another div and setting its height and/or width to 100%, but nothing works for both Chrome and IE11.
```css
section {
max-width: 600px;
margin: 0 auto;
}
.grid {
display: flex;
flex-wrap: wrap;
margin-left: -20px;
margin-top: 10px;
}
.block {
margin: 0;
flex: 1 0 150px;
margin-left: 20px;
display: flex;
flex-direction: column;
align-items: center;
}
.block img {
width: 100%;
}
```
```html
<section>
  <div class="grid">
    <div class="block">
      <img src="http://placehold.it/500" alt="">
      <h2>Block title</h2>
    </div>
    <div class="block">
      <img src="http://placehold.it/500" alt="">
      <h2>Block title</h2>
    </div>
    <div class="block">
      <img src="http://placehold.it/500" alt="">
      <h2>Block title</h2>
    </div>
  </div>
</section>
```<issue_comment>username_1: It looks like adding `min-height: 1px;` to the `img` is the solution. I wish I had a good explanation for why this works. [Here's the best I could find](https://github.com/philipwalton/flexbugs/issues/75#issuecomment-161800607):
>
> ... My best guess is that the min-height forces IE to recalculate the
> height of the rendered content after all of the resizing, and that
> makes it realize that the height is different....
>
>
>
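Applied to the example above, the fix is a single extra declaration (a minimal sketch; only the `min-height` line is new):

```css
.block img {
  width: 100%;
  min-height: 1px; /* works around the IE11 flex image-scaling bug */
}
```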
Upvotes: 4 [selected_answer]<issue_comment>username_2: For fellow strangers looking for a fix when the top answer doesn't help:
Setting `min-width: 1px;` on the `img` may help as well.
Upvotes: 2 |
2018/03/22 | 834 | 2,888 | <issue_start>username_0: I am trying to create an event rule that is triggered by a change in a file in S3 bucket in different AWS account. Detail description is [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatchEvents-CrossAccountEventDelivery.html)
So far the rule works fine with exact file names, but I need to make it work with filename prefixes. In the working example the file name is an exact string; in the non-working example it is a wildcard. Does the CloudWatch Events rule JSON pattern support wildcards?
Working configuration:
```
{
"source": ["aws.s3"],
"account": ["1<KEY>"],
"detail": {
"eventSource": ["s3.amazonaws.com"],
"eventName": ["PutObject"],
"requestParameters": { "bucketName": ["mybucket"], "key": ["myfile-20180301.csv"] }
}
}
```
Non-working configuration:
```
{
"source": ["aws.s3"],
"account": ["1<KEY>"],
"detail": {
"eventSource": ["s3.amazonaws.com"],
"eventName": ["PutObject"],
"requestParameters": { "bucketName": ["mybucket"], "key": ["myfile-*"] }
}
}
```<issue_comment>username_1: A workaround would be to have a separate bucket where you PUT/COPY the \*.csv files, and to remove the "key" parameter from the rule.
This way CloudWatch will be triggered by any \*.csv file operation on that bucket.
Another thing: I don't see why you are setting the key in the CloudWatch event pattern if the key was already set in CloudTrail.
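For illustration only, the rule for such a dedicated bucket could then drop the key filter entirely (the bucket name below is made up, and the `account` field from the original rule is omitted for brevity):

```json
{
  "source": ["aws.s3"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject"],
    "requestParameters": { "bucketName": ["my-csv-bucket"] }
  }
}
```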
Upvotes: 0 <issue_comment>username_2: If you log [events of interest to Cloudwatch via CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html), then you can use a Cloudwatch metric filter [with wildcard matching](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html) and create a Cloudwatch Event on that filter.
Upvotes: 0 <issue_comment>username_3: I found a fancy solution for this using [Content-based filtering](https://docs.aws.amazon.com/eventbridge/latest/userguide/content-filtering-with-event-patterns.html#filtering-prefix-matching) (released in February 2020), for example **prefix** matching.
So in your case, the solution should be:
```json
{
"source": ["aws.s3"],
"account": ["<KEY>"],
"detail": {
"eventSource": ["s3.amazonaws.com"],
"eventName": ["PutObject"],
"requestParameters": {
"bucketName": ["mybucket"],
"key": [{ "prefix": "myfile-" }]
}
}
}
```
Upvotes: 5 [selected_answer]<issue_comment>username_4: The template code given by username_3 was not working for me; however, the docs led me to a solution:
```
{
"source": ["aws.s3"],
"account": ["<KEY>"],
"detail": {
"eventSource": ["s3.amazonaws.com"],
"eventName": ["PutObject"],
"requestParameters": {
"bucketName": ["mybucket"],
"key": [{"prefix": "myfile-*"}]
}
}
}
```
Hope it helps.
Upvotes: 0 |
2018/03/22 | 869 | 3,065 | <issue_start>username_0: I do understand the concept, where we need 2 backslashes when dealing with regex - <https://stackoverflow.com/a/1701876/72437>
The following code is able to split **hello** and **world** without issue.
```
String message = "hello\nworld";
String[] result = message.split("\\n");
// hello
// world
for (String r : result) {
System.out.println(r);
}
```
However, if I use 1 backslash, it works too. (Able to split **hello** and **world** too)
```
String message = "hello\nworld";
String[] result = message.split("\n");
// hello
// world
for (String r : result) {
System.out.println(r);
}
```
I expected that using only 1 backslash for the regex would not work in Java, but it does. May I know why?<issue_comment>username_1: With a single backslash, the compiler interprets `\n` in the string literal as the `LINEFEED` character; that `LINEFEED` character is what gets passed to `String.split()`, so the Java regex engine receives the `LINEFEED` character *directly*.
In the case of the double backslash `\\n`, your understanding is right: `\\n` in the source code is compiled into the two characters `\n` before reaching the regex engine, and `\n` is the regex engine's *own* escape code for `LINEFEED`.
Upvotes: 2 <issue_comment>username_2: Java allows you to specify a newline character in 2 ways.
One is by specifying the character literal `\n`, just as you would split by a comma `,` or any other character that doesn't *need* to be regex-escaped.
But Java [does allow you to specify a special construct for a newline character](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html).
(In the "Summary of regular-expression constructs" section)
>
> `\n` The newline (line feed) character (`'\u000A'`)
>
>
>
This is a regular expression construct. This isn't the single character `\n`, this is a backslash followed by an "n" character, and the backslash would need to be escaped for Java, as you know, as `\\`.
There is nothing forcing you to use the construct `\\n` instead of the literal `\n`.
All this means that you have the option of specifying the character literal `\n` or using the regular expression construct -- 2 characters -- `\\n`.
The construct has the advantage of being printable, in case you would ever want to print the pattern you're splitting by.
```
System.out.println("\\n"); // \n
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: This is a side effect of how regular expressions are read. Here is why:
```
message.split("\\n");
```
This splits the message on the regex `\` followed by `n`, which gets compiled to a [literal newline because of the `\n` escape](https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html).
```
message.split("\n");
```
This splits the message on a pattern that is already a literal newline character, so it matches the same thing.
Upvotes: 1 <issue_comment>username_4: `"\n"` sends the regex engine the single character ASCII 10.
`"\\n"` sends the regex engine a string of length 2: a backslash followed by `n`.
The two do not mean the same thing, but they produce the same result.
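A quick, self-contained sketch to confirm this (both splits print the same result, since both patterns end up matching the line feed character):

```
import java.util.Arrays;

public class SplitDemo {
    public static void main(String[] args) {
        String message = "hello\nworld";
        // "\n" is already a line feed character in the string literal.
        System.out.println(Arrays.toString(message.split("\n")));   // [hello, world]
        // "\\n" reaches the regex engine as backslash + n, which it compiles to a line feed.
        System.out.println(Arrays.toString(message.split("\\n")));  // [hello, world]
    }
}
```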
Upvotes: 1 |
2018/03/22 | 864 | 2,105 | <issue_start>username_0: I have an array (which is technically a string) of id numbers.
```
ids = "[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"
```
I want to make the ids into an array that looks like this:
```
ids = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
The only way I've found to do this is to use map.
```
id_numbers = ids.split(/,\s?/).map(&:to_i)
```
However, this lops off the first number in the array and replaces it with 0.
```
id_numbers = [0, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
Is there a better way to go about converting a string array into a regular array?<issue_comment>username_1: Since this is actually in JSON format, the answer is easy:
```
require 'json'
id_json = "[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"
ids = JSON.load(id_json)
```
The reason your solution "lops off" the first number is the way you're splitting: the first "number" in your series is actually `"[1"`, which to Ruby is not a number, so `to_i` converts it to 0 by default.
Upvotes: 3 <issue_comment>username_2: If you do not wish to use `JSON`,
```
ids.scan(/\d+/).map(&:to_i)
#=> [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
If `ids` may contain the string representation of negative integers, change the regex to `/-?\d+/`.
Upvotes: 2 <issue_comment>username_3: To summarize: you get `0` as the first element of the array because of the non-digit character at the beginning of your string:
```
p '[1'.to_i #=> 0
```
Maybe there is a better way to receive the original string. If there is no other way to receive it, you can simply get rid of the first character and your own solution will work:
```
ids = "[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11]"
p ids[1..-1].split(",").map(&:to_i)
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11]
```
[@username_1](https://stackoverflow.com/users/87189/username_1)'s and [@CarySwoveland](https://stackoverflow.com/users/256970/cary-swoveland)'s solutions work perfectly fine. Alternatively:
```
ids = "[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11]"
p ids.tr("^-0-9", ' ').split.map(&:to_i)
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11]
```
Keep in mind that `&:` first appeared in Ruby 1.8.7.
Upvotes: 0 |