title | question | answer | url
---|---|---|---
linux - Expand a possible relative path in bash | As arguments to my script there are some file paths. Those can, of course, be relative (or contain ~). But for the functions I've written I need paths that are absolute, but do not have their symlinks resolved. |
MY_PATH=$(readlink -f "$YOUR_ARG") will resolve relative paths like "./" and "../" (quote the argument so paths containing spaces survive word splitting). Note that readlink -f also resolves any symlinks along the way.
Consider this as well (source):
#!/bin/bash
dir_resolve()
{
cd "$1" 2>/dev/null || return $? # cd to desired directory; if fail, quell any error messages but return exit status
echo "`pwd -P`" # output full, link-resolved path
}
# sample usage
if abs_path="`dir_resolve \"$1\"`"
then
echo "$1 resolves to $abs_path"
echo pwd: `pwd` # function forks subshell, so working directory outside function is not affected
else
echo "Could not reach $1"
fi
| https://stackoverflow.com//questions/7126580/expand-a-possible-relative-path-in-bash |
linux - 64 bit ntohl() in C++? | The man pages for htonl() seem to suggest that you can only use it for up to 32 bit values. (In reality, ntohl() is defined for unsigned long, which on my platform is 32 bits. I suppose if the unsigned long were 8 bytes, it would work for 64 bit ints). |
Documentation: man htobe64 on Linux (glibc >= 2.9) or FreeBSD.
Unfortunately, an attempt in 2009 to get OpenBSD, FreeBSD and glibc (Linux) to agree on one (non-kernel-API) libc standard for this did not quite work out.
Currently, this short bit of preprocessor code:
#if defined(__linux__)
# include <endian.h>
#elif defined(__FreeBSD__) || defined(__NetBSD__)
# include <sys/endian.h>
#elif defined(__OpenBSD__)
# include <sys/types.h>
# define be16toh(x) betoh16(x)
# define be32toh(x) betoh32(x)
# define be64toh(x) betoh64(x)
#endif
(tested on Linux and OpenBSD) should hide the differences. It gives you the Linux/FreeBSD-style macros on those 4 platforms.
Use example:
#include <stdint.h> // For 'uint64_t'
uint64_t host_int = 123;
uint64_t big_endian;
big_endian = htobe64( host_int );
host_int = be64toh( big_endian );
It's the most "standard C library"-ish approach available at the moment.
| https://stackoverflow.com//questions/809902/64-bit-ntohl-in-c |
c++ - How do you find what version of libstdc++ library is installed on your linux machine? | I found the following command: strings /usr/lib/libstdc++.so.6 | grep GLIBC from here. It seems to work but this is an ad-hoc/heuristic method. |
To find which library is being used you could run
$ /sbin/ldconfig -p | grep stdc++
libstdc++.so.6 (libc6) => /usr/lib/libstdc++.so.6
The list of compatible versions for libstdc++ version 3.4.0 and above is provided by
$ strings /usr/lib/libstdc++.so.6 | grep LIBCXX
GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
...
For earlier versions the symbol GLIBCPP is defined.
The date stamp of the library is defined in a macro __GLIBCXX__ or __GLIBCPP__ depending on the version:
// libdatestamp.cxx
#include <cstdio>
int main(int argc, char* argv[]){
#ifdef __GLIBCPP__
std::printf("GLIBCPP: %d\n",__GLIBCPP__);
#endif
#ifdef __GLIBCXX__
std::printf("GLIBCXX: %d\n",__GLIBCXX__);
#endif
return 0;
}
$ g++ libdatestamp.cxx -o libdatestamp
$ ./libdatestamp
GLIBCXX: 20101208
The table of datestamps of libstdc++ versions is listed in the GCC documentation.
| https://stackoverflow.com//questions/10354636/how-do-you-find-what-version-of-libstdc-library-is-installed-on-your-linux-mac |
How to find Java Heap Size and Memory Used (Linux)? | How can I check Heap Size (and Used Memory) of a Java Application on Linux through the command line? |
Each Java process has a pid, which you first need to find with the jps command.
Once you have the pid, you can use jstat -gc [insert-pid-here] to find statistics of the behavior of the garbage collected heap.
jstat -gccapacity [insert-pid-here] will present information about memory pool generation and space capabilities.
jstat -gcutil [insert-pid-here] will present the utilization of each generation as a percentage of its capacity. Useful to get an at a glance view of usage.
See jstat docs on Oracle's site.
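For a concrete end-to-end session (the pid and class name below are hypothetical):
$ jps -l
12345 com.example.MyApp
$ jstat -gcutil 12345 1000
The trailing 1000 asks jstat to print a new sample every 1000 ms; press Ctrl+C to stop.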
| https://stackoverflow.com//questions/12797560/how-to-find-java-heap-size-and-memory-used-linux |
Randomly shuffling lines in Linux / Bash | I have some files in Linux, for example 2, and I need to shuffle them into one file. |
You should use shuf command =)
cat file1 file2 | shuf
Or with Perl :
cat file1 file2 | perl -MList::Util=shuffle -wne 'print shuffle <>;'
| https://stackoverflow.com//questions/17578873/randomly-shuffling-lines-in-linux-bash |
What is RSS and VSZ in Linux memory management | What are RSS and VSZ in Linux memory management? In a multithreaded environment how can both of these be managed and tracked? |
RSS is the Resident Set Size and is used to show how much memory is allocated to that process and is in RAM. It does not include memory that is swapped out. It does include memory from shared libraries as long as the pages from those libraries are actually in memory. It does include all stack and heap memory.
VSZ is the Virtual Memory Size. It includes all memory that the process can access, including memory that is swapped out, memory that is allocated, but not used, and memory that is from shared libraries.
So if process A has a 500K binary and is linked to 2500K of shared libraries, has 200K of stack/heap allocations of which 100K is actually in memory (rest is swapped or unused), and it has only actually loaded 1000K of the shared libraries and 400K of its own binary then:
RSS: 400K + 1000K + 100K = 1500K
VSZ: 500K + 2500K + 200K = 3200K
Since part of the memory is shared, many processes may use it, so if you add up all of the RSS values you can easily end up with more space than your system has.
The memory that is allocated also may not be in RSS until it is actually used by the program. So if your program allocated a bunch of memory up front, then uses it over time, you could see RSS going up and VSZ staying the same.
There is also PSS (proportional set size). This is a newer measure which tracks the shared memory as a proportion used by the current process. So if there were two processes using the same shared library from before:
PSS: 400K + (1000K/2) + 100K = 400K + 500K + 100K = 1000K
Threads all share the same address space, so the RSS, VSZ and PSS for each thread is identical to all of the other threads in the process. Use ps or top to view this information in linux/unix.
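For example, to see both numbers for a single process (the pid below is hypothetical):
$ ps -o pid,vsz,rss,comm -p 12345
The vsz and rss columns are reported in kilobytes.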
There is way more to it than this, to learn more check the following references:
http://manpages.ubuntu.com/manpages/en/man1/ps.1.html
https://web.archive.org/web/20120520221529/http://emilics.com/blog/article/mconsumption.html
Also see:
A way to determine a process's "real" memory usage, i.e. private dirty RSS?
| https://stackoverflow.com//questions/7880784/what-is-rss-and-vsz-in-linux-memory-management |
linux - Copy text from nano editor to shell | This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered. |
Nano to Shell:
1. Using mouse to mark the text.
2. Right-Click the mouse in the Shell.
Within Nano:
1. CTRL+6 (or CTRL+Shift+6, or hold Shift and move the cursor) to set the mark, then select what you want (the End key can help extend the selection to the end of a line).
2. ALT+6 for copying the marked text.
3. CTRL+u at the place you want to paste.
or
1. CTRL+6 (or CTRL+Shift+6, or hold Shift and move the cursor) to set the mark, then select what you want (the End key can help extend the selection to the end of a line).
2. CTRL+k for cutting what you want to copy
3. CTRL+u for pasting what you have just cut because you just want to copy.
4. CTRL+u at the place you want to paste.
| https://stackoverflow.com//questions/30507022/copy-text-from-nano-editor-to-shell |
How to send HTML email using linux command line | I need to send email with html format. I have only linux command line and command "mail". |
This worked for me:
echo "<b>HTML Message goes here</b>" | mail -s "$(echo -e "This is the subject\nContent-Type: text/html")" [email protected]
| https://stackoverflow.com//questions/2591755/how-to-send-html-email-using-linux-command-line |
Good Linux (Ubuntu) SVN client | Subversion has a superb client on Windows (Tortoise, of course). Everything I've tried on Linux just - well - sucks in comparison.... |
Disclaimer: A long long time ago I was one of the developers for RabbitVCS (previously known as NautilusSvn).
If you use Nautilus then you might be interested in RabbitVCS (mentioned earlier by Trevor Bramble). It's an unadulterated clone of TortoiseSVN for Nautilus written in Python. While there's still a lot of improvement to be made (especially in the area of performance) some people seem to be quite satisfied with it.
The name is quite fitting for the project, because the story it refers to quite accurately depicts the development pace (meaning long naps). If you do choose to start using RabbitVCS as your version control client, you're probably going to have to get your hands dirty.
| https://stackoverflow.com//questions/86550/good-linux-ubuntu-svn-client |
linux - couldn't connect to server 127.0.0.1 shell/mongo.js | When I set up MongoDB on my Ubuntu machine and try ./mongo, it shows this error: |
Manually remove the lockfile: sudo rm /var/lib/mongodb/mongod.lock
Run the repair script: sudo -u mongodb mongod -f /etc/mongodb.conf --repair
Please note the following:
You must run this command as the mongodb user. If you run it as root, then root will own files in /var/lib/mongodb/ that are necessary to run the mongodb daemon, so when the daemon later tries to run as the mongodb user, it won't have permission to start. In that case you'll get this error: Unable to create / open lock file for lockfilepath: /var/lib/mongodb/mongod.lock errno:13 Permission denied, terminating.
On Ubuntu, you must specify the configuration file /etc/mongodb.conf using the -f flag. Otherwise it will look for the data files in the wrong place and you will see the following error: dbpath (/data/db/) does not exist, terminating.
| https://stackoverflow.com//questions/5726032/couldnt-connect-to-server-127-0-0-1-shell-mongo-js |
linux - Run Python script at startup in Ubuntu | I have a short Python script that needs to run at startup - Ubuntu 13.10. I have tried everything I can think of but can't get it to run. The script: |
Instructions
Copy the python file to /bin:
sudo cp -i /path/to/your_script.py /bin
Add A New Cron Job:
sudo crontab -e
Scroll to the bottom and add the following line (after all the #'s):
@reboot python /bin/your_script.py &
The “&” at the end of the line means the command is run in the background and it won’t stop the system booting up.
Test it:
sudo reboot
Practical example:
Add this file to your Desktop: test_code.py (run it to check that it works for you)
from os.path import expanduser
import datetime
file = open(expanduser("~") + '/Desktop/HERE.txt', 'w')
file.write("It worked!\n" + str(datetime.datetime.now()))
file.close()
Run the following commands:
sudo cp -i ~/Desktop/test_code.py /bin
sudo crontab -e
Add the following line and save it:
@reboot python /bin/test_code.py &
Now reboot your computer and you should find a new file on your Desktop: HERE.txt
| https://stackoverflow.com//questions/24518522/run-python-script-at-startup-in-ubuntu |
linux - Read and write to binary files in C? | Does anyone have an example of code that can write to a binary file. And also code that can read a binary file and output to screen. Looking at examples I can write to a file ok But when I try to read from a file it is not outputting correctly. |
Reading and writing binary files is pretty much the same as any other file, the only difference is how you open it:
unsigned char buffer[10];
FILE *ptr;
ptr = fopen("test.bin","rb"); // r for read, b for binary
fread(buffer,sizeof(buffer),1,ptr); // read 10 bytes to our buffer
You said you can read it, but it's not outputting correctly... keep in mind that when you "output" this data, you're not reading ASCII, so it's not like printing a string to the screen:
for(int i = 0; i<10; i++)
printf("%u ", buffer[i]); // prints a series of bytes
Writing to a file is pretty much the same, with the exception that you're using fwrite() instead of fread():
FILE *write_ptr;
write_ptr = fopen("test.bin","wb"); // w for write, b for binary
fwrite(buffer,sizeof(buffer),1,write_ptr); // write 10 bytes from our buffer
Since we're talking Linux.. there's an easy way to do a sanity check. Install hexdump on your system (if it's not already on there) and dump your file:
mike@mike-VirtualBox:~/C$ hexdump test.bin
0000000 457f 464c 0102 0001 0000 0000 0000 0000
0000010 0001 003e 0001 0000 0000 0000 0000 0000
...
Now compare that to your output:
mike@mike-VirtualBox:~/C$ ./a.out
127 69 76 70 2 1 1 0 0 0
hmm, maybe change the printf to a %x to make this a little clearer:
mike@mike-VirtualBox:~/C$ ./a.out
7F 45 4C 46 2 1 1 0 0 0
Hey, look! The data matches up now*. Awesome, we must be reading the binary file correctly!
*Note: hexdump groups the file into 16-bit little-endian words by default, so on a little-endian machine adjacent bytes appear swapped in its output; the data itself is correct. Use hexdump -C to see the bytes in file order.
| https://stackoverflow.com//questions/17598572/read-and-write-to-binary-files-in-c |
How do I find all the files that were created today in Unix/Linux? | How do I find all the files that were created only today, and not within the last 24-hour period, in Unix/Linux? |
On my Fedora 10 system, with findutils-4.4.0-1.fc10.i386:
find <path> -daystart -ctime 0 -print
The -daystart flag tells it to calculate from the start of today instead of from 24 hours ago.
Note however that this will actually list files created or modified in the last day. find has no options that look at the true creation date of the file.
| https://stackoverflow.com//questions/801095/how-do-i-find-all-the-files-that-were-created-today-in-unix-linux |
ruby on rails - cache resources exhausted Imagemagick | I'm using Imagemagick on a rails app with Minimagick and I generate some pictogram with it. |
Find the policy.xml with find / -name "policy.xml"
something like /etc/ImageMagick-6/policy.xml
and change
<policy domain="resource" name="disk" value="1GiB"/>
to
<policy domain="resource" name="disk" value="8GiB"/>
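To verify which limits ImageMagick is actually applying, you can list them (identify ships with ImageMagick):
identify -list resource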
See also the related questions "convert fails due to resource limits" and "Memory issues".
| https://stackoverflow.com//questions/31407010/cache-resources-exhausted-imagemagick |
linux - Compare integer in bash, unary operator expected | The following code gives |
Your problem arises from the fact that $i has a blank value when your statement fails. Always quote your variables when performing comparisons if there is the slightest chance that one of them may be empty, e.g.:
if [ "$i" -ge 2 ] ; then
...
fi
This is because of how the shell treats variables. Assume the original example,
if [ $i -ge 2 ] ; then ...
The first thing that the shell does when executing that particular line of code is substitute the value of $i, just like your favorite editor's search & replace function would. So assume that $i is empty or, even more illustrative, assume that $i is a bunch of spaces! The shell will replace $i as follows:
if [ -ge 2 ] ; then ...
Now that variable substitutions are done, the shell proceeds with the comparison and.... fails because it cannot see anything intelligible to the left of -ge. However, quoting $i:
if [ "$i" -ge 2 ] ; then ...
becomes:
if [ " " -ge 2 ] ; then ...
The shell now sees the double-quotes, knows that you are actually comparing four blanks to 2, and the test fails (with an "integer expression expected" complaint), so the if branch is skipped.
You also have the option of specifying a default value for $i if $i is blank, as follows:
if [ "${i:-0}" -ge 2 ] ; then ...
This will substitute the value 0 for $i if $i is undefined or empty. I still maintain the quotes because, again, if $i is a bunch of blanks then it does not count as undefined or empty, it will not be replaced with 0, and you will run into the problem once again.
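A quick demonstration of the default-value form, safe to paste into a shell:
$ unset i
$ if [ "${i:-0}" -ge 2 ] ; then echo big ; else echo small ; fi
small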
Please read this when you have the time. The shell is treated like a black box by many, but it operates with very few and very simple rules - once you are aware of what those rules are (one of them being how variables work in the shell, as explained above) the shell will have no more secrets for you.
| https://stackoverflow.com//questions/408975/compare-integer-in-bash-unary-operator-expected |
Quickly create a large file on a Linux system | How can I quickly create a large file on a Linux (Red Hat Linux) system? |
dd from the other answers is a good solution, but it is slow for this purpose. In Linux (and other POSIX systems), we have fallocate, which allocates the desired space without actually having to write to it; it works with most modern disk-based file systems and is very fast:
For example:
fallocate -l 10G gentoo_root.img
| https://stackoverflow.com//questions/257844/quickly-create-a-large-file-on-a-linux-system |
linux - How to get memory usage at runtime using C++? | I need to get the mem usage VIRT and RES at run time of my program and display them. |
On Linux, I've never found an ioctl() solution. For our applications, we coded a general utility routine based on reading files in /proc/pid. There are a number of these files which give differing results. Here's the one we settled on (the question was tagged C++, and we handled I/O using C++ constructs, but it should be easily adaptable to C i/o routines if you need to):
#include <unistd.h>
#include <ios>
#include <iostream>
#include <fstream>
#include <string>
//////////////////////////////////////////////////////////////////////////////
//
// process_mem_usage(double &, double &) - takes two doubles by reference,
// attempts to read the system-dependent data for a process' virtual memory
// size and resident set size, and return the results in KB.
//
// On failure, returns 0.0, 0.0
void process_mem_usage(double& vm_usage, double& resident_set)
{
   using std::ios_base;
   using std::ifstream;
   using std::string;

   vm_usage     = 0.0;
   resident_set = 0.0;

   // 'file' stat seems to give the most reliable results
   //
   ifstream stat_stream("/proc/self/stat", ios_base::in);

   // dummy vars for leading entries in stat that we don't care about
   //
   string pid, comm, state, ppid, pgrp, session, tty_nr;
   string tpgid, flags, minflt, cminflt, majflt, cmajflt;
   string utime, stime, cutime, cstime, priority, nice;
   string O, itrealvalue, starttime;

   // the two fields we want
   //
   unsigned long vsize;
   long rss;

   stat_stream >> pid >> comm >> state >> ppid >> pgrp >> session >> tty_nr
               >> tpgid >> flags >> minflt >> cminflt >> majflt >> cmajflt
               >> utime >> stime >> cutime >> cstime >> priority >> nice
               >> O >> itrealvalue >> starttime >> vsize >> rss; // don't care about the rest

   stat_stream.close();

   long page_size_kb = sysconf(_SC_PAGE_SIZE) / 1024; // in case x86-64 is configured to use 2MB pages
   vm_usage     = vsize / 1024.0;
   resident_set = rss * page_size_kb;
}

int main()
{
   using std::cout;
   using std::endl;

   double vm, rss;
   process_mem_usage(vm, rss);
   cout << "VM: " << vm << "; RSS: " << rss << endl;
}
| https://stackoverflow.com//questions/669438/how-to-get-memory-usage-at-runtime-using-c |
bash - How to change the output color of echo in Linux | I am trying to print a text in the terminal using echo command. |
You can use these ANSI escape codes:
Black        0;30     Dark Gray     1;30
Red          0;31     Light Red     1;31
Green        0;32     Light Green   1;32
Brown/Orange 0;33     Yellow        1;33
Blue         0;34     Light Blue    1;34
Purple       0;35     Light Purple  1;35
Cyan         0;36     Light Cyan    1;36
Light Gray   0;37     White         1;37
And then use them like this in your script:
# .---------- constant part!
# vvvv vvvv-- the code from above
RED='\033[0;31m'
NC='\033[0m' # No Color
printf "I ${RED}love${NC} Stack Overflow\n"
which prints love in red.
From @james-lim's comment, if you are using the echo command, be sure to use the -e flag to allow backslash escapes.
# .---------- constant part!
# vvvv vvvv-- the code from above
RED='\033[0;31m'
NC='\033[0m' # No Color
echo -e "I ${RED}love${NC} Stack Overflow"
(don't add "\n" when using echo unless you want to add an additional empty line)
| https://stackoverflow.com//questions/5947742/how-to-change-the-output-color-of-echo-in-linux |
linux - rsync not synchronizing .htaccess file | I am trying to rsync directory A of server1 with directory B of server2. |
This is due to the fact that * is by default expanded to all files in the current working directory except the files whose name starts with a dot. Thus, rsync never receives these files as arguments.
You can pass . denoting current working directory to rsync:
rsync -av . server2::sharename/B
This way rsync will look for files to transfer in the current working directory as opposed to looking for them in what * expands to.
Alternatively, you can use the following command to make * expand to all files including those which start with a dot:
shopt -s dotglob
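With that option set, a command run from inside directory A will now include dotfiles such as .htaccess (sharename as in the question):
shopt -s dotglob
rsync -av * server2::sharename/B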
See also shopt manpage.
| https://stackoverflow.com//questions/9046749/rsync-not-synchronizing-htaccess-file |
How do I get Windows to go as fast as Linux for compiling C++? | I know this is not so much a programming question but it is relevant. |
Unless a hardcore Windows systems hacker comes along, you're not going to get more than partisan comments (which I won't do) and speculation (which is what I'm going to try).
File system - You should try the same operations (including the dir) on the same filesystem. I came across this which benchmarks a few filesystems for various parameters.
Caching. I once tried to run a compilation on Linux on a RAM disk and found that it was slower than running it on disk thanks to the way the kernel takes care of caching. This is a solid selling point for Linux and might be the reason why the performance is so different.
Bad dependency specifications on Windows. Maybe the chromium dependency specifications for Windows are not as correct as for Linux. This might result in unnecessary compilations when you make a small change. You might be able to validate this using the same compiler toolchain on Windows.
| https://stackoverflow.com//questions/6916011/how-do-i-get-windows-to-go-as-fast-as-linux-for-compiling-c |
linux - Find unique lines | How can I find the unique lines and remove all duplicates from a file?
My input file is |
uniq has the option you need:
-u, --unique
only print unique lines
$ cat file.txt
1
1
2
3
5
5
7
7
$ uniq -u file.txt
2
3
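Note that uniq only compares adjacent lines, so this works here because file.txt is already sorted. For unsorted input, sort it first:
sort file.txt | uniq -u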
| https://stackoverflow.com//questions/13778273/find-unique-lines |
What does the "no version information available" error from linux dynamic linker mean? | In our product we ship some linux binaries that dynamically link to system libraries like "libpam". On some customer systems we get the following error on stderr when the program runs: |
The "no version information available" means that the library version number is lower on the shared object. For example, if your major.minor.patch number is 7.15.5 on the machine where you build the binary, and the major.minor.patch number is 7.12.1 on the installation machine, ld will print the warning.
You can fix this by compiling with a library (headers and shared objects) that matches the shared object version shipped with your target OS. E.g., if you are going to install to RedHat 3.4.6-9 you don't want to compile on Debian 4.1.1-21. This is one of the reasons that most distributions ship for specific linux distro numbers.
Otherwise, you can statically link. However, you don't want to do this with something like PAM, so you want to actually install a development environment that matches your client's production environment (or at least install and link against the correct library versions.)
Advice you get to rename the .so files (padding them with version numbers) stems from a time when shared object libraries did not use versioned symbols. So don't expect that playing with the .so.n.n.n naming scheme is going to help (much - it might help if your system has been trashed).
Your last option will be compiling with a library with a different minor version number, using a custom linking script:
http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/gnu-linker/scripts.html
To do this, you'll need to write a custom script, and you'll need a custom installer that runs ld against your client's shared objects, using the custom script. This requires that your client have gcc or ld on their production system.
| https://stackoverflow.com//questions/137773/what-does-the-no-version-information-available-error-from-linux-dynamic-linker |
linux - How to update-alternatives to Python 3 without breaking apt? | The other day I decided that I wanted the command python to default to firing up python3 instead of python2. |
Per Debian policy, python refers to Python 2 and python3 refers to Python 3. Don't try to change this system-wide or you are in for the sort of trouble you already discovered.
Virtual environments allow you to run an isolated Python installation with whatever version of Python and whatever libraries you need without messing with the system Python install.
With recent Python 3, venv is part of the standard library; with older versions, you might need to install python3-venv or a similar package.
$HOME~$ python --version
Python 2.7.11
$HOME~$ python3 -m venv myenv
... stuff happens ...
$HOME~$ . ./myenv/bin/activate
(myenv) $HOME~$ type python # "type" is preferred over which; see POSIX
python is /home/you/myenv/bin/python
(myenv) $HOME~$ python --version
Python 3.5.1
A common practice is to have a separate environment for each project you work on, anyway; but if you want this to look like it's effectively system-wide for your own login, you could add the activation stanza to your .profile or similar.
| https://stackoverflow.com//questions/43062608/how-to-update-alternatives-to-python-3-without-breaking-apt |
Is there a way to figure out what is using a Linux kernel module? | If I load a kernel module and list the loaded modules with lsmod, I can get the "use count" of the module (number of other modules with a reference to the module). Is there a way to figure out what is using a module, though? |
Actually, there seems to be a way to list processes that claim a module/driver - however, I haven't seen it advertised (outside of Linux kernel documentation), so I'll jot down my notes here:
First of all, many thanks for @haggai_e's answer; the pointer to the functions try_module_get and try_module_put as those responsible for managing the use count (refcount) was the key that allowed me to track down the procedure.
Looking further for this online, I somehow stumbled upon the post Linux-Kernel Archive: [PATCH 1/2] tracing: Reduce overhead of module tracepoints; which finally pointed to a facility present in the kernel, known as (I guess) "tracing"; the documentation for this is in the directory Documentation/trace - Linux kernel source tree. In particular, two files explain the tracing facility, events.txt and ftrace.txt.
But, there is also a short "tracing mini-HOWTO" on a running Linux system in /sys/kernel/debug/tracing/README (see also I'm really really tired of people saying that there's no documentation…); note that in the kernel source tree, this file is actually generated by the file kernel/trace/trace.c. I've tested this on Ubuntu natty, and note that since /sys is owned by root, you have to use sudo to read this file, as in sudo cat or
sudo less /sys/kernel/debug/tracing/README
... and that goes for pretty much all other operations under /sys which will be described here.
First of all, here is a simple minimal module/driver code (which I put together from the referred resources), which simply creates a /proc/testmod-sample file node, which returns the string "This is testmod." when it is being read; this is testmod.c:
/*
https://github.com/spotify/linux/blob/master/samples/tracepoints/tracepoint-sample.c
https://www.linux.com/learn/linux-training/37985-the-kernel-newbie-corner-kernel-debugging-using-proc-qsequenceq-files-part-1
*/
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h> // for sequence files
struct proc_dir_entry *pentry_sample;
char *defaultOutput = "This is testmod.";
static int my_show(struct seq_file *m, void *v)
{
seq_printf(m, "%s\n", defaultOutput);
return 0;
}
static int my_open(struct inode *inode, struct file *file)
{
return single_open(file, my_show, NULL);
}
static const struct file_operations mark_ops = {
.owner = THIS_MODULE,
.open = my_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int __init sample_init(void)
{
printk(KERN_ALERT "sample init\n");
pentry_sample = proc_create(
"testmod-sample", 0444, NULL, &mark_ops);
if (!pentry_sample)
return -EPERM;
return 0;
}
static void __exit sample_exit(void)
{
printk(KERN_ALERT "sample exit\n");
remove_proc_entry("testmod-sample", NULL);
}
module_init(sample_init);
module_exit(sample_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Mathieu Desnoyers et al.");
MODULE_DESCRIPTION("based on Tracepoint sample");
This module can be built with the following Makefile (just have it placed in the same directory as testmod.c, and then run make in that same directory):
CONFIG_MODULE_FORCE_UNLOAD=y
# for oprofile
DEBUG_INFO=y
EXTRA_CFLAGS=-g -O0
obj-m += testmod.o
# mind the tab characters needed at start here:
all:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
clean:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
When this module/driver is built, the output is a kernel object file, testmod.ko.
At this point, we can prepare the event tracing related to try_module_get and try_module_put; those are in /sys/kernel/debug/tracing/events/module:
$ sudo ls /sys/kernel/debug/tracing/events/module
enable filter module_free module_get module_load module_put module_request
Note that on my system, tracing is by default enabled:
$ sudo cat /sys/kernel/debug/tracing/tracing_enabled
1
... however, the module tracing (specifically) is not:
$ sudo cat /sys/kernel/debug/tracing/events/module/enable
0
Now, we should first make a filter, that will react on the module_get, module_put etc events, but only for the testmod module. To do that, we should first check the format of the event:
$ sudo cat /sys/kernel/debug/tracing/events/module/module_put/format
name: module_put
ID: 312
format:
...
field:__data_loc char[] name; offset:20; size:4; signed:1;
print fmt: "%s call_site=%pf refcnt=%d", __get_str(name), (void *)REC->ip, REC->refcnt
Here we can see that there is a field called name, which holds the driver name, which we can filter against. To create a filter, we simply echo the filter string into the corresponding file:
sudo bash -c "echo name == testmod > /sys/kernel/debug/tracing/events/module/filter"
Here, first note that since we have to call sudo, we have to wrap the whole echo redirection as an argument command of a sudo-ed bash. Second, note that since we wrote to the "parent" module/filter, not the specific events (which would be module/module_put/filter etc), this filter will be applied to all events listed as "children" of module directory.
Finally, we enable tracing for module:
sudo bash -c "echo 1 > /sys/kernel/debug/tracing/events/module/enable"
From this point on, we can read the trace log file; for me, reading the blocking,
"piped" version of the trace file worked - like this:
sudo cat /sys/kernel/debug/tracing/trace_pipe | tee tracelog.txt
At this point, we will not see anything in the log - so it is time to load (and utilize, and remove) the driver (in a different terminal from where trace_pipe is being read):
$ sudo insmod ./testmod.ko
$ cat /proc/testmod-sample
This is testmod.
$ sudo rmmod testmod
If we go back to the terminal where trace_pipe is being read, we should see something like:
# tracer: nop
#
# TASK-PID CPU# TIMESTAMP FUNCTION
# | | | | |
insmod-21137 [001] 28038.101509: module_load: testmod
insmod-21137 [001] 28038.103904: module_put: testmod call_site=sys_init_module refcnt=2
rmmod-21354 [000] 28080.244448: module_free: testmod
That is pretty much all we will obtain for our testmod driver - the refcount changes only when the driver is loaded (insmod) or unloaded (rmmod), not when we do a read through cat. So we can simply interrupt the read from trace_pipe with CTRL+C in that terminal; and to stop the tracing altogether:
sudo bash -c "echo 0 > /sys/kernel/debug/tracing/tracing_enabled"
Here, note that most examples refer to reading the file /sys/kernel/debug/tracing/trace instead of trace_pipe as here. However, one problem is that this file is not meant to be "piped" (so you shouldn't run a tail -f on this trace file); but instead you should re-read the trace after each operation. After the first insmod, we would obtain the same output from cat-ing both trace and trace_pipe; however, after the rmmod, reading the trace file would give:
<...>-21137 [001] 28038.101509: module_load: testmod
<...>-21137 [001] 28038.103904: module_put: testmod call_site=sys_init_module refcnt=2
rmmod-21354 [000] 28080.244448: module_free: testmod
... that is: at this point, insmod had already exited long before, and so it no longer exists in the process list - and therefore cannot be found via the process ID (PID) recorded at the time - thus we get a blank <...> as the process name. Therefore, it is better to log (via tee) a running output from trace_pipe in this case. Also, note that in order to clear/reset/erase the trace file, one simply writes a 0 to it:
sudo bash -c "echo 0 > /sys/kernel/debug/tracing/trace"
If this seems counterintuitive, note that trace is a special file, and will always report a file size of zero anyways:
$ sudo ls -la /sys/kernel/debug/tracing/trace
-rw-r--r-- 1 root root 0 2013-03-19 06:39 /sys/kernel/debug/tracing/trace
... even if it is "full".
Finally, note that if we didn't implement a filter, we would have obtained a log of all module calls on the running system - which would log any call (also background) to grep and such, as those use the binfmt_misc module:
...
tr-6232 [001] 25149.815373: module_put: binfmt_misc call_site=search_binary_handler refcnt=133194
..
grep-6231 [001] 25149.816923: module_put: binfmt_misc call_site=search_binary_handler refcnt=133196
..
cut-6233 [000] 25149.817842: module_put: binfmt_misc call_site=search_binary_handler refcnt=129669
..
sudo-6234 [001] 25150.289519: module_put: binfmt_misc call_site=search_binary_handler refcnt=133198
..
tail-6235 [000] 25150.316002: module_put: binfmt_misc call_site=search_binary_handler refcnt=129671
... which adds quite a bit of overhead (in both the amount of log data and the processing time required to generate it).
While looking this up, I stumbled upon Debugging Linux Kernel by Ftrace PDF, which refers to a tool trace-cmd, which pretty much does the similar as above - but through an easier command line interface. There is also a "front-end reader" GUI for trace-cmd called KernelShark; both of these are also in Debian/Ubuntu repositories via sudo apt-get install trace-cmd kernelshark. These tools could be an alternative to the procedure described above.
Finally, I'd just note that, while the above testmod example doesn't really show use in the context of multiple claims, I have used the same tracing procedure to discover that a USB module I was writing was repeatedly claimed by pulseaudio as soon as the USB device was plugged in - so the procedure seems to work for such use cases.
| https://stackoverflow.com//questions/448999/is-there-a-way-to-figure-out-what-is-using-a-linux-kernel-module |
linux - Docker can't connect to docker daemon | After I update my Docker version to 0.8.0, I get an error message while entering sudo docker version: |
Linux
The Post-installation steps for Linux documentation reveals the following steps:
Create the docker group.
sudo groupadd docker
Add the user to the docker group.
sudo usermod -aG docker $(whoami)
Log out and log back in to ensure docker runs with correct permissions.
Start docker.
sudo service docker start
Mac OS X
As Dayel Ostraco says, it is necessary to add environment variables:
docker-machine start # Start virtual machine for docker
docker-machine env # Helps to get the environment variables
eval "$(docker-machine env default)" # Set environment variables
The docker-machine start command outputs the comments to guide the process.
| https://stackoverflow.com//questions/21871479/docker-cant-connect-to-docker-daemon |
linux - Keep the window's name fixed in tmux | I want to keep the windows' names fixed after I rename them. But after renaming, they keep changing when I execute commands. |
As shown in a comment to the main post: set-option -g allow-rename off in your .tmux.conf file
| https://stackoverflow.com//questions/6041178/keep-the-windows-name-fixed-in-tmux |
linux - How do I uninstall a program installed with the Appimage Launcher? | This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered. |
Since an AppImage is not "installed", you don't need to "uninstall" it. Just delete the AppImage file and the application is gone. Additionally you may want to remove menu entry by deleting the desktop file from $HOME/.local/share/applications/.
Files and directories with names starting with a full stop (dot) (.example) are hidden, so you might need to make hidden files visible. You can probably find that option somewhere in the settings of the file manager you use, and in many file managers you can toggle it with Ctrl+H.
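A typical cleanup might therefore look like this (both file names are hypothetical; substitute the actual paths on your system):
rm ~/Applications/MyApp.AppImage
rm ~/.local/share/applications/myapp.desktop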
| https://stackoverflow.com//questions/43680226/how-do-i-uninstall-a-program-installed-with-the-appimage-launcher |
linux - git submodule update failed with 'fatal: detected dubious ownership in repository at' | I mounted a new hdd in my linux workstation. It looks working well. I want to download some repo in the new disk. So I execute git clone XXX, and it works well. But when I cd in the folder, and execute git submodule update --init --recursive. It failed with |
Silence all safe.directory warnings
tl;dr
Silence all warnings related to git's safe.directory system. Be sure to understand what you're doing.
git config --global --add safe.directory '*'
Long version
Adapted from this post: "I cannot add the parent directory to safe.directory in Git".
I had the same issue and resolved it by disabling safe directory checks, which will end all the "unsafe repository" errors.
This can be done by running the following command1:
git config --global --add safe.directory '*'
Which will add the following setting to your global .gitconfig file:
[safe]
directory = *
Before disabling, make sure you understand this security measure, and why it exists. You should not do this if your repositories are stored on a shared drive.
However, if you are the sole user of your machine 100% of the time, and your repositories are stored locally, then disabling this check should, theoretically, pose no increased risk.
Also note that you can't currently combine this with a file path, which would be relevant in my case. The command doesn't interpret the wildcard * as an operator per se; it just takes the "*" argument to mean "disable safe repository checks / consider all repositories as safe".
1 - If this fails in your particular terminal program in Windows, try surrounding the wildcard with double quotes instead of single (Via this GitHub issue):
git config --global --add safe.directory "*"
| https://stackoverflow.com//questions/72978485/git-submodule-update-failed-with-fatal-detected-dubious-ownership-in-repositor |
linux - Difference between checkout and export in SVN | What is the exact difference between SVN checkout and SVN export? |
svn export simply extracts all the files from a revision and does not allow revision control on it. It also does not litter each directory with .svn directories.
svn checkout allows you to use version control in the directory made, e.g. your standard commands such as svn update and svn commit.
| https://stackoverflow.com//questions/419467/difference-between-checkout-and-export-in-svn |
linux - How do I SET the GOPATH environment variable on Ubuntu? What file must I edit? | I am trying to do a go get: |
New Way:
Check out this answer.
Note: this is not for trying out a Go application / binaries on your host machine using go install [repo url]; in such cases you still have to use the old way.
Old Way:
Just add the following lines to ~/.bashrc and this will persist. However, you can use other paths you like as GOPATH instead of $HOME/go in my sample.
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
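After saving, reload your shell configuration and verify the result (go env is part of the standard Go toolchain):
source ~/.bashrc
go env GOPATH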
| https://stackoverflow.com//questions/21001387/how-do-i-set-the-gopath-environment-variable-on-ubuntu-what-file-must-i-edit |
memory - Understanding the Linux oom-killer's logs | My app was killed by the oom-killer. It is Ubuntu 11.10 running on a live USB with no swap and the PC has 1 Gig of RAM. The only app running (other than all the built in Ubuntu stuff) is my program flasherav. Note that /tmp is memory mapped and at the time of the crash had about 200MB of files in it (so was taking up ~200MB of RAM). |
Memory management in Linux is a bit tricky to understand, and I can't say I fully understand it yet, but I'll try to share a little bit of my experience and knowledge.
Short answer to your question: yes, there is other memory in use besides what's in the list.
What's shown in your list is applications running in userspace. The kernel uses memory for itself and its modules; on top of that, it also has a lower limit of free memory that you can't go under. When you've reached that level it will try to free up resources, and when it can't do that anymore, you end up with an OOM problem.
From the last line of your list you can read that the kernel reports a total-vm usage of: 1498536kB (1,5GB), where the total-vm includes both your physical RAM and swap space. You stated you don't have any swap but the kernel seems to think otherwise since your swap space is reported to be full (Total swap = 524284kB, Free swap = 0kB) and it reports a total vmem size of 1,5GB.
Another thing that can complicate matters is memory fragmentation. You can hit the OOM killer when the kernel tries to allocate, say, 4096kB of contiguous memory but nothing that large is available.
Now that alone probably won't help you solve the actual problem. I don't know if it's normal for your program to require that amount of memory, but I would recommend to try a static code analyzer like cppcheck to check for memory leaks or file descriptor leaks. You could also try to run it through Valgrind to get a bit more information out about memory usage.
| https://stackoverflow.com//questions/9199731/understanding-the-linux-oom-killers-logs |
linux - Total size of the contents of all the files in a directory | This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered. |
If you want the 'apparent size' (that is, the number of bytes in each file), not the size taken up by files on the disk, use the -b or --bytes option (if you have a Linux system with GNU coreutils):
% du -sbh <directory>
| https://stackoverflow.com//questions/1241801/total-size-of-the-contents-of-all-the-files-in-a-directory |
linux - select vs poll vs epoll | I am designing a new server which needs to support thousands of UDP connections (somewhere around 100,000 sessions). Any input or suggestions on which one to use? |
The answer is epoll if you're using Linux, kqueue if you're using FreeBSD or Mac OS X, and i/o completion ports if you're on Windows.
Some additional things you'll (almost certainly) want to research are:
Load balancing techniques
Multi-threaded networking
Database architecture
Perfect hash tables
Additionally, it is important to note that, unlike TCP, UDP does not have "connections". It would also be in your best interest to start small and scale up, since debugging network-based solutions can be challenging.
| https://stackoverflow.com//questions/4039832/select-vs-poll-vs-epoll |
math - How do I divide in the Linux console? | I have two variables and I want to find the value of one divided by the other. What commands should I use to do this? |
In the bash shell, surround arithmetic expressions with $(( ... ))
$ echo $(( 7 / 3 ))
2
Although I think you are limited to integers.
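Indeed, shell arithmetic is integer-only; for fractional results you can hand the expression to bc instead (the variables below are illustrative):
$ a=7 b=3
$ echo "scale=3; $a / $b" | bc
2.333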
| https://stackoverflow.com//questions/1088098/how-do-i-divide-in-the-linux-console |
linux - FUSE error: Transport endpoint is not connected | I'm trying to implement the FUSE filesystem. I am receiving this error: |
This typically is caused by the mount directory being left mounted due to a crash of your filesystem. Go to the parent directory of the mount point and enter fusermount -u YOUR_MNT_DIR.
If this doesn't do the trick, do sudo umount -l YOUR_MNT_DIR.
| https://stackoverflow.com//questions/16002539/fuse-error-transport-endpoint-is-not-connected |
linux - How to recursively find and list the latest modified files in a directory with subdirectories and times | Operating system: Linux |
Try this one:
#!/bin/bash
find "$1" -type f -exec stat --format '%Y :%y %n' "{}" \; | sort -nr | cut -d: -f2- | head
Execute it with the path to the directory where it should start scanning recursively (it supports filenames with spaces).
If there are lots of files it may take a while before it returns anything. Performance can be improved if we use xargs instead:
#!/bin/bash
find "$1" -type f -print0 | xargs -0 stat --format '%Y :%y %n' | sort -nr | cut -d: -f2- | head
which is a bit faster.
| https://stackoverflow.com//questions/5566310/how-to-recursively-find-and-list-the-latest-modified-files-in-a-directory-with-s |
How to remove ^[, and all of the escape sequences in a file using linux shell scripting | We want to remove ^[, and all of the escape sequences. |
Are you looking for ansifilter?
Two things you can do: enter the literal escape character, or use a character escape.
Using keyboard entry (type Ctrl-V followed by Esc where Ctrl-vEsc appears below):
sed 's/Ctrl-vEsc//g'
alternatively
sed 's/Ctrl-vCtrl-[//g'
Or you can use character escapes:
sed 's/\x1b//g'
or for all control characters:
sed 's/[\x01-\x1F\x7F]//g' # NOTE: zaps TAB character too!
| https://stackoverflow.com//questions/6534556/how-to-remove-and-all-of-the-escape-sequences-in-a-file-using-linux-shell-sc |
java - Moving from JDK 1.7 to JDK 1.8 on Ubuntu | I am on UBUNTU. JDK version currently installed is: |
This is what I do on debian - I suspect it should work on ubuntu (amend the version as required + adapt the folder where you want to copy the JDK files as you wish, I'm using /opt/jdk):
wget --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u71-b15/jdk-8u71-linux-x64.tar.gz
sudo mkdir /opt/jdk
sudo tar -zxf jdk-8u71-linux-x64.tar.gz -C /opt/jdk/
rm jdk-8u71-linux-x64.tar.gz
Then update-alternatives:
sudo update-alternatives --install /usr/bin/java java /opt/jdk/jdk1.8.0_71/bin/java 1
sudo update-alternatives --install /usr/bin/javac javac /opt/jdk/jdk1.8.0_71/bin/javac 1
Select the number corresponding to the /opt/jdk/jdk1.8.0_71/bin/java when running the following commands:
sudo update-alternatives --config java
sudo update-alternatives --config javac
Finally, verify that the correct version is selected:
java -version
javac -version
| https://stackoverflow.com//questions/30177455/moving-from-jdk-1-7-to-jdk-1-8-on-ubuntu |
c - Is it possible to determine the thread holding a mutex? | Firstly, I use pthread library to write multithreading C programs. Threads always hung by their waited mutexes. When I use the strace utility to find a thread in the FUTEX_WAIT status, I want to know which thread holds that mutex at that time. But I don't know how I could I do it. Are there any utilities that could do that? |
You can use knowledge of the mutex internals to do this. Ordinarily this wouldn't be a very good idea, but it's fine for debugging.
Under Linux with the NPTL implementation of pthreads (which is any modern glibc), you can examine the __data.__owner member of the pthread_mutex_t structure to find out the thread that currently has it locked. This is how to do it after attaching to the process with gdb:
(gdb) thread 2
[Switching to thread 2 (Thread 0xb6d94b90 (LWP 22026))]#0 0xb771f424 in __kernel_vsyscall ()
(gdb) bt
#0 0xb771f424 in __kernel_vsyscall ()
#1 0xb76fec99 in __lll_lock_wait () from /lib/i686/cmov/libpthread.so.0
#2 0xb76fa0c4 in _L_lock_89 () from /lib/i686/cmov/libpthread.so.0
#3 0xb76f99f2 in pthread_mutex_lock () from /lib/i686/cmov/libpthread.so.0
#4 0x080484a6 in thread (x=0x0) at mutex_owner.c:8
#5 0xb76f84c0 in start_thread () from /lib/i686/cmov/libpthread.so.0
#6 0xb767784e in clone () from /lib/i686/cmov/libc.so.6
(gdb) up 4
#4 0x080484a6 in thread (x=0x0) at mutex_owner.c:8
8 pthread_mutex_lock(&mutex);
(gdb) print mutex.__data.__owner
$1 = 22025
(gdb)
(I switch to the hung thread; do a backtrace to find the pthread_mutex_lock() it's stuck on; change stack frames to find out the name of the mutex that it's trying to lock; then print the owner of that mutex). This tells me that the thread with LWP ID 22025 is the culprit.
You can then use thread find 22025 to find out the gdb thread number for that thread and switch to it.
| https://stackoverflow.com//questions/3483094/is-it-possible-to-determine-the-thread-holding-a-mutex |
linux - When grep "\\" XXFile I got "Trailing Backslash" | Now I want to find whether there are lines containing '\' character. I tried grep "\\" XXFile but it hints "Trailing Backslash". But when I tried grep '\\' XXFile it is OK. Could anyone explain why the first case cannot run? Thanks. |
The difference is in how the shell treats the backslashes:
When you write "\\" in double quotes, the shell interprets the backslash escape and ends up passing the string \ to grep. Grep then sees a backslash with no following character, so it emits a "trailing backslash" warning. If you want to use double quotes you need to apply two levels of escaping, one for the shell and one for grep. The result: "\\\\".
When you write '\\' in single quotes, the shell does not do any interpretation, which means grep receives the string \\ with both backslashes intact. Grep interprets this as an escaped backslash, so it searches the file for a literal backslash character.
If that's not clear, we can use echo to see exactly what the shell is doing. echo doesn't do any backslash interpretation itself, so what it prints is what the shell passed to it.
$ echo "\\"
\
$ echo '\\'
\\
| https://stackoverflow.com//questions/20342464/when-grep-xxfile-i-got-trailing-backslash |
linux - shell-init: error retrieving current directory: getcwd -- The usual fixes do not wor | I have a simple script: |
I believe the error is not related to the script at all. The issue is that the directory you are in when you try to run the script no longer exists. For example: you have two terminals; in the first you cd somedir/, in the second you mv somedir/ somewhere_else/, and then you try to run anything in the first terminal - you'll receive this error message.
Please note you'll get this error even if you re-create a directory with the same name, because the new directory will have a different inode index.
At least this was in my case.
| https://stackoverflow.com//questions/29396928/shell-init-error-retrieving-current-directory-getcwd-the-usual-fixes-do-not |
c - Turn a simple socket into an SSL socket | I wrote simple C programs, which are using sockets ('client' and 'server').
(UNIX/Linux usage) |
There are several steps when using OpenSSL. You must have an SSL certificate made which can contain the certificate with the private key be sure to specify the exact location of the certificate (this example has it in the root). There are a lot of good tutorials out there.
Some documentation and tools from HP (see chapter 2)
Command line for OpenSSL
Some includes (the socket headers are added so the snippet below compiles; openssl/applink.c is only needed on Windows builds):
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <strings.h> // bzero()
#include <openssl/applink.c> // Windows only
#include <openssl/bio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>
You will need to initialize OpenSSL:
void InitializeSSL()
{
SSL_load_error_strings();
SSL_library_init();
OpenSSL_add_all_algorithms();
}
void DestroySSL()
{
ERR_free_strings();
EVP_cleanup();
}
void ShutdownSSL()
{
SSL_shutdown(cSSL);
SSL_free(cSSL);
}
Now for the bulk of the functionality. You may want to add a while loop on connections.
int sockfd, newsockfd, ssl_err;
socklen_t clilen;
struct sockaddr_in cli_addr;
SSL_CTX *sslctx;
SSL *cSSL;
InitializeSSL();
sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (sockfd< 0)
{
//Log and Error
return;
}
struct sockaddr_in saiServerAddress;
bzero((char *) &saiServerAddress, sizeof(saiServerAddress));
saiServerAddress.sin_family = AF_INET;
saiServerAddress.sin_addr.s_addr = INADDR_ANY; // or a specific address to bind to
saiServerAddress.sin_port = htons(aPortNumber);
bind(sockfd, (struct sockaddr *) &saiServerAddress, sizeof(saiServerAddress));
listen(sockfd,5);
clilen = sizeof(cli_addr);
newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen);
sslctx = SSL_CTX_new( SSLv23_server_method());
SSL_CTX_set_options(sslctx, SSL_OP_SINGLE_DH_USE);
int use_cert = SSL_CTX_use_certificate_file(sslctx, "/serverCertificate.pem" , SSL_FILETYPE_PEM);
int use_prv = SSL_CTX_use_PrivateKey_file(sslctx, "/serverCertificate.pem", SSL_FILETYPE_PEM);
cSSL = SSL_new(sslctx);
SSL_set_fd(cSSL, newsockfd );
//Here is the SSL Accept portion. Now all reads and writes must use SSL
ssl_err = SSL_accept(cSSL);
if(ssl_err <= 0)
{
//Error occurred, log and close down ssl
ShutdownSSL();
}
You are then able read or write using:
SSL_read(cSSL, (char *)charBuffer, nBytesToRead);
SSL_write(cSSL, "Hi :3\n", 6);
Update
The SSL_CTX_new should be called with the TLS method that best fits your needs in order to support the newer versions of security, instead of SSLv23_server_method(). See:
OpenSSL SSL_CTX_new description
TLS_method(), TLS_server_method(), TLS_client_method().
These are the general-purpose version-flexible SSL/TLS methods. The actual protocol version used will be negotiated to the highest version mutually supported by the client and the server. The supported protocols are SSLv3, TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3.
| https://stackoverflow.com//questions/7698488/turn-a-simple-socket-into-an-ssl-socket |
linux - PGP: Not enough random bytes available. Please do some other work to give the OS a chance to collect more entropy | Setup : Ubuntu Server on Virtual Machine with 6 cores and 3GB of RAM. |
Run the following:
find / > /dev/null
That helped me quickly to complete my key generation.
| https://stackoverflow.com//questions/11708334/pgp-not-enough-random-bytes-available-please-do-some-other-work-to-give-the-os |
linux - How to find files modified in last x minutes (find -mmin does not work as expected) | I'm trying to find files modified in last x minutes, for example in the last hour. Many forums and tutorials on the net suggest to use the find command with the -mmin option, like this: |
I can reproduce your problem if there are no files in the directory that were modified in the last hour. In that case, find . -mmin -60 returns nothing. The command find . -mmin -60 |xargs ls -l, however, returns every file in the directory which is consistent with what happens when ls -l is run without an argument.
To make sure that ls -l is only run when a file is found, try:
find . -mmin -60 -type f -exec ls -l {} +
| https://stackoverflow.com//questions/33407344/how-to-find-files-modified-in-last-x-minutes-find-mmin-does-not-work-as-expect |
linux - Hourly rotation of files using logrotate? | This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered. |
The manpage of logrotate.conf contains an important advice for the hourly option:
Log files are rotated every hour. Note that usually logrotate is configured to be run by cron daily. You have to change this configuration and run logrotate hourly to be able to really rotate logs hourly.
As pointed out by yellow1pl the solution is to copy the file /etc/cron.daily/logrotate into the /etc/cron.hourly/ directory. This works at least for Debian and possibly some Debian derivates.
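For example (assuming the stock Debian location of the cron job):
sudo cp /etc/cron.daily/logrotate /etc/cron.hourly/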
| https://stackoverflow.com//questions/25485047/hourly-rotation-of-files-using-logrotate |
linux - make -j 8 g++: internal compiler error: Killed (program cc1plus) | When I deploy Apache Mesos on Ubuntu12.04, I follow the official document, in step "make -j 8" I'm getting this error in the console: |
Try running (just after the failure) dmesg.
Do you see a line like this?
Out of memory: Kill process 23747 (cc1plus) score 15 or sacrifice child
Killed process 23747, UID 2243, (cc1plus) total-vm:214456kB, anon-rss:178936kB, file-rss:5908kB
Most likely that is your problem. Running make -j 8 starts many processes in parallel, which use more memory. The problem above occurs when your system runs out of memory. In that case, rather than letting the whole system fall over, the operating system scores each process on the system and kills the one with the highest score to free up memory. If the process that is killed is cc1plus, gcc (perhaps incorrectly) interprets this as the process crashing and hence assumes that it must be a compiler bug. But it isn't really: the problem is that the OS killed cc1plus, rather than it crashing.
If this is the case, you are running out of memory. So run perhaps make -j 4 instead. This will mean fewer parallel jobs and will mean the compilation will take longer but hopefully will not exhaust your system memory.
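To confirm memory pressure before retrying, check available memory and search the kernel log (the exact message wording varies by kernel version):
free -h
dmesg | grep -i "out of memory"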
| https://stackoverflow.com//questions/30887143/make-j-8-g-internal-compiler-error-killed-program-cc1plus |
linux - Difference between Real User ID, Effective User ID and Saved User ID | I am already aware of the real user id. It is the unique number for a user in the system. |
The distinction between a real and an effective user id is made because you may have the need to temporarily take another user's identity (most of the time, that would be root, but it could be any user). If you only had one user id, then there would be no way of changing back to your original user id afterwards (other than taking your word for granted, and in case you are root, using root's privileges to change to any user).
So, the real user id is who you really are (the one who owns the process), and the effective user id is what the operating system looks at to make a decision whether or not you are allowed to do something (most of the time, there are some exceptions).
When you log in, the login shell sets both the real and effective user id to the same value (your real user id) as supplied by the password file.
Now, it also happens that you execute a setuid program, and besides running as another user (e.g. root) the setuid program is also supposed to do something on your behalf. How does this work?
After executing the setuid program, it will have your real id (since you're the process owner) and the effective user id of the file owner (for example root) since it is setuid.
The program does whatever magic it needs to do with superuser privileges and then wants to do something on your behalf. That means, attempting to do something that you shouldn't be able to do should fail. How does it do that? Well, obviously by changing its effective user id to the real user id!
Now that setuid program has no way of switching back since all the kernel knows is your id and... your id. Bang, you're dead.
This is what the saved set-user id is for.
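You can observe all three ids for running processes from the shell; this sketch assumes the Linux procps version of ps, which supports the ruid, euid and suid format keywords:
ps -eo pid,ruid,euid,suid,comm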
| https://stackoverflow.com//questions/32455684/difference-between-real-user-id-effective-user-id-and-saved-user-id |
Best way to find os name and version in Unix/Linux platform | I need to find the OS name and version on Unix/Linux platform. For this I tried following: |
This works fine in all Linux environments.
#!/bin/sh
cat /etc/*-release
In Ubuntu:
$ cat /etc/*-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=10.04
DISTRIB_CODENAME=lucid
DISTRIB_DESCRIPTION="Ubuntu 10.04.4 LTS"
or 12.04:
$ cat /etc/*-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.4 LTS"
NAME="Ubuntu"
VERSION="12.04.4 LTS, Precise Pangolin"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu precise (12.04.4 LTS)"
VERSION_ID="12.04"
In RHEL:
$ cat /etc/*-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)
Red Hat Enterprise Linux Server release 6.5 (Santiago)
Or Use this Script:
#!/bin/sh
# Detects which OS and if it is Linux then it will detect which Linux
# Distribution.
OS=`uname -s`
REV=`uname -r`
MACH=`uname -m`
GetVersionFromFile()
{
VERSION=`cat $1 | tr "\n" ' ' | sed s/.*VERSION.*=\ // `
}
if [ "${OS}" = "SunOS" ] ; then
OS=Solaris
ARCH=`uname -p`
OSSTR="${OS} ${REV}(${ARCH} `uname -v`)"
elif [ "${OS}" = "AIX" ] ; then
OSSTR="${OS} `oslevel` (`oslevel -r`)"
elif [ "${OS}" = "Linux" ] ; then
KERNEL=`uname -r`
if [ -f /etc/redhat-release ] ; then
DIST='RedHat'
PSUEDONAME=`cat /etc/redhat-release | sed s/.*\(// | sed s/\)//`
REV=`cat /etc/redhat-release | sed s/.*release\ // | sed s/\ .*//`
elif [ -f /etc/SuSE-release ] ; then
DIST=`cat /etc/SuSE-release | tr "\n" ' '| sed s/VERSION.*//`
REV=`cat /etc/SuSE-release | tr "\n" ' ' | sed s/.*=\ //`
elif [ -f /etc/mandrake-release ] ; then
DIST='Mandrake'
PSUEDONAME=`cat /etc/mandrake-release | sed s/.*\(// | sed s/\)//`
REV=`cat /etc/mandrake-release | sed s/.*release\ // | sed s/\ .*//`
elif [ -f /etc/debian_version ] ; then
DIST="Debian `cat /etc/debian_version`"
REV=""
fi
if [ -f /etc/UnitedLinux-release ] ; then
DIST="${DIST}[`cat /etc/UnitedLinux-release | tr "\n" ' ' | sed s/VERSION.*//`]"
fi
OSSTR="${OS} ${DIST} ${REV}(${PSUEDONAME} ${KERNEL} ${MACH})"
fi
echo ${OSSTR}
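On modern distributions that ship systemd, /etc/os-release is the standard place to look, so a much shorter sketch is possible (note that VERSION_ID may be absent on rolling-release distros):
. /etc/os-release
echo "$NAME $VERSION_ID"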
| https://stackoverflow.com//questions/26988262/best-way-to-find-os-name-and-version-in-unix-linux-platform |
How to append one file to another in Linux from the shell? | I have two files: file1 and file2. How do I append the contents of file2 to file1 so that contents of file1 persist the process? |
Use bash builtin redirection (tldp):
cat file2 >> file1
| https://stackoverflow.com//questions/4969641/how-to-append-one-file-to-another-in-linux-from-the-shell |
linux - How to have the cp command create any necessary folders for copying a file to a destination | When copying a file using cp to a folder that may or may not exist, how do I get cp to create the folder if necessary? Here is what I have tried:
|
To expand upon Christian's answer, the only reliable way to do this would be to combine mkdir and cp:
mkdir -p /foo/bar && cp myfile "$_"
As an aside, when you only need to create a single directory in an existing hierarchy, rsync can do it in one operation. I'm quite a fan of rsync as a much more versatile cp replacement, in fact:
rsync -a myfile /foo/bar/ # works if /foo exists but /foo/bar doesn't. bar is created.
| https://stackoverflow.com//questions/947954/how-to-have-the-cp-command-create-any-necessary-folders-for-copying-a-file-to-a |
linux - How to search and replace using grep | I need to recursively search for a specified string within all files and subdirectories within a directory and replace this string with another string. |
Another option is to use find and then pass it through sed.
find /path/to/files -type f -exec sed -i 's/oldstring/new string/g' {} \;
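If you want a safety net, GNU sed can keep a backup copy of each modified file when you give -i a suffix:
find /path/to/files -type f -exec sed -i.bak 's/oldstring/new string/g' {} \;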
| https://stackoverflow.com//questions/15402770/how-to-search-and-replace-using-grep |
java - adb devices => no permissions (user in plugdev group; are your udev rules wrong?) | I am getting following error log if I connect my android phone with Android Oreo OS to Linux PC |
Check device vendor id and product id:
$ lsusb
Bus 001 Device 002: ID 8087:8000 Intel Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 002 Device 078: ID 138a:0011 Validity Sensors, Inc. VFS5011 Fingerprint Reader
Bus 002 Device 003: ID 8087:07dc Intel Corp.
Bus 002 Device 002: ID 5986:0652 Acer, Inc
Bus 002 Device 081: ID 22b8:2e81 Motorola PCS
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Here my android device is Motorola PCS. So my vid=22b8 and pid=2e81.
Now create a udev rule:
$ sudo vi /etc/udev/rules.d/51-android.rules
SUBSYSTEM=="usb", ATTR{idVendor}=="22b8", ATTR{idProduct}=="2e81", MODE="0666", GROUP="plugdev"
Now the device is good to be detected once udev rule is reloaded. So, let's do it:
$ sudo udevadm control --reload-rules
After this, again check if your device is detected by adb:
$ adb devices
List of devices attached
ZF6222Q9D9 device
So, you are done.
If it still doesn't work, unplug/replug the device.
If it still doesn't work, restart your OS.
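If a reboot seems heavy-handed, restarting the adb server alone is often enough (behavior can vary with your platform-tools version):
adb kill-server
adb start-server
adb devices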
| https://stackoverflow.com//questions/53887322/adb-devices-no-permissions-user-in-plugdev-group-are-your-udev-rules-wrong |
linux - ssh script returns 255 error | In my code I have the following to run a remote script. |
This usually happens when the remote host is down/unavailable; or the remote machine doesn't have ssh installed; or a firewall doesn't allow a connection to be established to the remote host.
ssh returns 255 when an error occurred or 255 is returned by the remote script:
EXIT STATUS
ssh exits with the exit status of the remote command or
with 255 if an error occurred.
Usually you would see an error message similar to:
ssh: connect to host host.domain.com port 22: No route to host
Or
ssh: connect to host HOSTNAME port 22: Connection refused
Check-list:
What happens if you run the ssh command directly from the command line?
Are you able to ping that machine?
Does the remote have ssh installed?
If installed, then is the ssh service running?
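To see exactly where the connection fails, run ssh with maximum verbosity (user and host are placeholders here):
ssh -vvv user@host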
| https://stackoverflow.com//questions/14885748/ssh-script-returns-255-error |
c++ - How to Add Linux Executable Files to .gitignore? | How do you add Linux executable files to .gitignore without giving them an explicit extension and without placing them in a specific or /bin directory? Most are named the same as the C file from which they were compiled without the .c extension. |
Can you ignore all, but source code files?
For example:
*
!*.c
!Makefile
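Note that git does not descend into ignored directories, so with the pattern above any source files in subdirectories would stay ignored. A common variant that re-includes directories as well:
*
!*/
!*.c
!Makefile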
| https://stackoverflow.com//questions/8237645/how-to-add-linux-executable-files-to-gitignore |
linux - counting number of directories in a specific directory | How to count the number of folders in a specific directory. I am using the following command, but it always provides an extra one. |
find is also printing the directory itself:
$ find .vim/ -maxdepth 1 -type d
.vim/
.vim/indent
.vim/colors
.vim/doc
.vim/after
.vim/autoload
.vim/compiler
.vim/plugin
.vim/syntax
.vim/ftplugin
.vim/bundle
.vim/ftdetect
You can instead test the directory's children and do not descend into them at all:
$ find .vim/* -maxdepth 0 -type d
.vim/after
.vim/autoload
.vim/bundle
.vim/colors
.vim/compiler
.vim/doc
.vim/ftdetect
.vim/ftplugin
.vim/indent
.vim/plugin
.vim/syntax
$ find .vim/* -maxdepth 0 -type d | wc -l
11
$ find .vim/ -maxdepth 1 -type d | wc -l
12
You can also use ls:
$ ls -l .vim | grep -c ^d
11
$ ls -l .vim
total 52
drwxrwxr-x 3 anossovp anossovp 4096 Aug 29 2012 after
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 autoload
drwxrwxr-x 13 anossovp anossovp 4096 Aug 29 2012 bundle
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 colors
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 compiler
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 doc
-rw-rw-r-- 1 anossovp anossovp 48 Aug 29 2012 filetype.vim
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 ftdetect
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 ftplugin
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 indent
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 plugin
-rw-rw-r-- 1 anossovp anossovp 2505 Aug 29 2012 README.rst
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 syntax
$ ls -l .vim | grep ^d
drwxrwxr-x 3 anossovp anossovp 4096 Aug 29 2012 after
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 autoload
drwxrwxr-x 13 anossovp anossovp 4096 Aug 29 2012 bundle
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 colors
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 compiler
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 doc
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 ftdetect
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 ftplugin
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 indent
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 plugin
drwxrwxr-x 2 anossovp anossovp 4096 Aug 29 2012 syntax
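A more compact variant, assuming you are inside the directory and no names contain newlines (note it also counts symlinks to directories):
ls -d */ | wc -l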
| https://stackoverflow.com//questions/17648033/counting-number-of-directories-in-a-specific-directory |
linux - How to write stdout to file with colors? | A lot of times (not always) the stdout is displayed in colors. Normally I keep every output log in a different file too. Naturally in the file, the colors are not displayed anymore. |
Since many programs will only output color sequences if their stdout is a terminal, a general solution to this problem requires tricking them into believing that the pipe they write to is a terminal. This is possible with the script command from bsdutils:
script -q -c "vagrant up" filename.txt
This will write the output from vagrant up to filename.txt (and the terminal). If echoing is not desirable,
script -q -c "vagrant up" filename.txt > /dev/null
will write it only to the file.
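If you later want a plain-text copy of such a log, you can strip the ANSI color sequences with GNU sed; this is a rough sketch, since some programs emit escape sequences this pattern does not cover:
sed 's/\x1b\[[0-9;]*m//g' filename.txt > filename_plain.txt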
| https://stackoverflow.com//questions/27397865/how-to-write-stdout-to-file-with-colors |
performance - Threads vs Processes in Linux | Want to improve this question? Update the question so it can be answered with facts and citations by editing this post. |
Linux uses a 1-1 threading model, with (to the kernel) no distinction between processes and threads -- everything is simply a runnable task. *
On Linux, the system call clone clones a task, with a configurable level of sharing, among which are:
CLONE_FILES: share the same file descriptor table (instead of creating a copy)
CLONE_PARENT: don't set up a parent-child relationship between the new task and the old (otherwise, child's getppid() = parent's getpid())
CLONE_VM: share the same memory space (instead of creating a COW copy)
fork() calls clone(least sharing) and pthread_create() calls clone(most sharing). **
forking costs a tiny bit more than pthread_createing because of copying tables and creating COW mappings for memory, but the Linux kernel developers have tried (and succeeded) at minimizing those costs.
Switching between tasks, if they share the same memory space and various tables, will be a tiny bit cheaper than if they aren't shared, because the data may already be loaded in cache. However, switching tasks is still very fast even if nothing is shared -- this is something else that Linux kernel developers try to ensure (and succeed at ensuring).
In fact, if you are on a multi-processor system, not sharing may actually be beneficial to performance: if each task is running on a different processor, synchronizing shared memory is expensive.
* Simplified. CLONE_THREAD causes signals delivery to be shared (which needs CLONE_SIGHAND, which shares the signal handler table).
** Simplified. There exist both SYS_fork and SYS_clone syscalls, but in the kernel, the sys_fork and sys_clone are both very thin wrappers around the same do_fork function, which itself is a thin wrapper around copy_process. Yes, the terms process, thread, and task are used rather interchangeably in the Linux kernel...
| https://stackoverflow.com//questions/807506/threads-vs-processes-in-linux |
web services - HTTP POST and GET using cURL in Linux | I have a server application written in ASP.NET on Windows that provides a web service. |
*nix provides a nice little command which makes our lives a lot easier.
GET:
with JSON:
curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET http://hostname/resource
with XML:
curl -H "Accept: application/xml" -H "Content-Type: application/xml" -X GET http://hostname/resource
POST:
For posting data:
curl --data "param1=value1&param2=value2" http://hostname/resource
For file upload:
curl --form "[email protected]" http://hostname/resource
RESTful HTTP Post:
curl -X POST -d @filename http://hostname/resource
For logging into a site (auth):
curl -d "username=admin&password=admin&submit=Login" --dump-header headers http://localhost/Login
curl -L -b headers http://localhost/
Pretty-printing the curl results:
For JSON:
If you use npm and nodejs, you can install json package by running this command:
npm install -g json
Usage:
curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET http://hostname/resource | json
If you use pip and python, you can install pjson package by running this command:
pip install pjson
Usage:
curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET http://hostname/resource | pjson
If you use Python 2.6+, json tool is bundled within.
Usage:
curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET http://hostname/resource | python -m json.tool
If you use gem and ruby, you can install colorful_json package by running this command:
gem install colorful_json
Usage:
curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET http://hostname/resource | cjson
If you use apt-get (the package manager of Debian-based distros), you can install the yajl-tools package by running this command:
sudo apt-get install yajl-tools
Usage:
curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET http://hostname/resource | json_reformat
For XML:
If you use *nix with a Debian/GNOME environment, install libxml2-utils:
sudo apt-get install libxml2-utils
Usage:
curl -H "Accept: application/xml" -H "Content-Type: application/xml" -X GET http://hostname/resource | xmllint --format -
or install tidy:
sudo apt-get install tidy
Usage:
curl -H "Accept: application/xml" -H "Content-Type: application/xml" -X GET http://hostname/resource | tidy -xml -i -
Saving the curl response to a file
curl http://hostname/resource >> /path/to/your/file
or
curl http://hostname/resource -o /path/to/your/file
For detailed description of the curl command, hit:
man curl
For details about options/switches of the curl command, hit:
curl -h
| https://stackoverflow.com//questions/14978411/http-post-and-get-using-curl-in-linux |
c - Is it possible to determine the thread holding a mutex? | Firstly, I use pthread library to write multithreading C programs. Threads always hung by their waited mutexes. When I use the strace utility to find a thread in the FUTEX_WAIT status, I want to know which thread holds that mutex at that time. But I don't know how I could I do it. Are there any utilities that could do that? |
You can use knowledge of the mutex internals to do this. Ordinarily this wouldn't be a very good idea, but it's fine for debugging.
Under Linux with the NPTL implementation of pthreads (which is any modern glibc), you can examine the __data.__owner member of the pthread_mutex_t structure to find out the thread that currently has it locked. This is how to do it after attaching to the process with gdb:
(gdb) thread 2
[Switching to thread 2 (Thread 0xb6d94b90 (LWP 22026))]#0 0xb771f424 in __kernel_vsyscall ()
(gdb) bt
#0 0xb771f424 in __kernel_vsyscall ()
#1 0xb76fec99 in __lll_lock_wait () from /lib/i686/cmov/libpthread.so.0
#2 0xb76fa0c4 in _L_lock_89 () from /lib/i686/cmov/libpthread.so.0
#3 0xb76f99f2 in pthread_mutex_lock () from /lib/i686/cmov/libpthread.so.0
#4 0x080484a6 in thread (x=0x0) at mutex_owner.c:8
#5 0xb76f84c0 in start_thread () from /lib/i686/cmov/libpthread.so.0
#6 0xb767784e in clone () from /lib/i686/cmov/libc.so.6
(gdb) up 4
#4 0x080484a6 in thread (x=0x0) at mutex_owner.c:8
8 pthread_mutex_lock(&mutex);
(gdb) print mutex.__data.__owner
$1 = 22025
(gdb)
(I switch to the hung thread; do a backtrace to find the pthread_mutex_lock() it's stuck on; change stack frames to find out the name of the mutex that it's trying to lock; then print the owner of that mutex). This tells me that the thread with LWP ID 22025 is the culprit.
You can then use thread find 22025 to find out the gdb thread number for that thread and switch to it.
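The same inspection can be scripted non-interactively; replace PID with the id of the hung process:
gdb -p PID -batch -ex 'thread apply all bt'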
| https://stackoverflow.com//questions/3483094/is-it-possible-to-determine-the-thread-holding-a-mutex |
Comparing Unix/Linux IPC | Lots of IPCs are offered by Unix/Linux: pipes, sockets, shared memory, dbus, message-queues... |
Unix IPC
Here are the big seven:
Pipe
Useful only among processes related as parent/child. Call pipe(2) and fork(2). Unidirectional.
FIFO, or named pipe
Two unrelated processes can use FIFO unlike plain pipe. Call mkfifo(3). Unidirectional.
Socket and Unix Domain Socket
Bidirectional. Meant for network communication, but can be used locally too. Can be used with different protocols. There's no message boundary for TCP. Call socket(2).
Message Queue
OS maintains discrete message. See sys/msg.h.
Signal
Signal sends an integer to another process. Doesn't mesh well with multi-threads. Call kill(2).
Semaphore
A synchronization mechanism for multi processes or threads, similar to a queue of people waiting for bathroom. See sys/sem.h.
Shared memory
Do your own concurrency control. Call shmget(2).
Message Boundary issue
One determining factor when choosing one method over the other is the message boundary issue. You may expect "messages" to be discrete from each other, but it's not for byte streams like TCP or Pipe.
Consider a pair of echo client and server. The client sends string, the server receives it and sends it right back. Suppose the client sends "Hello", "Hello", and "How about an answer?".
With byte stream protocols, the server can receive as "Hell", "oHelloHow", and " about an answer?"; or more realistically "HelloHelloHow about an answer?". The server has no clue where the message boundary is.
An age old trick is to limit the message length to CHAR_MAX or UINT_MAX and agree to send the message length first in char or uint. So, if you are at the receiving side, you have to read the message length first. This also implies that only one thread should be doing the message reading at a time.
With discrete protocols like UDP or message queues, you don't have to worry about this issue, but programmatically byte streams are easier to deal with because they behave like files and stdin/out.
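A quick shell demonstration of a FIFO between two unrelated processes (each side blocks until the other end opens the pipe):
mkfifo /tmp/myfifo
echo hello > /tmp/myfifo &
cat /tmp/myfifo
rm /tmp/myfifo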
| https://stackoverflow.com//questions/404604/comparing-unix-linux-ipc |
python - How to activate virtualenv in Linux? | I have been searching and tried various alternatives without success and spent several days on it now - driving me mad. |
Here is my workflow after creating a folder and cd'ing into it:
$ virtualenv venv --distribute
New python executable in venv/bin/python
Installing distribute.........done.
Installing pip................done.
$ source venv/bin/activate
(venv)$ python
| https://stackoverflow.com//questions/14604699/how-to-activate-virtualenv-in-linux |
node.js - MongoError: connect ECONNREFUSED 127.0.0.1:27017 | I'm using NodeJS wih MongoDB using mongodb package. When I run mongod command it works fine and gives "waiting for connection on port 27017". So, mongod seems to be working. But MongoClient does not work and gives error when I run node index.js command- |
This probably happened because the MongoDB service isn't started. Follow the steps below to start it:
Go to Control Panel and click on Administrative Tools.
Double click on Services. A new window opens up.
Search MongoDB.exe. Right click on it and select Start.
The server will start. Now execute npm start again and the code might work this time.
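Those steps are Windows-specific; on Linux the equivalent is usually (assuming a systemd setup where the service unit is named mongod):
sudo systemctl start mongod
sudo systemctl status mongod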
| https://stackoverflow.com//questions/46523321/mongoerror-connect-econnrefused-127-0-0-127017 |
linux - How can I open some ports on Ubuntu? | I know a little about Linux. Today I created a VPN server on my Ubuntu installation according to Set up a simple IPsec/L2TP VPN server for Ubuntu, Arch Linux and Debian. |
Ubuntu these days comes with UFW - Uncomplicated Firewall. UFW is an easy-to-use method of handling iptables rules.
Try using this command to allow a port:
sudo ufw allow 1701
To test connectivity, you could try shutting down the VPN software (freeing up the ports) and using netcat to listen, like this:
nc -l 1701
Then use telnet from your Windows host and see what shows up on your Ubuntu terminal. This can be repeated for each port you'd like to test.
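You can verify that the rule was added and that the firewall is active with:
sudo ufw status verbose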
| https://stackoverflow.com//questions/30251889/how-can-i-open-some-ports-on-ubuntu |
linux - How to convert Windows end of line in Unix end of line (CR/LF to LF) | I'm a Java developer and I'm using Ubuntu to develop. The project was created in Windows with Eclipse and it's using the Windows-1252 encoding. |
There should be a program called dos2unix that will fix line endings for you. If it's not already on your Linux box, it should be available via the package manager.
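Typical usage converts the file in place:
dos2unix filename
If the package is unavailable, a sed one-liner performs the same CRLF-to-LF conversion:
sed -i 's/\r$//' filename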
| https://stackoverflow.com//questions/3891076/how-to-convert-windows-end-of-line-in-unix-end-of-line-cr-lf-to-lf |
c - How to capture Control+D signal? | I want to capture the Ctrl+D signal in my program and write a signal handler for it.
How can I do that?
I am working on C and using a Linux system. |
As others have already said, to handle Control+D, handle "end of file"s.
Control+D is a piece of communication between the user and the pseudo-file that you see as stdin. It does not mean specifically "end of file", but more generally "flush the input I typed so far". Flushing means that any read() call on stdin in your program returns with the length of the input typed since the last flush. If the line is nonempty, the input becomes available to your program although the user did not type "return" yet. If the line is empty, then read() returns with zero, and that is interpreted as "end of file".
So when using Control+D to end a program, it only works at the beginning of a line, or if you do it twice (first time to flush, second time for read() to return zero).
Try it:
$ cat
foo
(type Control-D once)
foofoo (read has returned "foo")
(type Control-D again)
$
| https://stackoverflow.com//questions/1516122/how-to-capture-controld-signal |
python - What is different between makedirs and mkdir of os? | I am confused to use about these two osmethods to create the new directory. |
makedirs() creates all the intermediate directories if they don't exist (just like mkdir -p in bash).
mkdir() can create a single sub-directory, and will throw an exception if intermediate directories that don't exist are specified.
Either can be used to create a single 'leaf' directory (dirA):
os.mkdir('dirA')
os.makedirs('dirA')
But makedirs must be used to create 'branches':
os.makedirs('dirA/dirB') will work [the entire structure is created]
mkdir can work here if dirA already exists, but if it doesn't an error will be thrown.
Note that unlike mkdir -p in bash, either will fail if the leaf already exists, unless (in Python 3.2+) you pass exist_ok=True to os.makedirs.
| https://stackoverflow.com//questions/13819496/what-is-different-between-makedirs-and-mkdir-of-os |
linux - Why doesn't a shell get variables exported by a script run in a subshell? | I have two scripts 1.sh and 2.sh. |
If you are executing your files like sh 1.sh or ./1.sh, then you are executing them in a sub-shell.
If you want the changes to be made in your current shell, you could do:
. 1.sh
# OR
source 1.sh
Please consider going through the reference-documentation.
"When a script is run using source [or .] it runs within the existing shell, any variables created or modified by the script will remain available after the script completes. In contrast if the script is run just as filename, then a separate subshell (with a completely separate set of variables) would be spawned to run the script."
| https://stackoverflow.com//questions/10781824/why-doesnt-a-shell-get-variables-exported-by-a-script-run-in-a-subshell |
ios - Starting iPhone app development in Linux? | We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations. |
To provide a differing response, I'm running OS X and Xcode on a virtualised (VMware) machine on Linux. CPU is a Core2Quad (Q8800), and it is perfectly fast. I found a prebuilt VM online (I'll leave it to you to find)
Xcode/iPhone development works perfectly, as does debugging via USB to the phone itself.
It actually surprised me a lot - but I've had no issues at all.
| https://stackoverflow.com//questions/276907/starting-iphone-app-development-in-linux |
linux - Cannot connect to the Docker daemon at unix:/var/run/docker.sock. Is the docker daemon running? | I have applied every solution available on internet but still I cannot run Docker. |
You can try out this:
systemctl start docker
It worked fine for me.
P.S.: afterwards, if there are commands that you can't run without sudo, try this:
gpasswd -a $USER docker
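Note that gpasswd may itself require root, and the group change only takes effect in a new login session; a commonly used equivalent is:
sudo usermod -aG docker $USER
newgrp docker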
| https://stackoverflow.com//questions/44678725/cannot-connect-to-the-docker-daemon-at-unix-var-run-docker-sock-is-the-docker |
linux - How to check if X server is running? | Is there any way to find out if the current session user is running an Xserver (under Linux) ? |
I often need to run an X command on a server that is running many X servers, so the ps based answers do not work. Naturally, $DISPLAY has to be set appropriately. To check that that is valid, use xset q in some fragment like:
if ! xset q &>/dev/null; then
echo "No X server at \$DISPLAY [$DISPLAY]" >&2
exit 1
fi
EDIT
Some people find that xset can pause for an annoying amount of time before deciding that $DISPLAY is not pointing at a valid X server (often when tcp/ip is the transport). The fix of course is to use timeout to keep the pause amenable, 1 second say.
if ! timeout 1s xset q &>/dev/null; then
⋮
| https://stackoverflow.com//questions/637005/how-to-check-if-x-server-is-running |
linux - Recursively look for files with a specific extension | I'm trying to find all files with a specific extension in a directory and its subdirectories with my bash (Latest Ubuntu LTS Release). |
find "$directory" -type f -name "*.in"
is a bit shorter than that whole thing (and safer - deals with whitespace in filenames and directory names).
Your script is probably failing for entries that don't have a . in their name, making $extension empty.
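If the extension may appear in any case (e.g. .IN), use the case-insensitive test instead:
find "$directory" -type f -iname '*.in'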
| https://stackoverflow.com//questions/5927369/recursively-look-for-files-with-a-specific-extension |
gzip - Extract and delete all .gz in a directory- Linux | I have a directory. It has about 500K .gz files. |
This should do it:
gunzip *.gz
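With around 500K files the glob may exceed the kernel's argument-length limit and fail with "Argument list too long"; in that case, let find invoke gunzip in batches. Since gunzip removes each .gz file after extracting it, this also covers the delete part:
find . -maxdepth 1 -name '*.gz' -exec gunzip {} +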
| https://stackoverflow.com//questions/16038087/extract-and-delete-all-gz-in-a-directory-linux |
linux - How to run a process with a timeout in Bash? | Possible Duplicate:
Bash script that kills a child process after a given timeout |
Use the timeout command:
timeout 15s command
Note: on some systems you need to install coreutils, on others it's missing or has different command line arguments. See an alternate solution posted by @ArjunShankar . Based on it you can encapsulate that boiler-plate code and create your own portable timeout script or small C app that does the same thing.
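GNU timeout can also follow up with SIGKILL if the command ignores the first signal, e.g. kill hard 5 seconds after the initial TERM:
timeout -k 5s 15s command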
| https://stackoverflow.com//questions/10224939/how-to-run-a-process-with-a-timeout-in-bash |
linux - How set multiple env variables for a bash command | I am supposed to set the EC2_HOME and JAVA_HOME variables
before running a command (ec2-describe-regions) |
You can one-time set vars for a single command by putting them on the command line before the command:
$ EC2_HOME=/path/to/dir JAVA_HOME=/other/path ec2-describe-regions
Alternately, you can export them in the environment, in which case they'll be set for all future commands:
$ export EC2_HOME=/path/to/dir
$ export JAVA_HOME=/other/path
$ ec2-describe-regions
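An equivalent using the env utility, which some people find more explicit:
env EC2_HOME=/path/to/dir JAVA_HOME=/other/path ec2-describe-regions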
| https://stackoverflow.com//questions/26189662/how-set-multiple-env-variables-for-a-bash-command |
Can WPF applications be run in Linux or Mac with .Net Core 3? | Microsoft announced .NET Core 3 comes with WPF and Windows Forms. So can I create a desktop application for Linux or Mac using .NET Core 3? |
No, they have clearly stated that these are windows only. In one of the .NET Core 3.0 discussions, they have also clarified that they do not intend to make these features cross-platform in the future since the whole concept is derived from windows specific features. They talked about thinking of a whole new idea for cross-platform applications, which is not easy.
Source: https://youtu.be/HNLZQeu05BY
Update
The newly announced .NET 5 now aims in avoiding all this confusion by no longer calling it ".NET Core".
Update 2
With blazor client-side (releases on may, 2020), there is a new experimental project for cross-platform apps using webview that is in the works.
Source:
https://blog.stevensanderson.com/2019/11/01/exploring-lighter-alternatives-to-electron-for-hosting-a-blazor-desktop-app/
| https://stackoverflow.com//questions/53954047/can-wpf-applications-be-run-in-linux-or-mac-with-net-core-3 |
linux - Terminal Multiplexer for Microsoft Windows - Installers for GNU Screen or tmux | This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered. |
Look. This is way old, but on the off chance that someone from Google finds this, absolutely the best solution to this - (and it is AWESOME) - is to use ConEmu (or a package that includes and is built on top of ConEmu called cmder) and then either use plink or putty itself to connect to a specific machine, or, even better, set up a development environment as a local VM using Vagrant.
This is the only way I can ever see myself developing from a Windows box again.
I am confident enough to say that every other answer - while not necessarily bad answers - offer garbage solutions compared to this.
Update: As Of 1/8/2020 not all other solutions are garbage - Windows Terminal is getting there and WSL exists.
| https://stackoverflow.com//questions/5473384/terminal-multiplexer-for-microsoft-windows-installers-for-gnu-screen-or-tmux |
linux - How to resume interrupted download automatically in curl? | I'm working with curl in Linux. I'm downloading a part of a file in ftp server (using the -r option), but my connection is not good, it always interrupts. I want to write a script which resume download when I'm connected again. |
curl -L -O your_url
This will download the file.
Now let's say your connection is interrupted;
curl -L -O -C - your_url
This will continue downloading from the last byte downloaded
From the manpage:
Use "-C -" to tell curl to automatically find out where/how to resume the transfer. It then uses the given output/input files to figure that out.
| https://stackoverflow.com//questions/19728930/how-to-resume-interrupted-download-automatically-in-curl |
linux - tar: add all files and directories in current directory INCLUDING .svn and so on | I try to tar.gz a directory and use |
Don't create the tar file in the directory you are packing up:
tar -czf /tmp/workspace.tar.gz .
does the trick, except it will extract the files all over the current directory when you unpack. Better to do:
cd ..
tar -czf workspace.tar.gz workspace
or, if you don't know the name of the directory you were in:
base=$(basename $PWD)
cd ..
tar -czf $base.tar.gz $base
(This assumes that you didn't follow symlinks to get to where you are and that the shell doesn't try to second guess you by jumping backwards through a symlink - bash is not trustworthy in this respect. If you have to worry about that, use cd -P .. to do a physical change directory. Stupid that it is not the default behaviour in my view - confusing, at least, for those for whom cd .. never had any alternative meaning.)
One comment in the discussion says:
I [...] need to exclude the top directory and I [...] need to place the tar in the base directory.
The first part of the comment does not make much sense - if the tar file contains the current directory, it won't be created when you extract files from that archive because, by definition, the current directory already exists (except in very weird circumstances).
The second part of the comment can be dealt with in one of two ways:
Either: create the file somewhere else - /tmp is one possible location - and then move it back to the original location after it is complete.
Or: if you are using GNU Tar, use the --exclude=workspace.tar.gz option. The string after the = is a pattern - the example is the simplest pattern - an exact match. You might need to specify --exclude=./workspace.tar.gz if you are working in the current directory contrary to recommendations; you might need to specify --exclude=workspace/workspace.tar.gz if you are working up one level as suggested. If you have multiple tar files to exclude, use '*', as in --exclude='./*.gz' (quoted so the shell does not expand the pattern).
| https://stackoverflow.com//questions/3651791/tar-add-all-files-and-directories-in-current-directory-including-svn-and-so-on |
linux - what does "bash:no job control in this shell” mean? | I think it's related to the parent process creating new subprocess and does not have tty. Can anyone explain the detail under the hood? i.e. the related working model of bash, process creation, etc? |
You may need to enable job control:
#! /bin/bash
set -m
| https://stackoverflow.com//questions/11821378/what-does-bashno-job-control-in-this-shell-mean |
user interface - Graphical DIFF programs for linux | I really like Araxis Merge for a graphical DIFF program for the PC. I have no idea what's available for linux, though. We're running SUSE linux on our z800 mainframe.
I'd be most grateful if I could get a few pointers to what programs everyone else likes. |
I know of two graphical diff programs: Meld and KDiff3. I haven't used KDiff3, but Meld works well for me.
It seems that both are in the standard package repositories for openSUSE 11.0
| https://stackoverflow.com//questions/112932/graphical-diff-programs-for-linux |
linux - Curl Command to Repeat URL Request | Whats the syntax for a linux command that hits a URL repeatedly, x number of times. I don't need to do anything with the data, I just need to replicate hitting refresh 20 times in a browser. |
You could use URL sequence substitution with a dummy query string (if you want to use CURL and save a few keystrokes):
curl http://www.myurl.com/?[1-20]
If you have other query strings in your URL, assign the sequence to a throwaway variable:
curl "http://www.myurl.com/?myVar=111&fakeVar=[1-20]"
(quote the URL so the shell does not treat & as a background operator)
Check out the URL section on the man page: https://curl.haxx.se/docs/manpage.html
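If you prefer an explicit loop that works with any URL:
for i in $(seq 1 20); do curl -s -o /dev/null http://www.myurl.com/; done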
| https://stackoverflow.com//questions/12409519/curl-command-to-repeat-url-request |
Replace whitespaces with tabs in linux | How do I replace whitespaces with tabs in linux in a given text file? |
Use the unexpand(1) program
UNEXPAND(1) User Commands UNEXPAND(1)
NAME
unexpand - convert spaces to tabs
SYNOPSIS
unexpand [OPTION]... [FILE]...
DESCRIPTION
Convert blanks in each FILE to tabs, writing to standard output. With
no FILE, or when FILE is -, read standard input.
Mandatory arguments to long options are mandatory for short options
too.
-a, --all
convert all blanks, instead of just initial blanks
--first-only
convert only leading sequences of blanks (overrides -a)
-t, --tabs=N
have tabs N characters apart instead of 8 (enables -a)
-t, --tabs=LIST
use comma separated LIST of tab positions (enables -a)
--help display this help and exit
--version
output version information and exit
. . .
STANDARDS
The expand and unexpand utilities conform to IEEE Std 1003.1-2001
(``POSIX.1'').
| https://stackoverflow.com//questions/1424126/replace-whitespaces-with-tabs-in-linux |
linux - Can you attach Amazon EBS to multiple instances? | We currently use multiple webservers accessing one mysql server and fileserver. Looking at moving to the cloud, can I use this same setup and attach the EBS to multiple machine instances or what's another solution? |
UPDATE (April 2015): For this use-case, you should start looking at the new Amazon Elastic File System (EFS), which is designed to be multiply attached in exactly the way you are wanting. The key difference between EFS and EBS is that they provide different abstractions: EFS exposes the NFSv4 protocol, whereas EBS provides raw block IO access.
Below you'll find my original explanation as to why it's not possible to safely mount a raw block device on multiple machines.
ORIGINAL POST (2011):
Even if you were able to get an EBS volume attached to more than one instance, it would be a _REALLY_BAD_IDEA_. To quote Kekoa, "this is like using a hard drive in two computers at once"
Why is this a bad idea? ...
The reason you can't attach a volume to more than one instance is that EBS provides a "block storage" abstraction upon which customers run a filesystem like ext2/ext3/etc. Most of these filesystems (eg, ext2/3, FAT, NTFS, etc) are written assuming they have exclusive access to the block device. Two instances accessing the same filesystem would almost certainly end in tears and data corruption.
In other words, double mounting an EBS volume would only work if you were running a cluster filesystem that is designed to share a block device between multiple machines. Furthermore, even this wouldn't be enough. EBS would need to be tested for this scenario and to ensure that it provides the same consistency guarantees as other shared block device solutions ... ie, that blocks aren't cached at intermediate non-shared levels like the Dom0 kernel, Xen layer, and DomU kernel. And then there's the performance considerations of synchronizing blocks between multiple clients - most of the clustered filesystems are designed to work on high speed dedicated SANs, not a best-effort commodity ethernet. It sounds so simple, but what you are asking for is a very nontrivial thing.
Alternatively, see if your data sharing scenario can be NFS, SMB/CIFS, SimpleDB, or S3. These solutions all use higher layer protocols that are intended to share files without having a shared block device subsystem. Many times such a solution is actually more efficient.
In your case, you can still have a single MySql instance / fileserver that is accessed by multiple web front-ends. That fileserver could then store its data on an EBS volume, allowing you to take nightly snapshot backups. If the instance running the fileserver is lost, you can detach the EBS volume and reattach it to a new fileserver instance and be back up and running in minutes.
"Is there anything like S3 as a filesystem?" - yes and no. Yes, there are 3rd party solutions like s3fs that work "ok", but under the hood they still have to make relatively expensive web service calls for each read / write. For a shared tools dir, works great. For the kind of clustered FS usage you see in the HPC world, not a chance. To do better, you'd need a new service that provides a binary connection-oriented protocol, like NFS. Offering such a multi-mounted filesystem with reasonable performance and behavior would be a GREAT feature add-on for EC2. I've long been an advocate for Amazon to build something like that.
| https://stackoverflow.com//questions/841240/can-you-attach-amazon-ebs-to-multiple-instances |
linux - "/usr/bin/ld: cannot find -lz" | I am trying to compile Android source code under Ubuntu 10.04. I get an error saying, |
I had the exact same error, and like you, installing zlib1g-dev did not fix it. Installing lib32z1-dev got me past it. I have a 64 bit system and it seems like it wanted the 32 bit library.
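On Debian/Ubuntu that is:
sudo apt-get install lib32z1-dev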
| https://stackoverflow.com//questions/3373995/usr-bin-ld-cannot-find-lz |
linux - What are stalled-cycles-frontend and stalled-cycles-backend in 'perf stat' result? | Does anybody know what is the meaning of stalled-cycles-frontend and stalled-cycles-backend in perf stat result ? I searched on the internet but did not find the answer. Thanks |
The theory:
Let's start from this: nowadays CPUs are superscalar, which means that they can execute more than one instruction per cycle (IPC). The latest Intel architectures can go up to 4 IPC (4 x86 instruction decoders). Let's not bring macro / micro fusion into the discussion to complicate things more :).
Typically, workloads do not reach IPC=4 due to various resource contentions. This means that the CPU is wasting cycles (number of instructions is given by the software and the CPU has to execute them in as few cycles as possible).
We can divide the total cycles being spent by the CPU in 3 categories:
Cycles where instructions get retired (useful work)
Cycles being spent in the Back-End (wasted)
Cycles spent in the Front-End (wasted).
To get an IPC of 4, the number of cycles retiring has to be close to the total number of cycles. Keep in mind that in this stage, all the micro-operations (uOps) retire from the pipeline and commit their results into registers / caches. At this stage you can have even more than 4 uOps retiring, because this number is given by the number of execution ports. If you have only 25% of the cycles retiring 4 uOps then you will have an overall IPC of 1.
The cycles stalled in the back-end are a waste because the CPU has to wait for resources (usually memory) or to finish long latency instructions (e.g. transcendentals - sqrt, reciprocals, divisions, etc.).
The cycles stalled in the front-end are a waste because that means that the Front-End does not feed the Back End with micro-operations. This can mean that you have misses in the Instruction cache, or complex instructions that are not already decoded in the micro-op cache. Just-in-time compiled code usually expresses this behavior.
Another stall reason is branch prediction miss. That is called bad speculation. In that case uOps are issued but they are discarded because the BP predicted wrong.
The implementation in profilers:
How do you interpret the BE and FE stalled cycles?
Different profilers have different approaches on these metrics. In vTune, categories 1 to 3 add up to give 100% of the cycles. That seems reasonable because either your CPU is stalled (no uOps are retiring) or it is performing useful work (uOps retiring). See more here: https://software.intel.com/sites/products/documentation/doclib/stdxe/2013SP1/amplifierxe/snb/index.htm
In perf this usually does not happen. That's a problem because when you see 125% cycles stalled in the front end, you don't know how to really interpret this. You could link the >1 metric with the fact that there are 4 decoders but if you continue the reasoning, then the IPC won't match.
Even better, you don't know how big the problem is. 125% out of what? What do the #cycles mean then?
I personally look a bit suspicious on perf's BE and FE stalled cycles and hope this will get fixed.
Probably we will get the final answer by debugging the code from here: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/perf/builtin-stat.c
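To see the counters discussed above side by side, on hardware where these events are exposed:
perf stat -e cycles,instructions,stalled-cycles-frontend,stalled-cycles-backend ./your_program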
| https://stackoverflow.com//questions/22165299/what-are-stalled-cycles-frontend-and-stalled-cycles-backend-in-perf-stat-resul |
I get "dquote>" as a result of executing a program in linux shell | When I execute a script in a Linux shell, I get this output: |
It means you've executed a line of code with only one double-quote character, like this:
echo "Hello
The shell is waiting for the closing quote; type it to complete the command, or press Ctrl+C to abandon the line.
| https://stackoverflow.com//questions/15773278/i-get-dquote-as-a-result-of-executing-a-program-in-linux-shell |
How to set the environmental variable LD_LIBRARY_PATH in linux | I have first executed the command: export LD_LIBRARY_PATH=/usr/local/lib |
You should add more details about your distribution, for example under Ubuntu the right way to do this is to add a custom .conf file to /etc/ld.so.conf.d, for example
sudo gedit /etc/ld.so.conf.d/randomLibs.conf
inside the file you are supposed to write the complete path to the directory that contains all the libraries that you wish to add to the system, for example
/home/linux/myLocalLibs
remember to add only the path to the dir, not the full path of a file; all the libs inside that path will be automatically indexed.
Save and run sudo ldconfig to update the system with this libs.
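Note that an export only affects the current shell and its children; to make it survive new sessions, append it to your shell startup file, for example:
echo 'export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH' >> ~/.bashrc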
| https://stackoverflow.com//questions/13428910/how-to-set-the-environmental-variable-ld-library-path-in-linux |
linux - Difference between checkout and export in SVN | What is the exact difference between SVN checkout and SVN export? |
svn export simply extracts all the files from a revision and does not allow revision control on it. It also does not litter each directory with .svn directories.
svn checkout allows you to use version control in the directory made, e.g. your standard commands such as svn update and svn commit.
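For example (the repository URL here is a placeholder):
svn checkout http://svn.example.com/repo/trunk my-working-copy
svn export http://svn.example.com/repo/trunk my-clean-copy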
| https://stackoverflow.com//questions/419467/difference-between-checkout-and-export-in-svn |
c - Whole one core dedicated to single process | Is there any way in Linux to assign one CPU core to a particular given process and there should not be any other processes or interrupt handlers to be scheduled on this core? |
Yes there is. In fact, there are two separate ways to do it :-)
Right now, the best way to accomplish what you want is to do the following:
Add the parameter isolcpus=[cpu_number] to the Linux kernel command line from the boot loader during boot. This will instruct the Linux scheduler not to run any regular tasks on that CPU unless specifically requested using cpu affinity.
Use IRQ affinity to set other CPUs to handle all interrupts so that your isolated CPU will not receive any interrupts.
Use CPU affinity to fix your specific task to the isolated CPU.
This will give you the best that Linux can provide with regard to CPU isolation without out-of-tree and in-development patches.
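As a sketch of steps 1 and 3 for, say, core 3 (the first line goes on the kernel command line in your boot loader, not in a shell):
isolcpus=3
taskset -c 3 ./your_realtime_task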
Your task will still get interrupted from time to time by Linux code, including other tasks - such as the timer tick interrupt and the scheduler code, IPIs from other CPUs and stuff like work queue kernel threads, although the interruption should be quite minimal.
For an (almost) complete list of interruption sources, check out my page at https://github.com/gby/linux/wiki
The alternative method is to use cpusets which is way more elegant and dynamic but suffers from some weaknesses at this point in time (no migration of timers for example) which makes me recommend the old, crude but effective isolcpus parameter.
Note that work is currently being done by the Linux community to address all these issues and more to give even better isolation.
| https://stackoverflow.com//questions/13583146/whole-one-core-dedicated-to-single-process |
linux - Python subprocess.Popen "OSError: [Errno 12] Cannot allocate memory" | Note: This question was originally asked here but the bounty time expired even though an acceptable answer was not actually found. I am re-asking this question including all details provided in the original question. |
As a general rule (i.e. in vanilla kernels), fork/clone failures with ENOMEM occur specifically because of either an honest to God out-of-memory condition (dup_mm, dup_task_struct, alloc_pid, mpol_dup, mm_init etc. croak), or because security_vm_enough_memory_mm failed you while enforcing the overcommit policy.
Start by checking the vmsize of the process that failed to fork, at the time of the fork attempt, and then compare to the amount of free memory (physical and swap) as it relates to the overcommit policy (plug the numbers in.)
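On a stock kernel you can read the overcommit policy and the current commit accounting straight from procfs:
cat /proc/sys/vm/overcommit_memory
grep -i commit /proc/meminfo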
In your particular case, note that Virtuozzo has additional checks in overcommit enforcement. Moreover, I'm not sure how much control you truly have, from within your container, over swap and overcommit configuration (in order to influence the outcome of the enforcement.)
Now, in order to actually move forward I'd say you're left with two options:
switch to a larger instance, or
put some coding effort into more effectively controlling your script's memory footprint
NOTE that the coding effort may be all for naught if it turns out that it's not you, but some other guy collocated in a different instance on the same server as you running amok.
Memory-wise, we already know that subprocess.Popen uses fork/clone under the hood, meaning that every time you call it you're requesting once more as much memory as Python is already eating up, i.e. in the hundreds of additional MB, all in order to then exec a puny 10kB executable such as free or ps. In the case of an unfavourable overcommit policy, you'll soon see ENOMEM.
Alternatives to fork that do not have this parent page tables etc. copy problem are vfork and posix_spawn. But if you do not feel like rewriting chunks of subprocess.Popen in terms of vfork/posix_spawn, consider using subprocess.Popen only once, at the beginning of your script (when Python's memory footprint is minimal), to spawn a shell script that then runs free/ps/sleep and whatever else in a loop parallel to your script; poll the script's output or read it synchronously, possibly from a separate thread if you have other stuff to take care of asynchronously -- do your data crunching in Python but leave the forking to the subordinate process.
HOWEVER, in your particular case you can skip invoking ps and free altogether; that information is readily available to you in Python directly from procfs, whether you choose to access it yourself or via existing libraries and/or packages. If ps and free were the only utilities you were running, then you can do away with subprocess.Popen completely.
Finally, whatever you do as far as subprocess.Popen is concerned, if your script leaks memory you will still hit the wall eventually. Keep an eye on it, and check for memory leaks.
| https://stackoverflow.com//questions/1367373/python-subprocess-popen-oserror-errno-12-cannot-allocate-memory |
linux - Tell Composer to use Different PHP Version | I've been stuck at this for a few days. I'm using 1and1 hosting, and they have their PHP set up a bit weird. |
On Ubuntu 18.04, this worked for me:
/usr/bin/php7.1 /usr/local/bin/composer update
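If you want this permanently, one option (assuming the same paths as above) is a shell alias in your ~/.bashrc:

alias composer='/usr/bin/php7.1 /usr/local/bin/composer'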
| https://stackoverflow.com//questions/32750250/tell-composer-to-use-different-php-version |
linux - Identify user in a Bash script called by sudo | If I create the script /root/bin/whoami.sh containing: |
$SUDO_USER doesn't work if you are using sudo su -.
It also requires multiple checks: if $USER == 'root', then get $SUDO_USER.
Instead of the command whoami, use who am i. This runs the who command filtered for the current session. It gives you more info than you need, so do this to get just the user:
who am i | awk '{print $1}'
Alternatively (and simpler) you can use logname. It does the same thing as the above statement.
This gives you the username that logged in to the session.
These work regardless of sudo or sudo su [whatever], and regardless of how many times su and sudo are called.
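For example (the output is illustrative; youruser stands for the account that originally logged in):

$ sudo su -
# logname
youruser
# who am i | awk '{print $1}'
youruser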
| https://stackoverflow.com//questions/3522341/identify-user-in-a-bash-script-called-by-sudo |
terminal - How do you scroll up/down on the console of a Linux VM |
SHIFT+Page Up and SHIFT+Page Down. If it doesn't work, try this, and then it should:
Go to the terminal program and make sure
Edit/Profile Preferences/Scrolling/Scrollback/Unlimited
is checked.
The exact location of this option might differ, though; I see that you are using Red Hat.
| https://stackoverflow.com//questions/15255070/how-do-you-scroll-up-down-on-the-console-of-a-linux-vm |
sql server - Error: TCP Provider: Error code 0x2746. During the Sql setup in linux through terminal | I am trying to set up MS SQL Server on my Linux machine by following the documentation:
https://learn.microsoft.com/pl-pl/sql/linux/quickstart-install-connect-ubuntu?view=sql-server-2017 |
[UPDATE 17.03.2020: Microsoft has released SQL Server 2019 CU3 with an Ubuntu 18.04 repository. See: https://techcommunity.microsoft.com/t5/sql-server/sql-server-2019-now-available-on-ubuntu-18-04-supported-on-sles/ba-p/1232210 . I hope this is now fully compatible, without any SSL problems; I haven't tested it yet.]
Reverting to 14.0.3192.2-2 helps.
But the problem can also be solved using the method indicated by Ola774, not only in the case of an upgrade from Ubuntu 16.04 to 18.04, but on every installation of SQL Server 2017 on Ubuntu 18.04.
It seems that Microsoft, in CU16, broke their own patch for the SSL-version problems that CU10 had applied (https://techcommunity.microsoft.com/t5/SQL-Server/Installing-SQL-Server-2017-for-Linux-on-Ubuntu-18-04-LTS/ba-p/385983). But linking the OpenSSL 1.0.0 libraries works.
So just do the following:
Stop SQL Server
sudo systemctl stop mssql-server
Open the editor for the service configuration by
sudo systemctl edit mssql-server
This will create an override for the original service config. It is normal for the override file (more precisely, the "drop-in" file) to be empty the first time it's used.
In the editor, add the following lines to the file and save it:
[Service]
Environment="LD_LIBRARY_PATH=/opt/mssql/lib"
Create symbolic links to OpenSSL 1.0 for SQL Server to use:
sudo ln -s /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 /opt/mssql/lib/libssl.so
sudo ln -s /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 /opt/mssql/lib/libcrypto.so
Start SQL Server
sudo systemctl start mssql-server
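To verify the workaround, you could check the service state and run a test query with sqlcmd (the SA password is a placeholder):

sudo systemctl status mssql-server --no-pager
sqlcmd -S localhost -U SA -P '<YourPassword>' -Q 'SELECT @@VERSION'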
| https://stackoverflow.com//questions/57265913/error-tcp-provider-error-code-0x2746-during-the-sql-setup-in-linux-through-te |