Q: Visual Studio Setup Project - Per User Registry Settings I'm trying to maintain a Setup Project in Visual Studio 2003 (yes, it's a legacy application). The problem we have at the moment is that we need to write registry entries to HKCU for every user on the computer. They need to be in the HKCU rather than HKLM because they are the default user settings, and they do change per user. My feeling is that
* This isn't possible.
* This isn't something the installer should be doing, but something the application should be doing (after all, what happens when a user profile is created after the install?).
With that in mind, I still want to change as little as possible in the application, so my question is, is it possible to add registry entries for every user in a Visual Studio 2003 setup project?
And, at the moment the project lists five registry root keys (HKEY_CLASSES_ROOT, HKEY_CURRENT_USER, HKEY_LOCAL_MACHINE, HKEY_USERS, and User/Machine Hive). I don't really know anything about the Users root key, and haven't seen User/Machine Hive. Can anyone enlighten me on what they are? Perhaps they could solve my problem above.
A: First: Yes, this is something that belongs in the application, for the exact reason you specified: what happens after new user profiles are created? Sure, if you're using a domain it's possible to have some stuff put in the registry when a profile is created, but that's not really a common use case. The application should check whether the settings exist and fall back to the default settings if not.
That being said, it IS possible to change other users' keys through the HKEY_USERS hive.
I have no experience with the Visual Studio 2003 Setup Project, so here is a bit of (totally unrelated) VBScript code that might just give you an idea where to look:
const HKEY_USERS = &H80000003
strComputer = "."
Set objReg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv")

' Enumerate every loaded user hive under HKEY_USERS
strKeyPath = ""
objReg.EnumKey HKEY_USERS, strKeyPath, arrSubKeys

' Write the value into the same subkey of each user's hive
strKeyPath = "\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing"
For Each subkey In arrSubKeys
    objReg.SetDWORDValue HKEY_USERS, subkey & strKeyPath, "State", 146944
Next
(Code Courtesy of Jeroen Ritmeijer)
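For completeness, roughly the same idea in .NET (for example from a custom installer action) might look like the hedged C# sketch below. The key path and value are hypothetical placeholders, and all the caveats above about touching other users' hives still apply.

// Hedged sketch (untested): enumerate the loaded hives under HKEY_USERS and write a
// default value into each one. The key path and value name are hypothetical placeholders.
using Microsoft.Win32;

class WriteDefaultsForAllUsers
{
    static void Main()
    {
        const string keyPath = @"Software\MyCompany\MyApp";   // hypothetical key

        foreach (string sid in Registry.Users.GetSubKeyNames())
        {
            // Skip the *_Classes subkeys; they are not user hives proper.
            if (sid.EndsWith("_Classes")) continue;

            RegistryKey key = Registry.Users.CreateSubKey(sid + "\\" + keyPath);
            key.SetValue("State", 146944);
            key.Close();
        }
    }
}

Note that only the hives of users who have already logged on (plus .DEFAULT) are loaded under HKEY_USERS, which is exactly the limitation discussed in the other answers.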
A: I'm guessing that because you want to set it for all users, you're on some kind of shared computer, which is probably running under a domain?
HERE BE DRAGONS
Let's say Joe and Jane regularly log onto the computer, then they will each have 'registries'.
You'll then install your app, and the installer will employ giant hacks and disgusting things to set items under HKCU for them.
Then Bob will come along and log on (he, and 500 other people, have accounts in the domain and so can do this). He's never used this computer before, so he has no registry hive. The first time he logs in, Windows creates one for him, but it won't have your setting.
Your app then falls over or behaves incorrectly, and Bob complains loudly about those crappy products from raynixon incorporated.
The correct answer is to just have some default settings in your app, which it can write to the registry if it doesn't find them. It's good general practice that your app should never depend on a registry entry being present anyway - it should create whatever it needs, not just under HKCU.
A: I'm partway to my solution with this entry on MSDN (don't know how I couldn't find it before).
User/Machine Hive
Subkeys and values entered under this hive will be installed under the HKEY_CURRENT_USER hive when a user chooses "Just Me", or under the HKEY_USERS hive when a user chooses "Everyone" during installation.
Registry Editor Archive of MSDN Article
A: Despite what the MSDN article (linked above) says about the User/Machine Hive, it doesn't write to HKEY_USERS. Rather, it writes to HKCU if you select "Just Me" and to HKLM if you select "Everyone".
So my solution is going to be to use the User/Machine Hive, and then in the application it checks if the registry entries are in HKCU and if not, copies them from HKLM. I know this probably isn't the most ideal way of doing it, but it has the least amount of changes.
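To make that concrete, the check-and-copy step in the application might look something like this hedged C# sketch (the key path is a hypothetical placeholder for whatever the setup project writes under HKLM):

// Hedged sketch (untested): if the per-user settings are missing, seed HKCU from the
// machine-wide defaults the installer wrote under HKLM. Key path is a placeholder.
using Microsoft.Win32;

static void EnsureUserSettings()
{
    const string keyPath = @"Software\MyCompany\MyApp";      // hypothetical key

    RegistryKey userKey = Registry.CurrentUser.OpenSubKey(keyPath);
    if (userKey != null) { userKey.Close(); return; }        // per-user settings already exist

    RegistryKey defaults = Registry.LocalMachine.OpenSubKey(keyPath);
    if (defaults == null) return;                            // nothing to copy

    RegistryKey target = Registry.CurrentUser.CreateSubKey(keyPath);
    foreach (string name in defaults.GetValueNames())
        target.SetValue(name, defaults.GetValue(name));      // copy each default value

    defaults.Close();
    target.Close();
}

Call it once at application startup, before any settings are read.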
| {
"language": "en",
"url": "https://stackoverflow.com/questions/810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Efficiently get sorted sums of a sorted list You have an ascending list of numbers; what is the most efficient algorithm you can think of to get the ascending list of sums of every two numbers in that list? Duplicates in the resulting list are irrelevant; you can remove them or avoid them if you like.
To be clear, I'm interested in the algorithm. Feel free to post code in any language and paradigm that you like.
A: Rather than coding this out, I figure I'll pseudo-code it in steps and explain my logic, so that better programmers can poke holes in my logic if necessary.
In the first step we start out with a list of numbers of length n. For each number we need to create a list of length n-1, because we aren't adding a number to itself. By the end we have about n sorted lists, generated in O(n^2) time.
step 1 (startinglist)
    for each number num1 in startinglist
        create an empty templist
        for each number num2 in startinglist
            if num2 is not num1
                add num1 plus num2 to templist
        add templist to sumlist
    return sumlist
In step 2, because the lists were sorted by design (add a number to each element of a sorted list and the list will still be sorted), we can simply merge each list into the result rather than mergesorting the whole lot. In the end this should take O(n^2) time.
step 2 (sumlist)
    create an empty list mergedlist
    for each list templist in sumlist
        set mergedlist equal to merge(mergedlist, templist)
    return mergedlist
The merge method would be then the normal merge step with a check to make sure that there are no duplicate sums. I won't write this out because anyone can look up mergesort.
So there's my solution. The entire algorithm is O(n^2) time. Feel free to point out any mistakes or improvements.
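As a concrete illustration of the merge idea (not the poster's code), here is a hedged C# sketch that treats each row i as the already-sorted stream a[i] + a[j] for j > i and merges the row heads with a priority queue. It assumes .NET 6+ for PriorityQueue and costs O(n^2 log n) rather than a strict O(n^2), but it produces the sums lazily in ascending order:

// Hedged sketch: k-way merge of the sorted "rows" of pair sums using a min-heap.
using System.Collections.Generic;

static IEnumerable<int> SortedPairSums(int[] a)
{
    var heap = new PriorityQueue<(int i, int j), int>();
    for (int i = 0; i + 1 < a.Length; i++)
        heap.Enqueue((i, i + 1), a[i] + a[i + 1]);           // head of row i (skip i == j)

    int? last = null;
    while (heap.TryDequeue(out var cell, out int sum))
    {
        if (sum != last) { yield return sum; last = sum; }   // skip duplicate sums
        var (i, j) = cell;
        if (j + 1 < a.Length)
            heap.Enqueue((i, j + 1), a[i] + a[j + 1]);       // advance within row i
    }
}

For example, SortedPairSums(new[] { 1, 2, 4 }) yields 3, 5, 6.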
A: You can do this in two lines in python with
allSums = set(a+b for a in X for b in X)
allSums = sorted(allSums)
The cost of this is n^2 (maybe an extra log factor for the set?) for the iteration and s * log(s) for the sorting where s is the size of the set.
The size of the set could be as big as n*(n-1)/2 for example if X = [1,2,4,...,2^n]. So if you want to generate this list it will take at least n^2/2 in the worst case since this is the size of the output.
However if you want to select the first k elements of the result you can do this in O(kn) using a selection algorithm for sorted X+Y matrices by Frederickson and Johnson (see here for gory details). Although this can probably be modified to generate them online by reusing computation and get an efficient generator for this set.
@deuseldorf, Peter
There is some confusion about (n!). I seriously doubt deuseldorf meant "n factorial" but simply "n, (very excited)!"
A: Edit as of 2018: You should probably stop reading this. (But I can't delete it as it is accepted.)
If you write out the sums like this:
   1   4   5   6   8   9
------------------------
   2   5   6   7   9  10
       8   9  10  12  13
          10  11  13  14
              12  14  15
                  16  17
                      18
You'll notice that since M[i,j] <= M[i,j+1] and M[i,j] <= M[i+1,j], you only need to examine the top-left "corners" and choose the lowest one.
e.g.
* only 1 top-left corner, pick 2
* only 1, pick 5
* 6 or 8, pick 6
* 7 or 8, pick 7
* 9 or 8, pick 8
* 9 or 9, pick both :)
* 10 or 10 or 10, pick all
* 12 or 11, pick 11
* 12 or 12, pick both
* 13 or 13, pick both
* 14 or 14, pick both
* 15 or 16, pick 15
* only 1, pick 16
* only 1, pick 17
* only 1, pick 18
Of course, when you have lots of top-left corners then this solution degrades.
I'm pretty sure this problem is Ω(n²), because you have to calculate the sums for each M[i,j] -- unless someone has a better algorithm for the summation :)
A: The best I could come up with is to produce a matrix of sums of each pair, and then merge the rows together, a-la merge sort. I feel like I'm missing some simple insight that will reveal a much more efficient solution.
My algorithm, in Haskell:
matrixOfSums list = [[a+b | b <- list, b >= a] | a <- list]
sortedSums list = foldl merge [] (matrixOfSums list)
--A normal merge, save that we remove duplicates
merge xs [] = xs
merge [] ys = ys
merge (x:xs) (y:ys) = case compare x y of
LT -> x:(merge xs (y:ys))
EQ -> x:(merge xs (dropWhile (==x) ys))
GT -> y:(merge (x:xs) ys)
I found a minor improvement, one that's more amenable to lazy stream-based coding. Instead of merging the columns pair-wise, merge all of them at once. The advantage being that you start getting elements of the list immediately.
-- wide-merge does a standard merge (ala merge-sort) across an arbitrary number of lists
-- wideNubMerge does this while eliminating duplicates
wideNubMerge :: Ord a => [[a]] -> [a]
wideNubMerge ls = wideNubMerge1 $ filter (/= []) ls
wideNubMerge1 [] = []
wideNubMerge1 ls = mini:(wideNubMerge rest)
where mini = minimum $ map head ls
rest = map (dropWhile (== mini)) ls
betterSortedSums list = wideNubMerge (matrixOfSums list)
However, if you know you're going to use all of the sums, and there's no advantage to getting some of them earlier, go with 'foldl merge []', as it's faster.
A: In SQL:
create table numbers(n int not null)
insert into numbers(n) values(1),(1), (2), (2), (3), (4)
select distinct num1.n+num2.n sum2n
from numbers num1
inner join numbers num2
on num1.n<>num2.n
order by sum2n
C# LINQ:
List<int> num = new List<int>{ 1, 1, 2, 2, 3, 4};
var uNum = num.Distinct().ToList();
var sums=(from num1 in uNum
from num2 in uNum
where num1!=num2
select num1+num2).Distinct();
foreach (var s in sums)
{
Console.WriteLine(s);
}
A: This question has been wracking my brain for about a day now. Awesome.
Anyways, you can't get away from the n^2 nature of it easily, but you can do slightly better with the merge since you can bound the range to insert each element in.
If you look at all the lists you generate, they have the following form:
(a[i], a[j]) | j>=i
If you flip it 90 degrees, you get:
(a[i], a[j]) | i<=j
Now, when the merge process takes two lists i and i+1 (which correspond to lists where the first member is always a[i] and a[i+1] respectively), you can bound the range to insert element (a[i + 1], a[j]) into list i by the location of (a[i], a[j]) and the location of (a[i + 1], a[j + 1]).
This means that you should merge in reverse in terms of j. I don't know (yet) if you can leverage this across j as well, but it seems possible.
A: No matter what you do, without additional constraints on the input values, you cannot do better than O(n^2), simply because you have to iterate through all pairs of numbers. The iteration will dominate sorting (which you can do in O(n log n) or faster).
A: If you are looking for a truly language agnostic solution then you will be sorely disappointed in my opinion because you'll be stuck with a for loop and some conditionals. However if you opened it up to functional languages or functional language features (I'm looking at you LINQ) then my colleagues here can fill this page with elegant examples in Ruby, Lisp, Erlang, and others.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: How do I most elegantly express left join with aggregate SQL as LINQ query SQL:
SELECT
u.id,
u.name,
isnull(MAX(h.dateCol), '1900-01-01') dateColWithDefault
FROM universe u
LEFT JOIN history h
ON u.id=h.id
AND h.dateCol<GETDATE()-1
GROUP BY u.Id, u.name
A: A solution, albeit one that defers handling of the null value to the code, could be:
DateTime yesterday = DateTime.Now.Date.AddDays(-1);
var collection=
from u in db.Universe
select new
{
u.id,
u.name,
MaxDate =(DateTime?)
(
from h in db.History
where u.Id == h.Id
&& h.dateCol < yesterday
select h.dateCol
).Max()
};
This does not produce exactly the same SQL, but does provide the same logical result. Translating "complex" SQL queries to LINQ is not always straightforward.
A: var collection=
from u in db.Universe
select new
{
u.id,
u.name,
MaxDate =(DateTime?)
(
from h in db.History
where u.Id == h.Id
&& h.dateCol < yesterday
select h.dateCol
).Max()
};
Just use the above code and this should work fine!
A: This isn't a full answer for you, but on the left join piece you can use the DefaultIfEmpty operator like so:
var collection =
    from u in db.Universe
    join history in db.History on u.id equals history.id into temp
    from h in temp.DefaultIfEmpty()
    where h.dateCol < DateTime.Now.Date.AddDays(-1)
    select new { u.id, u.name, dateCol = (DateTime?)h.dateCol ?? new DateTime(1900, 1, 1) };
I haven't had the need to do any groupby commands yet, so I left that out as to not send you down the wrong path. Two other quick things to note. I have been unable to actually join on two parameters although as above there are ways to get around it. Also, the ?? operator works really well in place of the isnull in SQL.
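A hedged sketch (untested) that stays a bit closer to the original SQL would be to group-join History into each Universe row, filter inside the join source, and fall back to 1900-01-01 when no history rows match:

DateTime yesterday = DateTime.Now.Date.AddDays(-1);

var collection =
    from u in db.Universe
    join h in db.History.Where(x => x.dateCol < yesterday)
        on u.id equals h.id into hs                        // group join = LEFT JOIN + GROUP BY
    select new
    {
        u.id,
        u.name,
        dateColWithDefault = hs.Select(x => (DateTime?)x.dateCol).Max()
                             ?? new DateTime(1900, 1, 1)   // isnull(MAX(...), '1900-01-01')
    };

Whether the generated SQL matches the original exactly will depend on the LINQ provider.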
A: You're going to want to use the join into construct to create a group query.
TestContext db = new TestContext(CreateSparqlTripleStore());
var q = from a in db.Album
join t in db.Track on a.Name equals t.AlbumName into tracks
select new Album{Name = a.Name, Tracks = tracks};
foreach(var album in q){
Console.WriteLine(album.Name);
foreach (Track track in album.Tracks)
{
Console.WriteLine(track.Title);
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Editing database records by multiple users I have designed database tables (normalised, on an MS SQL server) and created a standalone Windows front end for an application that will be used by a handful of users to add and edit information. We will add a web interface to allow searching across our production area at a later date.
I am concerned that if two users start editing the same record then the last to commit the update would be the 'winner' and important information may be lost. A number of solutions come to mind but I'm not sure if I am going to create a bigger headache.
* Do nothing and hope that two users are never going to be editing the same record at the same time. - Might never happen, but what if it does?
* Editing routine could store a copy of the original data as well as the updates and then compare when the user has finished editing. If they differ, show the user and confirm the update. - Would require two copies of data to be stored.
* Add a last-updated DATETIME column and check it matches when we update; if not, show the differences. - Requires a new column in each of the relevant tables.
* Create an editing table that registers when users start editing a record, which will be checked to prevent other users from editing the same record. - Would require careful thought about program flow to prevent deadlocks and records becoming locked if a user crashes out of the program.
Are there any better solutions or should I go for one of these?
A: A classic approach is as follows:
* add a boolean field, "locked", to each table
* set this to false by default
* when a user starts editing, you do this:
    * lock the row (or the whole table if you can't lock the row)
    * check the flag on the row you want to edit
    * if the flag is true then
        * inform the user that they cannot edit that row at the moment
    * else
        * set the flag to true
        * release the lock
* when saving the record, set the flag back to false
A: @ Mark Harrison : SQL Server does not support that syntax (SELECT ... FOR UPDATE).
The SQL Server equivalent is the SELECT statement hint UPDLOCK.
See SQL Server Books Online for more information.
A: First, create a field (update time) that stores when the record was last updated.
When a user selects a record, save the select time.
When saving, compare the select time with the update-time field: if (update time) > (select time), another user updated this record after it was selected.
A: If you expect infrequent collisions, Optimistic Concurrency is probably your best bet.
Scott Mitchell wrote a comprehensive tutorial on implementing that pattern:
Implementing Optimistic Concurrency
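For illustration, a bare-bones version of that pattern in ADO.NET might look like the hedged sketch below. It assumes an already-open connection and a hypothetical rowversion (timestamp) column named RowVer; the table and column names are just placeholders:

// Hedged sketch (untested): read the row version along with the data, then make the
// UPDATE conditional on it still matching when the user saves.
using System.Data.SqlClient;

static bool SaveCustomerName(SqlConnection conn, int customerId, string newName, byte[] originalRowVer)
{
    using (SqlCommand cmd = new SqlCommand(
        "UPDATE demo_customer " +
        "SET customer_nm = @name " +
        "WHERE customer_id = @id AND RowVer = @rowVer", conn))   // 0 rows if someone else saved first
    {
        cmd.Parameters.AddWithValue("@name", newName);
        cmd.Parameters.AddWithValue("@id", customerId);
        cmd.Parameters.AddWithValue("@rowVer", originalRowVer);

        return cmd.ExecuteNonQuery() == 1;   // false means: reload the row and show the conflict
    }
}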
A: Another option is to test that the values in the record that you are changing are still the same as they were when you started:
SELECT
customer_nm,
customer_nm AS customer_nm_orig
FROM demo_customer
WHERE customer_id = @p_customer_id
(display the customer_nm field and the user changes it)
UPDATE demo_customer
SET customer_nm = @p_customer_name_new
WHERE customer_id = @p_customer_id
AND customer_nm = @p_customer_nm_old
IF @@ROWCOUNT = 0
RAISERROR( 'Update failed: Data changed', 16, 1 );
You don't have to add a new column to your table (and keep it up to date), but you do have to create more verbose SQL statements and pass new and old fields to the stored procedure.
It also has the advantage that you are not locking the records - because we all know that records will end up staying locked when they should not be...
A: SELECT FOR UPDATE and equivalents are good provided you hold the lock for a microscopic amount of time, but for a macroscopic amount (e.g. the user has the data loaded and hasn't pressed 'save') you should use optimistic concurrency as above. (Which I always think is misnamed - it's more pessimistic than 'last writer wins', which is usually the only other alternative considered.)
A: The database will do this for you. Look at "select ... for update", which is designed just for this kind of thing. It will give you a write lock on the selected rows, which you can then commit or roll back.
A: For me, the best way is to have a column lastupdate (timestamp datatype).
On select and on update, just compare this value.
Another advantage of this solution is that you can use the column to track down when the data changed.
I don't think it is a good idea to just create a column like isLock to check for updates.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: CruiseControl.net, msbuild, /p:OutputPath and CCNetArtifactDirectory I'm trying to set up CruiseControl.net at the moment. So far it works nicely, but I have a problem with the MSBuild task.
According to the Documentation, it passes CCNetArtifactDirectory to MSBuild. But how do I use it?
I tried this:
<buildArgs>
/noconsolelogger /p:OutputPath=$(CCNetArtifactDirectory)\test
</buildArgs>
But that does not work. In fact, it kills the service with this error:
ThoughtWorks.CruiseControl.Core.Config.Preprocessor.EvaluationException: Reference to unknown symbol CCNetArtifactDirectory
Documentation is rather sparse, and Google mainly offers suggestions about modifying the .sln project file, which is what I want to avoid in order to be able to build this project manually later - I would really prefer /p:OutputPath.
A: The CCNetArtifactDirectory is passed to MSBuild by default, so you don't need to worry about it. MSBuild will place the build output in the "bin location" relative to the working directory that you have specified.
<executable>c:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
<workingDirectory>C:\data\projects\FooSolution\</workingDirectory>
<projectFile>FooSolution.sln</projectFile>
<buildArgs>/noconsolelogger /p:Configuration=Debug </buildArgs>
So in the above example your build output will be put in C:\data\projects\FooSolution\[ProjectName]\bin\Debug. Should you want to output to a different location, you may want to look at the <buildpublisher> element in CCNet.
<publishers>
<xmllogger />
<buildpublisher>
<sourceDir>C:\data\projects\FooSolution\FooProject\bin\Debug</sourceDir>
<publishDir>C:\published\FooSolution\</publishDir>
<useLabelSubDirectory>false</useLabelSubDirectory>
</buildpublisher>
</publishers>
This will allow you to publish your output to a different location.
A: You can use the artifact directory variable inside the MSBuild script itself. Here's an example of how I'm running FxCop right now from my CC.Net MSBuild script (this script is what CC.Net points to - there is also a "Build" target in the script that includes an MSBuild task against the SLN to do the actual compilation):
<Exec
Command='FxCopCmd.exe /project:"$(MSBuildProjectDirectory)\FXCopRules.FxCop" /out:"$(CCNetArtifactDirectory)\ProjectName.FxCop.xml"'
WorkingDirectory="C:\Program Files\Microsoft FxCop 1.35"
ContinueOnError="true"
IgnoreExitCode="true"
/>
A: Parameters like CCNetArtifactDirectory are passed to external programs using environment variables. They are available in the external program, but they aren't available inside the CCNet configuration itself. This often leads to confusion.
You can use a preprocessor constant instead:
<cb:define project.artifactDirectory="C:\foo">
<project>
<!-- [...] -->
<artifactDirectory>$(project.artifactDirectory)</artifactDirectory>
<!-- [...] -->
<tasks>
<!-- [...] -->
<msbuild>
<!-- [...] -->
<buildArgs>/noconsolelogger /p:OutputPath=$(project.artifactDirectory)\test</buildArgs>
<!-- [...] -->
</msbuild>
<!-- [...] -->
</tasks>
<!-- [...] -->
</project>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How to detect which one of the defined font was used in a web page? Suppose I have the following CSS rule in my page:
body {
font-family: Calibri, Trebuchet MS, Helvetica, sans-serif;
}
How could I detect which one of the defined fonts was used in the user's browser?
For people wondering why I want to do this: the font I'm detecting contains glyphs that are not available in other fonts. If the user does not have the font, then I want to display a link asking the user to download that font (so they can use my web application with the correct font).
Currently, I am displaying the download font link for all users. I want to only display this for people who do not have the correct font installed.
A: I've seen it done in a kind of iffy, but pretty reliable way. Basically, an element is set to use a specific font and a string is set to that element. If the font set for the element does not exist, it takes the font of the parent element. So, what they do is measure the width of the rendered string. If it matches what they expected for the desired font as opposed to the derived font, it's present. This won't work for monospaced fonts.
Here's where it came from:
Javascript/CSS Font Detector (ajaxian.com; 12 Mar 2007)
A: Another solution would be to install the font automatically via @font-face which might negate the need for detection.
@font-face {
font-family: "Calibri";
src: url("http://www.yourwebsite.com/fonts/Calibri.eot");
src: local("Calibri"), url("http://www.yourwebsite.com/fonts/Calibri.ttf") format("truetype");
}
Of course it wouldn't solve any copyright issues, however you could always use a freeware font or even make your own font. You will need both .eot & .ttf files to work best.
A: Calibri is a font owned by Microsoft, and shouldn't be distributed for free. Also, requiring a user to download a specific font isn't very user-friendly.
I would suggest purchasing a license for the font and embedding it into your application.
A: I wrote a simple JavaScript tool that you can use to check if a font is installed or not.
It uses a simple technique and should be correct most of the time.
jFont Checker on github
A: I am using Fount. You just have to drag the Fount button to your bookmarks bar, click on it and then click on a specific text on the website. It will then show the font of that text.
https://fount.artequalswork.com/
A: You can use this website :
http://website-font-analyzer.com/
It does exactly what you want...
A: You can put Adobe Blank in the font-family after the font you want to see, and then any glyphs not in that font won't be rendered.
e.g.:
font-family: Arial, 'Adobe Blank';
As far as I'm aware there is no JS method to tell which glyphs in an element are being rendered by which font in the font stack for that element.
This is complicated by the fact that browsers have user settings for serif/sans-serif/monospace fonts, and they also have their own hard-coded fallback fonts that they will use if a glyph is not found in any of the fonts in a font stack. So the browser may render some glyphs in a font that is not in the font stack or the user's browser font settings. Chrome Dev Tools will show you each rendered font for the glyphs in the selected element. So on your machine you can see what it's doing, but there's no way to tell what's happening on a user's machine.
It's also possible the user's system may play a part in this, as e.g. Windows does font substitution at the glyph level.
so...
For the glyphs you are interested in, you have no way of knowing whether they will be rendered by the user's browser/system fallback, even if they don't have the font you specify.
If you want to test it in JS you could render individual glyphs with a font-family including Adobe Blank and measure their width to see if it is zero, but you'd have to iterate through each glyph and each font you wanted to test. And although you can know the fonts in an element's font stack, there is no way of knowing what fonts the user's browser is configured to use, so for at least some of your users the list of fonts you iterate through will be incomplete. (It is also not future-proof if new fonts come out and start getting used.)
A: @pat Actually, Safari does not give the font used, Safari instead always returns the first font in the stack regardless of whether it is installed, at least in my experience.
font-family: "my fake font", helvetica, san-serif;
Assuming Helvetica is the one installed/used, you'll get:
* "my fake font" in Safari (and I believe other WebKit browsers).
* "my fake font, helvetica, san-serif" in Gecko browsers and IE.
* "helvetica" in Opera 9, though I read that they are changing this in Opera 10 to match Gecko.
I took a pass at this problem and created Font Unstack, which tests each font in a stack and returns the first installed one only. It uses the trick that @MojoFilter mentions, but only returns the first one if multiple are installed. Though it does suffer from the weakness that @tlrobinson mentions (Windows will substitute Arial for Helvetica silently and report that Helvetica is installed), it otherwise works well.
A: A technique that works is to look at the computed style of the element. This is supported in Opera and Firefox (and I reckon in Safari, but I haven't tested). IE (7 at least) provides a method to get a style, but it seems to be whatever was in the stylesheet, not the computed style. More details on quirksmode: Get Styles
Here's a simple function to grab the font used in an element:
/**
* Get the font used for a given element
* @argument {HTMLElement} the element to check font for
* @returns {string} The name of the used font or null if font could not be detected
*/
function getFontForElement(ele) {
if (ele.currentStyle) { // sort of, but not really, works in IE
return ele.currentStyle["fontFamily"];
} else if (document.defaultView) { // works in Opera and FF
return document.defaultView.getComputedStyle(ele,null).getPropertyValue("font-family");
} else {
return null;
}
}
If the CSS rule for this was:
#fonttester {
font-family: sans-serif, arial, helvetica;
}
Then it should return helvetica if that is installed, if not, arial, and lastly, the name of the system default sans-serif font. Note that the ordering of fonts in your CSS declaration is significant.
An interesting hack you could also try is to create lots of hidden elements with lots of different fonts to try to detect which fonts are installed on a machine. I'm sure someone could make a nifty font statistics gathering page with this technique.
A: A simplified form is:
function getFont() {
return document.getElementById('header').style.font;
}
If you need something more complete, check this out.
A: There is a simple solution - just use element.style.font:
function getUserBrowsersFont() {
var browserHeader = document.getElementById('header');
return browserHeader.style.font;
}
This function will do exactly what you want. On execution it will return the font type of the user/browser. Hope this will help.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "156"
} |
Q: .Net XML comment into API Documentation Is there an easy way to produce MSDN-style documentation from the Visual Studio XML output?
I'm not patient enough to set up a good xslt for it because I know I'm not the first person to cross this bridge.
Also, I tried setting up sandcastle recently, but it really made my eyes cross. Either I was missing something important in the process or it is just way too involved.
I know somebody out there has a really nice dead-simple solution.
I'm reiterating here because I think my formatting made that paragraph non-inviting to read:
I gave sandcastle a try but had a really hard time getting it set up.
What I really have in mind is something much simpler.
That is, unless I just don't understand the sandcastle process. It seemed like an awful lot of extra baggage to me just to produce something nice for the testers to work with.
A: Have a look at Sandcastle, which does exactly that. It's also one of the more simpler solutions out there, and it's more or less the tool of choice, so in the long run, maybe we could help you to set up Sandcastle if you specify what issues you encountered during setup?
A: I've just set up Sandcastle again. Try installing it (the May 2008 release) and search for SandcastleGui.exe or something similar (it's in the examples folder or so).
Click Add Assembly and add your Assembly or Assemblies, add any .xml Documentation files (the ones generated by the compiler if you enabled that option) and then Build.
It will take some time, but the result will be worth the effort. It will actually look up stuff from MSDN, so your resulting documentation will also have the Class Inheritance all the way down to System.Object with links to MSDN and stuff.
Sandcastle seems a bit complicated at first, especially when you want to use it in an automated build, but I am absolutely sure it will be worth the effort.
Also have a look at Sandcastle Help File Builder, this is a somewhat more advanced GUI for it.
A: You should also use the Sandcastle Help File Builder. It provides you with a ndoc like GUI for generating help files so you don't have to do anything from a command prompt.
Welcome to the Sandcastle Help File Builder Project
A: Follow this simple 5 step article and you are pretty much done. As a bonus you can use H2Viewer to view Html Help 2.x files.
A: I use NDoc3
A: You're looking for Sandcastle
Project Page: Sandcastle Releases
Blog: Sandcastle Blog
NDoc Code Documentation Generator for .NET used to be the tool of choice, but support has all but stopped.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Why is Git better than Subversion? I've been using Subversion for a few years and after using SourceSafe, I just love Subversion. Combined with TortoiseSVN, I can't really imagine how it could be any better.
Yet there's a growing number of developers claiming that Subversion has problems and that we should be moving to the new breed of distributed version control systems, such as Git.
How does Git improve upon Subversion?
A: Subversion is still a much more widely used version control system, which means that it has better tool support. You'll find mature SVN plugins for almost any IDE, and there are good explorer extensions available (like TortoiseSVN). Other than that, I'll have to agree with Michael: Git isn't better or worse than Subversion, it's different.
A: One of the things about Subversion that irks me is that it puts its own folder in each directory of a project, whereas Git only puts one in the root directory. It's not that big of a deal, but little things like that add up.
Of course, Subversion has Tortoise, which is [usually] very nice.
A: David Richards WANdisco Blog on Subversion / GIT
The emergence of GIT has brought with it a breed of DVCS fundamentalists – the ‘Gitterons’ – that think anything other than GIT is crap. The Gitterons seem to think software engineering happens on their own island and often forget that most organizations don’t employ senior software engineers exclusively. That’s ok but it’s not how the rest of the market thinks, and I am happy to prove it: GIT, at the last look had less than three per cent of the market while Subversion has in the region of five million users and about half of the overall market.
The problem we saw was that the Gitterons were firing (cheap) shots at Subversion. Tweets like “Subversion is so [slow/crappy/restrictive/doesn't smell good/looks at me in a funny way] and now I have GIT and [everything works in my life/my wife got pregnant/I got a girlfriend after 30 years of trying/I won six times running on the blackjack table]. You get the picture.
A: Git also makes branching and merging really easy. Subversion 1.5 just added merge tracking, but Git is still better. With Git branching is very fast and cheap. It makes creating a branch for each new feature more feasible. Oh and Git repositories are very efficient with storage space as compared to Subversion.
A: It's all about the ease of use/steps required to do something.
If I'm developing a single project on my PC/laptop, git is better, because it is far easier to set up and use.
You don't need a server, and you don't need to keep typing repository URL's in when you do merges.
If it were just 2 people, I'd say git is also easier, because you can just push and pull from each other.
Once you get beyond that though, I'd go for subversion, because at that point you need to set up a 'dedicated' server or location.
You can do this just as well with git as with SVN, but the benefits of git get outweighed by the need to do additional steps to synch with a central server. In SVN you just commit. In git you have to git commit, then git push. The additional step gets annoying simply because you end up doing it so much.
SVN also has the benefit of better GUI tools, however the git ecosystem seems to be catching up quickly, so I wouldn't worry about this in the long term.
A: Easy Git has a nice page comparing actual usage of Git and SVN which will give you an idea of what things Git can do (or do more easily) compared to SVN. (Technically, this is based on Easy Git, which is a lightweight wrapper on top of Git.)
A: "Why Git is Better than X" outlines the various pros and cons of Git vs other SCMs.
Briefly:
* Git tracks content rather than files
* Branches are lightweight and merging is easy, and I mean really easy.
* It's distributed, basically every repository is a branch. It's much easier to develop concurrently and collaboratively than with Subversion, in my opinion. It also makes offline development possible.
* It doesn't impose any workflow, as seen on the above linked website, there are many workflows possible with Git. A Subversion-style workflow is easily mimicked.
* Git repositories are much smaller in file size than Subversion repositories. There's only one ".git" directory, as opposed to dozens of ".svn" directories (note Subversion 1.7 and higher now uses a single directory like Git.)
* The staging area is awesome, it allows you to see the changes you will commit, commit partial changes and do various other stuff.
* Stashing is invaluable when you do "chaotic" development, or simply want to fix a bug while you're still working on something else (on a different branch).
* You can rewrite history, which is great for preparing patch sets and fixing your mistakes (before you publish the commits)
* … and a lot more.
There are some disadvantages:
* There aren't many good GUIs for it yet. It's new and Subversion has been around for a lot longer, so this is natural as there are a few interfaces in development. Some good ones include TortoiseGit and GitHub for Mac.
* Partial checkouts/clones of repositories are not possible at the moment (I read that it's in development). However, there is submodule support. Git 1.7+ supports sparse checkouts.
* It might be harder to learn, even though I did not find this to be the case (about a year ago). Git has recently improved its interface and is quite user friendly.
In the most simplistic usage, Subversion and Git are pretty much the same. There isn't much difference between:
svn checkout svn://foo.com/bar bar
cd bar
# edit
svn commit -m "foo"
and
git clone git@github.com:foo/bar.git
cd bar
# edit
git commit -a -m "foo"
git push
Where Git really shines is branching and working with other people.
A: Git is not better than Subversion. But is also not worse. It's different.
The key difference is that it is decentralized. Imagine you are a developer on the road, you develop on your laptop and you want to have source control so that you can go back 3 hours.
With Subversion, you have a problem: the SVN repository may be in a location you can't reach (inside your company's network, and you don't have internet access at the moment), so you cannot commit. If you want to make a copy of your code, you have to literally copy/paste it.
With Git, you do not have this problem. Your local copy is a repository, and you can commit to it and get all benefits of source control. When you regain connectivity to the main repository, you can commit against it.
This looks good at first, but just keep in mind the added complexity to this approach.
Git seems to be the "new, shiny, cool" thing. It's by no means bad (there is a reason Linus wrote it for the Linux Kernel development after all), but I feel that many people jump on the "Distributed Source Control" train just because it's new and is written by Linus Torvalds, without actually knowing why/if it's better.
Subversion has Problems, but so does Git, Mercurial, CVS, TFS or whatever.
Edit: So this answer is now a year old and still generates many upvotes, so I thought I'll add some more explanations. In the last year since writing this, Git has gained a lot of momentum and support, particularly since sites like GitHub really took off. I'm using both Git and Subversion nowadays and I'd like to share some personal insight.
First of all, Git can be really confusing at first when working decentralized. What is a remote? and How to properly set up the initial repository? are two questions that come up at the beginning, especially compared to SVN's simple "svnadmin create", Git's "git init" can take the parameters --bare and --shared which seems to be the "proper" way to set up a centralized repository. There are reasons for this, but it adds complexity. The documentation of the "checkout" command is very confusing to people changing over - the "proper" way seems to be "git clone", while "git checkout" seems to switch branches.
Git REALLY shines when you are decentralized. I have a server at home and a Laptop on the road, and SVN simply doesn't work well here. With SVN, I can't have local source control if I'm not connected to the repository (Yes, I know about SVK or about ways to copy the repo). With Git, that's the default mode anyway. It's an extra command though (git commit commits locally, whereas git push origin master pushes the master branch to the remote named "origin").
As said above: Git adds complexity. Two modes of creating repositories, checkout vs. clone, commit vs. push... You have to know which commands work locally and which work with "the server" (I'm assuming most people still like a central "master-repository").
Also, the tooling is still insufficient, at least on Windows. Yes, there is a Visual Studio AddIn, but I still use git bash with msysgit.
SVN has the advantage that it's MUCH simpler to learn: there is your repository, all changes go towards it, and if you know how to create, commit and checkout you're ready to go; you can pick up stuff like branching, update etc. later on.
Git has the advantage that it's MUCH better suited if some developers are not always connected to the master repository. Also, it's much faster than SVN. And from what I hear, branching and merging support is a lot better (which is to be expected, as these are the core reasons it was written).
This also explains why it gains so much buzz on the Internet, as Git is perfectly suited for Open Source projects: Just Fork it, commit your changes to your own Fork, and then ask the original project maintainer to pull your changes. With Git, this just works. Really, try it on Github, it's magic.
What I also see are Git-SVN Bridges: The central repository is a Subversion repo, but developers locally work with Git and the bridge then pushes their changes to SVN.
But even with this lengthy addition, I still stand by my core message: Git is not better or worse, it's just different. If you have the need for "Offline Source Control" and the willingness to spend some extra time learning it, it's fantastic. But if you have a strictly centralized Source Control and/or are struggling to introduce Source Control in the first place because your co-workers are not interested, then the simplicity and excellent tooling (at least on Windows) of SVN shine.
A: Google Tech Talk: Linus Torvalds on git
http://www.youtube.com/watch?v=4XpnKHJAok8
The Git Wiki's comparison page
http://git.or.cz/gitwiki/GitSvnComparsion
A: Git and DVCS in general is great for developers doing a lot of coding independently of each other because everyone has their own branch. If you need a change from someone else, though, she has to commit to her local repo and then she must push that changeset to you or you must pull it from her.
My own reasoning also makes me think DVCS makes things harder for QA and release management if you do things like centralized releases. Someone has to be responsible for doing that push/pull from everyone else's repository, resolving any conflicts that would have been resolved at initial commit time before, then doing the build, and then having all the other developers re-sync their repos.
All of this can be addressed with human processes, of course; DVCS just broke something that was fixed by centralized version control in order to provide some new conveniences.
A: I like Git because it actually helps developer-to-developer communication on a medium to large team. As a distributed version control system, through its push/pull system, it helps developers to create a source code ecosystem which helps to manage a large pool of developers working on a single project.
For example, say you trust 5 developers and only pull code from their repositories. Each of those developers has their own trust network from which they pull code. Thus the development is based on that trust fabric of developers, where code responsibility is shared among the development community.
Of course there are other benefits which are mentioned in other answers here.
A: A few answers have alluded to these, but I want to make 2 points explicit:
1) The ability to do selective commits (for example, git add --patch). If your working directory contains multiple changes that are not part of the same logical change, Git makes it very easy to make a commit that includes only a portion of the changes. With Subversion, it is difficult.
2) The ability to commit without making the change public. In Subversion, any commit is immediately public, and thus irrevocable. This greatly limits the ability of the developer to "commit early, commit often".
Git is more than just a VCS; it's also a tool for developing patches. Subversion is merely a VCS.
A: I think Subversion is fine.. until you start merging.. or doing anything complicated.. or doing anything Subversion thinks is complicated (like doing queries to find out which branches messed with a particular file, where a change actually comes from, detecting copy&pastes, etc)...
I disagree with the winning answer saying the main benefit of Git is offline work - it's certainly useful, but it's more like an extra for my use case. (SVK can work offline too; still, there is no question for me which one to invest my learning time in.)
It's just that it's incredibly powerful and fast and, well, after getting used to the concepts, very useful (yes, in that sense: user friendly).
For more details on a merging story, see this :
Using git-svn (or similar) *just* to help out with an svn merge?
A: Thanks to the fact that it doesn't need to communicate with a central server constantly, pretty much every command runs in less than a second (obviously git push/pull/fetch are slower simply because they have to initialise SSH connections). Branching is far, far easier (one simple command to branch, one simple command to merge).
A: I absolutely love being able to manage local branches of my source code in Git without muddying up the water of the central repository. In many cases I'll checkout code from the Subversion server and run a local Git repository just to be able to do this. It's also great that initializing a Git repository doesn't pollute the filesystem with a bunch of annoying .svn folders everywhere.
And as far as Windows tool support, TortoiseGit handles the basics very well, but I still prefer the command line unless I want to view the log. I really like the way Tortoise{Git|SVN} helps when reading commit logs.
A: This is the wrong question to be asking. It's all too easy to focus on git's warts and formulate an argument about why subversion is ostensibly better, at least for some use cases. The fact that git was originally designed as a low-level version control construction set and has a baroque linux-developer-oriented interface makes it easier for the holy wars to gain traction and perceived legitimacy. Git proponents bang the drum with millions of workflow advantages, which svn guys proclaim unnecessary. Pretty soon the whole debate is framed as centralized vs distributed, which serves the interests of the enterprise svn tool community. These companies, which typically put out the most convincing articles about subversion's superiority in the enterprise, are dependent on the perceived insecurity of git and the enterprise-readiness of svn for the long-term success of their products.
But here's the problem: Subversion is an architectural dead-end.
Whereas you can take git and build a centralized subversion replacement quite easily, despite being around for more than twice as long svn has never been able to get even basic merge-tracking working anywhere near as well as it does in git. One basic reason for this is the design decision to make branches the same as directories. I don't know why they went this way originally; it certainly makes partial checkouts very simple. Unfortunately it also makes it impossible to track history properly. Now obviously you are supposed to use subversion repository layout conventions to separate branches from regular directories, and svn uses some heuristics to make things work for the daily use cases. But all this is just papering over a very poor and limiting low-level design decision. Being able to do a repository-wise diff (rather than a directory-wise diff) is basic and critical functionality for a version control system, and greatly simplifies the internals, making it possible to build smarter and useful features on top of it. You can see in the amount of effort that has been put into extending subversion, and yet how far behind it is from the current crop of modern VCSes in terms of fundamental operations like merge resolution.
Now here's my heart-felt and agnostic advice for anyone who still believes Subversion is good enough for the foreseeable future:
Subversion will never catch up to the newer breeds of VCSes that have learned from the mistakes of RCS and CVS; it is a technical impossibility unless they retool the repository model from the ground up, but then it wouldn't really be svn, would it? Regardless of how much you think you don't need the capabilities of a modern VCS, your ignorance will not protect you from Subversion's pitfalls, many of which are situations that are impossible, or easily resolved, in other systems.
It is extremely rare that the technical inferiority of a solution is so clear-cut as it is with svn, certainly I would never state such an opinion about win-vs-linux or emacs-vs-vi, but in this case it is so clearcut, and source control is such a fundamental tool in the developer's arsenal, that I feel it must be stated unequivocally. Regardless of the requirement to use svn for organizational reasons, I implore all svn users not to let their logical mind construct a false belief that more modern VCSes are only useful for large open-source projects. Regardless of the nature of your development work, if you are a programmer, you will be a more effective programmer if you learn how to use better-designed VCSes, whether it be Git, Mercurial, Darcs, or many others.
A: Well, it's distributed. Benchmarks indicate that it's considerably faster (given its distributed nature, operations like diffs and logs are all local so of course it's blazingly faster in this case), and working folders are smaller (which still blows my mind).
When you're working on subversion, or any other client/server revision control system, you essentially create working copies on your machine by checking-out revisions. This represents a snapshot in time of what the repository looks like. You update your working copy via updates, and you update the repository via commits.
With a distributed version control, you don't have a snapshot, but rather the entire codebase. Wanna do a diff with a 3 month old version? No problem, the 3 month old version is still on your computer. This doesn't only mean things are way faster, but if you're disconnected from your central server, you can still do many of the operations you're used to. In other words, you don't just have a snapshot of a given revision, but the entire codebase.
You'd think that Git would take up a bunch of space on your harddrive, but from a couple benchmarks I've seen, it actually takes less. Don't ask me how. I mean, it was built by Linus, he knows a thing or two about filesystems I guess.
A: The main points I like about DVCS are these:
* You can commit broken things. It doesn't matter, because other people won't see them until you publish. Publish time is different from commit time.
* Because of this you can commit more often.
* You can merge complete functionality. This functionality will have its own branch. All commits on this branch will be related to this functionality. You can do this with a CVCS too, but with a DVCS it's the default.
* You can search your history (find when a function changed)
* You can undo a pull if someone screws up the main repository; you don't need to fix the errors. Just clear the merge.
* When you need source control in any directory, just do: git init . and you can commit, undo changes, etc...
* It's fast (even on Windows)
The main reason for a relatively big project is the improved communication created by point 3. Others are nice bonuses.
A: Subversion is very easy to use. In the last few years I have never found a problem or had something not work as expected. Also there are many excellent GUI tools, and support for SVN integration is widespread.
With Git you get a more flexible VCS. You can use it the same way as SVN with a remote repository where you commit all changes. But you can also use it mostly offline and only push the changes from time to time to the remote repository.
But Git is more complex and has a steeper learning curve. In the beginning I found myself committing to wrong branches, creating branches indirectly, or getting error messages with little information about the mistake, where I had to search Google to get better information.
Some easy things like substitution of markers ($Id$) don't work, but Git has a very flexible filtering and hook mechanism to merge in your own scripts, so you get everything you need and more - but it takes more time and reading of the documentation ;)
If you work mostly offline with your local repository, you have no backup if something is lost on your local machine. With SVN you are mostly working with a remote repository, which is at the same time your backup on another server...
Git can work in the same way, but this was not Linus's main goal - to have something like an SVN2. It was designed for the Linux kernel developers and the needs of a distributed version control system.
Is Git better than SVN? Developers who need only some version history and a backup mechanism have a good and easy life with SVN. Developers who work often with branches, test more versions at the same time or work mostly offline can benefit from the features of Git. There are some very useful features like stashing, not found in SVN, which can make life easier. But on the other hand not all people will need all features. So I cannot see the death of SVN.
Git needs better documentation and the error reporting must be more helpful. Also, useful GUIs are still rare. So far I have only found one GUI for Linux with support for most Git features (git-cola). Eclipse integration works, but it's not officially released and there is no official update site (only some external update site with periodic builds from the trunk http://www.jgit.org/updates)
So the most popular way to use Git these days is the command line.
A: Eric Sink from SourceGear wrote series of articles on differences between distributed and nondistributed version controls systems. He compares pros and cons of most popular version control systems. Very interesting reading.
Articles can be found on his blog, www.ericsink.com:
* Read the Diffs
* Git is the C of Version Control Tools
* On Git's lack of respect for immutability and the Best Practices for a DVCS
* DVCS and DAGs, Part 1
* DVCS and DAGs, Part 2
* DVCS and Bug Tracking
* Merge History, DAGs and Darcs
* Why is Git so Fast?
* Mercurial, Subversion, and Wesley Snipes
A: For people looking for a good Git GUI, Syntevo SmartGit might be a good solution. It's proprietary, but free for non-commercial use, runs on Windows/Mac/Linux and even supports SVN using some kind of git-svn bridge, I think.
A: The funny thing is:
I host projects in Subversion Repos, but access them via the Git Clone command.
Please read Develop with Git on a Google Code Project
Although Google Code natively speaks Subversion, you can easily use Git during development. Searching for "git svn" suggests this practice is widespread, and we too encourage you to experiment with it.
Using Git on an SVN repository gives me benefits:
* I can work distributed on several machines, committing and pulling from and to them
* I have a central backup/public SVN repository for others to check out
* And they are free to use Git for their own
A: With Git, you can do practically anything offline, because everybody has their own repository.
Making branches and merging between branches is really easy.
Even if you don't have commit rights for a project, you can still have your own repository online, and publish "push requests" for your patches. Everybody who likes your patches can pull them into their project, including the official maintainers.
It's trivial to fork a project, modify it, and still keep merging in the bugfixes from the HEAD branch.
Git works for the Linux kernel developers. That means it is really fast (it has to be), and scales to thousands of contributors. Git also uses less space (up to 30 times less space for the Mozilla repository).
Git is very flexible, very TIMTOWTDI (There is more than one way to do it). You can use whatever workflow you want, and Git will support it.
Finally, there's GitHub, a great site for hosting your Git repositories.
Drawbacks of Git:
* it's much harder to learn, because Git has more concepts and more commands.
* revisions don't have version numbers like in subversion
* many Git commands are cryptic, and error messages are very user-unfriendly
* it lacks a good GUI (such as the great TortoiseSVN)
A: Other answers have done a good job of explaining the core features of Git (which are great). But there's also so many little ways that Git behaves better and helps keep my life more sane. Here are some of the little things:
* Git has a 'clean' command. SVN desperately needs this command, considering how frequently it will dump extra files on your disk.
* Git has the 'bisect' command. It's nice.
* SVN creates .svn directories in every single folder (Git only creates one .git directory). Every script you write, and every grep you do, will need to be written to ignore these .svn directories. You also need an entire command ("svn export") just to get a sane copy of your files.
* In SVN, each file & folder can come from a different revision or branch. At first, it sounds nice to have this freedom. But what this actually means is that there are a million different ways for your local checkout to be completely screwed up. (For example, if "svn switch" fails halfway through, or if you enter a command wrong.) And the worst part is: if you ever get into a situation where some of your files are coming from one place, and some of them from another, "svn status" will tell you that everything is normal. You'll need to do "svn info" on each file/directory to discover how weird things are. If "git status" tells you that things are normal, then you can trust that things really are normal.
* You have to tell SVN whenever you move or delete something. Git will just figure it out.
* Ignore semantics are easier in Git. If you ignore a pattern (such as *.pyc), it will be ignored for all subdirectories. (But if you really want to ignore something for just one directory, you can.) With SVN, it seems that there is no easy way to ignore a pattern across all subdirectories.
* Another item involving ignore files: Git makes it possible to have "private" ignore settings (using the file .git/info/exclude), which won't affect anyone else.
A: All the answers here are, as expected, programmer-centric, but what happens if your company uses revision control outside of source code? There are plenty of documents which aren't source code but which benefit from version control, and they should live close to the code and not in another CMS. Most programmers don't work in isolation - we work for companies as part of a team.
With that in mind, compare ease of use, in both client tooling and training, between Subversion and git. I can't see a scenario where any distributed revision control system is going to be easier to use or explain to a non-programmer. I'd love to be proven wrong, because then I'd be able to evaluate git and actually have a hope of it being accepted by people who need version control who aren't programmers.
Even then, if asked by management why we should move from a centralised to distributed revision control system, I'd be hard pressed to give an honest answer, because we don't need it.
Disclaimer: I became interested in Subversion early on (around v0.29) so obviously I'm biased, but the companies I've worked for since that time are benefiting from my enthusiasm because I've encouraged and supported its use. I suspect this is how it happens with most software companies. With so many programmers jumping on the git bandwagon, I wonder how many companies are going to miss out on the benefits of using version control outside of source code? Even if you have separate systems for different teams, you're missing out on some of the benefits, such as (unified) issue tracking integration, whilst increasing maintenance, hardware and training requirements.
A: First, concurrent version control seems like an easy problem to solve. It's not at all. Anyway...
SVN is quite non-intuitive. Git is even worse. [sarcastic-speculation] This might be because developers who like hard problems, like concurrent version control, don't have much interest in making a good UI. [/sarcastic-speculation]
SVN supporters think they don't need a distributed version-control system. I thought that too. But now that we use Git exclusively, I'm a believer. Now version control works for me AND the team/project instead of just working for the project. When I need a branch, I branch. Sometimes it's a branch that has a corresponding branch on the server, and sometimes it does not. Not to mention all the other advantages that I'll have to go study up on (thanks in part to the arcane and absurd lack of UI that is a modern version control system).
A: Git in Windows is quite well supported now.
Check out GitExtensions: http://code.google.com/p/gitextensions/
and the manual for a better Windows Git experience.
A: http://subversion.wandisco.com/component/content/article/1/40.html
I think it's fairly safe to say that amongst developers, the SVN vs. Git argument has been raging for some time now, with everyone having their own view on which is better. This was even brought up in one of the questions during our Webinar on Subversion in 2010 and Beyond.
Hyrum Wright, our Director of Open Source and the President for the Subversion Corporation talks about the differences between Subversion and Git, along with other Distributed Version Control Systems (DVCS).
He also talks about the upcoming changes in Subversion, such as Working Copy Next Generation (WC-NG), which he believes will cause a number of Git users to convert back to Subversion.
Have a watch of his video and let us know what you think by either commenting on this blog, or by posting in our forums. Registration is simple and will only take a moment!
A: I have been dwelling in Git land lately, and I like it for personal projects, but I wouldn't yet be able to switch work projects over from Subversion, given the change in thinking required from staff and the lack of pressing benefits. Moreover, the biggest project we run in-house is extremely dependent on svn:externals which, from what I've seen so far, does not work so nicely and seamlessly in Git.
A: Why I think Subversion is better than Git (at least for the projects I work on), mainly due to its usability, and simpler workflow:
http://www.databasesandlife.com/why-subversion-is-better-than-git/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "393"
} |
Q: Are PHP Variables passed by value or by reference? Are PHP variables passed by value or by reference?
A: In PHP, by default, objects are passed to functions as a handle that refers to the original object (not as a full copy), so changes to the object's properties are visible to the caller.
See this example:
class X {
var $abc = 10;
}
class Y {
var $abc = 20;
function changeValue($obj)
{
$obj->abc = 30;
}
}
$x = new X();
$y = new Y();
echo $x->abc; //outputs 10
$y->changeValue($x);
echo $x->abc; //outputs 30
Now see this:
class X {
var $abc = 10;
}
class Y {
var $abc = 20;
function changeValue($obj)
{
$obj = new Y();
}
}
$x = new X();
$y = new Y();
echo $x->abc; //outputs 10
$y->changeValue($x);
echo $x->abc; //outputs 10 not 20 same as java does.
Now see this:
class X {
var $abc = 10;
}
class Y {
var $abc = 20;
function changeValue(&$obj)
{
$obj = new Y();
}
}
$x = new X();
$y = new Y();
echo $x->abc; //outputs 10
$y->changeValue($x);
echo $x->abc; //outputs 20 not possible in java.
I hope you can understand this.
A: It seems a lot of people get confused by the way objects are passed to functions and what passing by reference means. Object are still passed by value, it's just the value that is passed in PHP5 is a reference handle. As proof:
<?php
class Holder {
private $value;
public function __construct($value) {
$this->value = $value;
}
public function getValue() {
return $this->value;
}
}
function swap($x, $y) {
$tmp = $x;
$x = $y;
$y = $tmp;
}
$a = new Holder('a');
$b = new Holder('b');
swap($a, $b);
echo $a->getValue() . ", " . $b->getValue() . "\n";
Outputs:
a, b
To pass by reference means we can modify the variables that are seen by the caller, which clearly the code above does not do. We need to change the swap function to:
<?php
function swap(&$x, &$y) {
$tmp = $x;
$x = $y;
$y = $tmp;
}
$a = new Holder('a');
$b = new Holder('b');
swap($a, $b);
echo $a->getValue() . ", " . $b->getValue() . "\n";
Outputs:
b, a
in order to pass by reference.
A: You can do it either way.
Put an '&' symbol in front and the variable you are passing becomes one and the same as its origin i.e. you can pass by reference, rather than make a copy of it.
so
$fred = 5;
$larry = & $fred;
$larry = 8;
echo $fred;//this will output 8, as larry and fred are now the same reference.
A: TL;DR: PHP supports both pass by value and pass by reference. References are declared using an ampersand (&); this is very similar to how C++ does it. When the formal parameter of a function is not declared with an ampersand (i.e., it's not a reference), everything is passed by value, including objects. There is no distinction between how objects and primitives are passed around. The key is to understand what gets passed along when you pass in objects to a function. This is where understanding pointers is invaluable.
For anyone who comes across this in the future, I want to share this gem from the PHP docs, posted by an anonymous user:
There seems to be some confusion here. The distinction between pointers and references is not particularly helpful.
The behavior in some of the "comprehensive" examples already posted can be explained in simpler unifying terms. Hayley's code, for example, is doing EXACTLY what you should expect it should. (Using >= 5.3)
First principle:
A pointer stores a memory address to access an object. Any time an object is assigned, a pointer is generated. (I haven't delved TOO deeply into the Zend engine yet, but as far as I can see, this applies)
2nd principle, and source of the most confusion:
Passing a variable to a function is done by default as a value pass, ie, you are working with a copy. "But objects are passed by reference!" A common misconception both here and in the Java world. I never said a copy OF WHAT. The default passing is done by value. Always. WHAT is being copied and passed, however, is the pointer. When using the "->", you will of course be accessing the same internals as the original variable in the caller function. Just using "=" will only play with copies.
3rd principle:
"&" automatically and permanently sets another variable name/pointer to the same memory address as something else until you decouple them. It is correct to use the term "alias" here. Think of it as joining two pointers at the hip until forcibly separated with "unset()". This functionality exists both in the same scope and when an argument is passed to a function. Often the passed argument is called a "reference," due to certain distinctions between "passing by value" and "passing by reference" that were clearer in C and C++.
Just remember: pointers to objects, not objects themselves, are passed to functions. These pointers are COPIES of the original unless you use "&" in your parameter list to actually pass the originals. Only when you dig into the internals of an object will the originals change.
And here's the example they provide:
<?php
//The two are meant to be the same
$a = "Clark Kent"; //a==Clark Kent
$b = &$a; //The two will now share the same fate.
$b="Superman"; // $a=="Superman" too.
echo $a;
echo $a="Clark Kent"; // $b=="Clark Kent" too.
unset($b); // $b divorced from $a
$b="Bizarro";
echo $a; // $a=="Clark Kent" still, since $b is a free agent pointer now.
//The two are NOT meant to be the same.
$c="King";
$d="Pretender to the Throne";
echo $c."\n"; // $c=="King"
echo $d."\n"; // $d=="Pretender to the Throne"
swapByValue($c, $d);
echo $c."\n"; // $c=="King"
echo $d."\n"; // $d=="Pretender to the Throne"
swapByRef($c, $d);
echo $c."\n"; // $c=="Pretender to the Throne"
echo $d."\n"; // $d=="King"
function swapByValue($x, $y){
$temp=$x;
$x=$y;
$y=$temp;
//All this beautiful work will disappear
//because it was done on COPIES of pointers.
//The originals pointers still point as they did.
}
function swapByRef(&$x, &$y){
$temp=$x;
$x=$y;
$y=$temp;
//Note the parameter list: now we switched 'em REAL good.
}
?>
I wrote an extensive, detailed blog post on this subject for JavaScript, but I believe it applies equally well to PHP, C++, and any other language where people seem to be confused about pass by value vs. pass by reference.
Clearly, PHP, like C++, is a language that does support pass by reference. By default, objects are passed by value. When working with variables that store objects, it helps to see those variables as pointers (because that is fundamentally what they are, at the assembly level). If you pass a pointer by value, you can still "trace" the pointer and modify the properties of the object being pointed to. What you cannot do is have it point to a different object. Only if you explicitly declare a parameter as being passed by reference will you be able to do that.
A: Variables containing primitive types are passed by value in PHP5. Variables containing objects are passed by reference. There's quite an interesting article from Linux Journal from 2006 which mentions this and other OO differences between 4 and 5.
http://www.linuxjournal.com/article/9170
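A minimal sketch of that distinction (the function and property names below are made up purely for illustration, and this describes PHP 5 behaviour):
<?php
function changePrimitive($val) {
    $val = 42; // only the local copy changes
}
function changeObjectProperty($obj) {
    $obj->prop = 42; // the copied handle still points at the caller's object
}
$num = 1;
changePrimitive($num);
echo $num; // still 1 - primitives are copied
$o = new stdClass();
$o->prop = 1;
changeObjectProperty($o);
echo $o->prop; // 42 - the object's contents changed through the handle
?>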
A: It's by value according to the PHP Documentation.
By default, function arguments are passed by value (so that if the value of the argument within the function is changed, it does not get changed outside of the function). To allow a function to modify its arguments, they must be passed by reference.
To have an argument to a function always passed by reference, prepend an ampersand (&) to the argument name in the function definition.
<?php
function add_some_extra(&$string)
{
$string .= 'and something extra.';
}
$str = 'This is a string, ';
add_some_extra($str);
echo $str; // outputs 'This is a string, and something extra.'
?>
A: http://www.php.net/manual/en/migration5.oop.php
In PHP 5 there is a new Object Model. PHP's handling of objects has been completely rewritten, allowing for better performance and more features. In previous versions of PHP, objects were handled like primitive types (for instance integers and strings). The drawback of this method was that semantically the whole object was copied when a variable was assigned, or passed as a parameter to a method. In the new approach, objects are referenced by handle, and not by value (one can think of a handle as an object's identifier).
A: PHP variables are assigned by value, passed to functions by value and when containing/representing objects are passed by reference. You can force variables to pass by reference using an '&'.
Assigned by value/reference example:
$var1 = "test";
$var2 = $var1;
$var2 = "new test";
$var3 = &$var2;
$var3 = "final test";
print ("var1: $var1, var2: $var2, var3: $var3);
output:
var1: test, var2: final test, var3: final test
Passed by value/reference example:
$var1 = "foo";
$var2 = "bar";
changeThem($var1, $var2);
print "var1: $var1, var2: $var2";
function changeThem($var1, &$var2){
$var1 = "FOO";
$var2 = "BAR";
}
output:
var1: foo, var2: BAR
Object variables passed by reference example:
class Foo{
public $var1;
function __construct(){
$this->var1 = "foo";
}
public function printFoo(){
print $this->var1;
}
}
$foo = new Foo();
changeFoo($foo);
$foo->printFoo();
function changeFoo($foo){
$foo->var1 = "FOO";
}
output:
FOO
(The last example could be better probably.)
A: You can pass a variable to a function by reference. This function will be able to modify the original variable.
You can define the passage by reference in the function definition:
<?php
function changeValue(&$var)
{
$var++;
}
$result=5;
changeValue($result);
echo $result; // $result is 6 here
?>
A: Objects are passed by reference in PHP 5 and by value in PHP 4.
Variables are passed by value by default!
Read here: http://www.webeks.net/programming/php/ampersand-operator-used-for-assigning-reference.html
A: class Holder
{
private $value;
public function __construct( $value )
{
$this->value = $value;
}
public function getValue()
{
return $this->value;
}
public function setValue( $value )
{
return $this->value = $value;
}
}
class Swap
{
public static function SwapObjects( Holder $x, Holder $y )
{
$tmp = $x;
$x = $y;
$y = $tmp;
}
public static function SwapValues( Holder $x, Holder $y )
{
$tmp = $x->getValue();
$x->setValue($y->getValue());
$y->setValue($tmp);
}
}
$a1 = new Holder('a');
$b1 = new Holder('b');
$a2 = new Holder('a');
$b2 = new Holder('b');
Swap::SwapValues($a1, $b1);
Swap::SwapObjects($a2, $b2);
echo 'SwapValues: ' . $a1->getValue() . ", " . $b1->getValue() . "<br>";
echo 'SwapObjects: ' . $a2->getValue() . ", " . $b2->getValue() . "<br>";
Attributes are still modifiable when not passed by reference, so beware.
Output:
SwapValues: b, a
SwapObjects: a, b
A: Regarding how objects are passed to functions, you still need to understand that without "&" you pass the function an object handle; that handle is itself passed by value, and it contains the value of a pointer. You cannot change this pointer (make the caller's variable refer to a different object) unless you pass it by reference using "&".
<?php
class Example
{
public $value;
}
function test1($x)
{
//let's say $x is 0x34313131
$x->value = 1; //will reflect outside of this function
//php use pointer 0x34313131 and search for the
//address of 'value' and change it to 1
}
function test2($x)
{
//$x is 0x34313131
$x = new Example;
//now $x is 0x88888888
//this will NOT reflect outside of this function
//you need to rewrite it as "test2(&$x)"
$x->value = 1000; //this is 1000 JUST inside this function
}
$example = new Example;
$example->value = 0;
test1($example); // $example->value changed to 1
test2($example); // $example did NOT change to a new object
// $example->value is still 1
?>
A: Actually both methods are valid, but it depends upon your requirements. Passing values by reference often makes your script slower, because references defeat PHP's copy-on-write optimisation, so considering execution time it's usually better to pass variables by value. Also, the code flow is more consistent when you pass variables by value.
A: Use this for functions when you wish to simply alter the original variable and return it again to the same variable name with its new value assigned.
function add(&$var){ // The & is before the argument $var
$var++;
}
$a = 1;
$b = 10;
add($a);
echo "a is $a,";
add($b);
echo " a is $a, and b is $b"; // Note: $a and $b are NOT referenced
A: A PHP reference is an alias, allowing two different variables to write to the same value.
And in PHP, if you have a variable that contains an object, that variable does not contain the object itself. Instead, it contains an identifier for that object. The object accessor will use the identifier to find the actual object. So when we use the object as an argument in function or assign it to another variable, we will be copying the identifier that points to the object itself.
https://hsalem.com/posts/you-think-you-know-php.html
class Type {}
$x = new Type();
$y = $x;
$y = "New value";
var_dump($x); // Will print the object.
var_dump($y); // Will print the "New value"
$z = &$x; // $z is a reference of $x
$z = "New value";
var_dump($x); // Will print "New value"
var_dump($z); // Will print "New value"
A: Depends on the version, 4 is by value, 5 is by reference.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "304"
} |
Q: How do you debug PHP scripts? How do you debug PHP scripts?
I am aware of basic debugging such as using the Error Reporting. The breakpoint debugging in PHPEclipse is also quite useful.
What is the best (in terms of fast and easy) way to debug in phpStorm or any other IDE?
A: PhpEdit has a built in debugger, but I usually end up using echo(); and print_r(); the old fashioned way!!
A: You can use FirePHP, an add-on to Firebug, to debug PHP in the same environment as JavaScript.
I also use Xdebug, mentioned earlier, for profiling PHP.
A: For the really gritty problems that would be too time-consuming to figure out with print_r/echo, I use my IDE's (PhpEd) debugging feature. Unlike other IDEs I've used, PhpEd requires pretty much no setup. The only reason I don't use it for every problem I encounter is that it's painfully slow. I'm not sure whether that slowness is specific to PhpEd or applies to any php debugger. PhpEd is not free but I believe it uses one of the open-source debuggers (like XDebug previously mentioned) anyway. The benefit with PhpEd, again, is that it requires no setup, something I have found really pretty tedious in the past.
A: Manual debugging is generally quicker for me - var_dump() and debug_print_backtrace() are all the tools you need to arm your logic with.
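A tiny sketch of that style in practice (the function and array below are invented just to show where the calls go):
<?php
function process($order) {
    var_dump($order);        // inspect the structure at this point
    debug_print_backtrace(); // show the call chain that got us here
}
process(array('id' => 7, 'total' => 19.99));
?>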
A: This is my little debug environment:
error_reporting(-1);
assert_options(ASSERT_ACTIVE, 1);
assert_options(ASSERT_WARNING, 0);
assert_options(ASSERT_BAIL, 0);
assert_options(ASSERT_QUIET_EVAL, 0);
assert_options(ASSERT_CALLBACK, 'assert_callback');
set_error_handler('error_handler');
set_exception_handler('exception_handler');
register_shutdown_function('shutdown_handler');
function assert_callback($file, $line, $message) {
throw new Customizable_Exception($message, null, $file, $line);
}
function error_handler($errno, $error, $file, $line, $vars) {
if ($errno === 0 || ($errno & error_reporting()) === 0) {
return;
}
throw new Customizable_Exception($error, $errno, $file, $line);
}
function exception_handler(Exception $e) {
// Do what ever!
echo '<pre>', print_r($e, true), '</pre>';
exit;
}
function shutdown_handler() {
try {
if (null !== $error = error_get_last()) {
throw new Customizable_Exception($error['message'], $error['type'], $error['file'], $error['line']);
}
} catch (Exception $e) {
exception_handler($e);
}
}
class Customizable_Exception extends Exception {
public function __construct($message = null, $code = null, $file = null, $line = null) {
if ($code === null) {
parent::__construct($message);
} else {
parent::__construct($message, $code);
}
if ($file !== null) {
$this->file = $file;
}
if ($line !== null) {
$this->line = $line;
}
}
}
A: Xdebug and the DBGp plugin for Notepad++ for heavy duty bug hunting, FirePHP for lightweight stuff. Quick and dirty? Nothing beats dBug.
A: Well, to some degree it depends on where things are going south. That's the first thing I try to isolate, and then I'll use echo/print_r() as necessary.
NB: You guys know that you can pass true as a second argument to print_r() and it'll return the output instead of printing it? E.g.:
echo "<pre>".print_r($var, true)."</pre>";
A: I often use CakePHP when Rails isn't possible. To debug errors I usually find the error.log in the tmp folder and tail it in the terminal with the command...
tail -f app/tmp/logs/error.log
It gives you a running log from Cake of what is going on, which is pretty handy. If you want to output something to it mid-code you can use:
$this->log('xxxx');
This can usually give you a good idea of what is going on/wrong.
A: XDebug is essential for development. I install it before any other extension. It gives you stack traces on any error and you can enable profiling easily.
For a quick look at a data structure use var_dump(). Don't use print_r() because you'll have to surround it with <pre> and it only prints one var at a time.
<?php var_dump(__FILE__, __LINE__, $_REQUEST); ?>
For a real debugging environment the best I've found is Komodo IDE but it costs $$.
A: print_r( debug_backtrace() );
or something like that :-)
A: Komodo IDE works well with xdebug, even for remote debugging. It needs a minimal amount of configuration. All you need is a version of php that Komodo can use locally to step through the code on a breakpoint. If you have the script imported into a Komodo project, then you can set breakpoints with a mouse-click just how you would set them inside eclipse for debugging a java program.
Remote debugging is obviously more tricky to get it to work correctly ( you might have to map the remote url with a php script in your workspace ) than a local debugging setup which is pretty easy to configure if you are on a MAC or a linux desktop.
A: Nusphere is also a good debugger for php
nusphere
A: There are many PHP debugging techniques that can save you countless hours when coding. An effective but basic debugging technique is to simply turn on error reporting. Another slightly more advanced technique involves using print statements, which can help pinpoint more elusive bugs by displaying what is actually going onto the screen. PHPeclipse is an Eclipse plug-in that can highlight common syntax errors and can be used in conjunction with a debugger to set breakpoints.
display_errors = Off
error_reporting = E_ALL
display_errors = On
and also used
error_log();
console_log();
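If you prefer switching error reporting on at runtime rather than editing php.ini, a rough (development-only) equivalent of the settings above might be:
<?php
// show everything while developing
error_reporting(E_ALL);
ini_set('display_errors', '1');
// or send problems to a log file instead of the screen
ini_set('log_errors', '1');
ini_set('error_log', '/tmp/php_debug.log'); // example path only
?>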
A: PhpEd is really good. You can step into/over/out of functions. You can run ad-hoc code, inspect variables, change variables. It is amazing.
A: 1) I use print_r(). In TextMate, I have a snippet for 'pre' which expands to this:
echo "<pre>";
print_r();
echo "</pre>";
2) I use Xdebug, but haven't been able to get the GUI to work right on my Mac. It at least prints out a readable version of the stack trace.
A: I've used the Zend Studio (5.5), together with Zend Platform. That gives proper debugging, breakpoints/stepping over the code etc., although at a price.
A: In all honesty, a combination of print and print_r() to print out the variables. I know that many prefer to use other more advanced methods but I find this the easiest to use.
I will say that I didn't fully appreciate this until I did some Microprocessor programming at Uni and was not able to use even this.
A: Try Eclipse PDT to setup an Eclipse environment that has debugging features like you mentioned. The ability to step into the code is a much better way to debug than the old method of var_dump and print at various points to see where your flow goes wrong. When all else fails though and all I have is SSH and vim I still var_dump()/die() to find where the code goes south.
A: Xdebug, by Derick Rethans, is very good. I used it some time ago and found it was not so easy to install. Once you're done, you won't understand how you managed without it :-)
There is a good article on Zend Developer Zone (installing on Linux doesn't seem any easier) and even a Firefox plugin, which I never used.
A: I use Netbeans with XDebug.
Check it out at its website for docs on how to configure it.
http://php.netbeans.org/
A: I use Netbeans with XDebug and the Easy XDebug FireFox Add-on
The add-on is essential when you debug MVC projects, because the normal way XDebug runs in Netbeans is to register the dbug session via the url. With the add-on installed in FireFox, you would set your Netbeans project properties -> Run Configuratuion -> Advanced and select "Do Not Open Web Browser" You can now set your break points and start the debugging session with Ctrl-F5 as usual. Open FireFox and right-click the Add-on icon in the right bottom corner to start monitoring for breakpoints. When the code reaches the breakpoint it will stop and you can inspect your variable states and call-stack.
A: Output buffering is very useful if you don't want to mess up your output. I do this in a one-liner which I can comment/uncomment at will
ob_start(); var_dump($var); user_error(ob_get_contents()); ob_get_clean(); // $var is whatever you want to inspect
A: In a production environment, I log relevant data to the server's error log with error_log().
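A minimal sketch of that approach (the message and context values are just examples):
<?php
// goes to the server's error log, nothing is shown to the user
$context = array('user_id' => 42, 'order_id' => 'A-1001'); // example values
error_log('checkout failed: ' . json_encode($context));
?>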
A: I use Zend Studio for Eclipse with the built-in debugger. It's still slow compared to debugging with Eclipse PDT with XDebug. Hopefully they will fix those issues; the speed has improved over the recent releases but stepping over things still takes 2-3 seconds.
The zend firefox toolbar really makes things easy (debug next page, current page, etc). Also it provides a profiler that will benchmark your code and provide pie-charts, execution time, etc.
A: Most bugs can be found easily by simply var_dumping some key variables, but it obviously depends on what kind of application you develop.
For more complex algorithms the step/breakpoint/watch functions are very helpful (if not necessary).
A: PHP DBG
The Interactive Stepthrough PHP Debugger implemented as a SAPI module which can give you complete control over the environment without impacting the functionality or performance of your code. It aims to be a lightweight, powerful, easy to use debugging platform for PHP 5.4+ and it's shipped out-of-box with PHP 5.6.
Features include:
*
*Stepthrough Debugging
*Flexible Breakpoints (Class Method, Function, File:Line, Address, Opcode)
*Easy Access to PHP with built-in eval()
*Easy Access to Currently Executing Code
*Userland API
*SAPI Agnostic - Easily Integrated
*PHP Configuration File Support
*JIT Super Globals - Set Your Own!!
*Optional readline Support - Comfortable Terminal Operation
*Remote Debugging Support - Bundled Java GUI
*Easy Operation
Home page: http://phpdbg.com/
PHP Error - Better error reporting for PHP
This is a very easy-to-use library (actually a single file) for debugging your PHP scripts.
The only thing that you need to do is to include one file as below (at the beginning on your code):
require('php_error.php');
\php_error\reportErrors();
Then all errors will give you info such as backtrace, code context, function arguments, server variables, etc.
Features include:
*
*trivial to use, it's just one file
*errors displayed in the browser for normal and ajaxy requests
*AJAX requests are paused, allowing you to automatically re-run them
*makes errors as strict as possible (encourages code quality, and tends to improve performance)
*code snippets across the whole stack trace
*provides more information (such as full function signatures)
*fixes some error messages which are just plain wrong
*syntax highlighting
*looks pretty!
*customization
*manually turn it on and off
*run specific sections without error reporting
*ignore files allowing you to avoid highlighting code in your stack trace
*application files; these are prioritized when an error strikes!
Home page: http://phperror.net/
GitHub: https://github.com/JosephLenton/PHP-Error
My fork (with extra fixes): https://github.com/kenorb-contrib/PHP-Error
DTrace
If your system supports DTrace dynamic tracing (installed by default on OS X) and your PHP is compiled with the DTrace probes enabled (--enable-dtrace) which should be by default, this command can help you to debug PHP script with no time:
sudo dtrace -qn 'php*:::function-entry { printf("%Y: PHP function-entry:\t%s%s%s() in %s:%d\n", walltimestamp, copyinstr(arg3), copyinstr(arg4), copyinstr(arg0), basename(copyinstr(arg1)), (int)arg2); }'
So given the following alias has been added into your rc files (e.g. ~/.bashrc, ~/.bash_aliases):
alias trace-php='sudo dtrace -qn "php*:::function-entry { printf(\"%Y: PHP function-entry:\t%s%s%s() in %s:%d\n\", walltimestamp, copyinstr(arg3), copyinstr(arg4), copyinstr(arg0), basename(copyinstr(arg1)), (int)arg2); }"'
you may trace your script with easy to remember alias: trace-php.
Here is more advanced dtrace script, just save it into dtruss-php.d, make it executable (chmod +x dtruss-php.d) and run:
#!/usr/sbin/dtrace -Zs
# See: https://github.com/kenorb/dtruss-lamp/blob/master/dtruss-php.d
#pragma D option quiet
php*:::compile-file-entry
{
printf("%Y: PHP compile-file-entry:\t%s (%s)\n", walltimestamp, basename(copyinstr(arg0)), copyinstr(arg1));
}
php*:::compile-file-return
{
printf("%Y: PHP compile-file-return:\t%s (%s)\n", walltimestamp, basename(copyinstr(arg0)), basename(copyinstr(arg1)));
}
php*:::error
{
printf("%Y: PHP error message:\t%s in %s:%d\n", walltimestamp, copyinstr(arg0), basename(copyinstr(arg1)), (int)arg2);
}
php*:::exception-caught
{
printf("%Y: PHP exception-caught:\t%s\n", walltimestamp, copyinstr(arg0));
}
php*:::exception-thrown
{
printf("%Y: PHP exception-thrown:\t%s\n", walltimestamp, copyinstr(arg0));
}
php*:::execute-entry
{
printf("%Y: PHP execute-entry:\t%s:%d\n", walltimestamp, basename(copyinstr(arg0)), (int)arg1);
}
php*:::execute-return
{
printf("%Y: PHP execute-return:\t%s:%d\n", walltimestamp, basename(copyinstr(arg0)), (int)arg1);
}
php*:::function-entry
{
printf("%Y: PHP function-entry:\t%s%s%s() in %s:%d\n", walltimestamp, copyinstr(arg3), copyinstr(arg4), copyinstr(arg0), basename(copyinstr(arg1)), (int)arg2);
}
php*:::function-return
{
printf("%Y: PHP function-return:\t%s%s%s() in %s:%d\n", walltimestamp, copyinstr(arg3), copyinstr(arg4), copyinstr(arg0), basename(copyinstr(arg1)), (int)arg2);
}
php*:::request-shutdown
{
printf("%Y: PHP request-shutdown:\t%s at %s via %s\n", walltimestamp, basename(copyinstr(arg0)), copyinstr(arg1), copyinstr(arg2));
}
php*:::request-startup
{
printf("%Y, PHP request-startup:\t%s at %s via %s\n", walltimestamp, basename(copyinstr(arg0)), copyinstr(arg1), copyinstr(arg2));
}
Home page: dtruss-lamp at GitHub
Here is simple usage:
*
*Run: sudo dtruss-php.d.
*On another terminal run: php -r "phpinfo();".
To test that, you can go to any docroot with index.php and run PHP builtin server by:
php -S localhost:8080
After that you can access the site at http://localhost:8080/ (or choose whatever port is convenient for you). From there access some pages to see the trace output.
Note: Dtrace is available on OS X by default, on Linux you probably need dtrace4linux or check for some other alternatives.
See: Using PHP and DTrace at php.net
SystemTap
Alternatively check for SystemTap tracing by installing SystemTap SDT development package (e.g. yum install systemtap-sdt-devel).
Here is example script (all_probes.stp) for tracing all core PHP static probe points throughout the duration of a running PHP script with SystemTap:
probe process("sapi/cli/php").provider("php").mark("compile__file__entry") {
printf("Probe compile__file__entry\n");
printf(" compile_file %s\n", user_string($arg1));
printf(" compile_file_translated %s\n", user_string($arg2));
}
probe process("sapi/cli/php").provider("php").mark("compile__file__return") {
printf("Probe compile__file__return\n");
printf(" compile_file %s\n", user_string($arg1));
printf(" compile_file_translated %s\n", user_string($arg2));
}
probe process("sapi/cli/php").provider("php").mark("error") {
printf("Probe error\n");
printf(" errormsg %s\n", user_string($arg1));
printf(" request_file %s\n", user_string($arg2));
printf(" lineno %d\n", $arg3);
}
probe process("sapi/cli/php").provider("php").mark("exception__caught") {
printf("Probe exception__caught\n");
printf(" classname %s\n", user_string($arg1));
}
probe process("sapi/cli/php").provider("php").mark("exception__thrown") {
printf("Probe exception__thrown\n");
printf(" classname %s\n", user_string($arg1));
}
probe process("sapi/cli/php").provider("php").mark("execute__entry") {
printf("Probe execute__entry\n");
printf(" request_file %s\n", user_string($arg1));
printf(" lineno %d\n", $arg2);
}
probe process("sapi/cli/php").provider("php").mark("execute__return") {
printf("Probe execute__return\n");
printf(" request_file %s\n", user_string($arg1));
printf(" lineno %d\n", $arg2);
}
probe process("sapi/cli/php").provider("php").mark("function__entry") {
printf("Probe function__entry\n");
printf(" function_name %s\n", user_string($arg1));
printf(" request_file %s\n", user_string($arg2));
printf(" lineno %d\n", $arg3);
printf(" classname %s\n", user_string($arg4));
printf(" scope %s\n", user_string($arg5));
}
probe process("sapi/cli/php").provider("php").mark("function__return") {
printf("Probe function__return: %s\n", user_string($arg1));
printf(" function_name %s\n", user_string($arg1));
printf(" request_file %s\n", user_string($arg2));
printf(" lineno %d\n", $arg3);
printf(" classname %s\n", user_string($arg4));
printf(" scope %s\n", user_string($arg5));
}
probe process("sapi/cli/php").provider("php").mark("request__shutdown") {
printf("Probe request__shutdown\n");
printf(" file %s\n", user_string($arg1));
printf(" request_uri %s\n", user_string($arg2));
printf(" request_method %s\n", user_string($arg3));
}
probe process("sapi/cli/php").provider("php").mark("request__startup") {
printf("Probe request__startup\n");
printf(" file %s\n", user_string($arg1));
printf(" request_uri %s\n", user_string($arg2));
printf(" request_method %s\n", user_string($arg3));
}
Usage:
stap -c 'sapi/cli/php test.php' all_probes.stp
See: Using SystemTap with PHP DTrace Static Probes at php.net
A: +1 for print_r(). Use it to dump out the contents of an object or variable. To make it more readable, do it with a pre tag so you don't need to view source.
echo '<pre>';
print_r($arrayOrObject);
Also var_dump($thing) - this is very useful to see the type of subthings
A: Depending on the issue I like a combination of error_reporting(E_ALL) mixed with echo tests (to find the offending line/file the error happened in initially; you KNOW it's not always the line/file php tells you, right?), IDE brace matching (to resolve "Parse error: syntax error, unexpected $end" issues), and print_r(); exit; dumps (real programmers view the source ;p).
You also can't beat phpdebug (check sourceforge) with "memory_get_usage();" and "memory_get_peak_usage();" to find the problem areas.
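A quick sketch of using those two functions as checkpoints (the labels and the fake workload are invented):
<?php
function memory_checkpoint($label) {
    printf("%s: %.2f MB (peak %.2f MB)\n",
        $label,
        memory_get_usage() / 1048576,
        memory_get_peak_usage() / 1048576);
}
memory_checkpoint('before loading data');
$rows = range(1, 100000); // stand-in for the real work
memory_checkpoint('after loading data');
?>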
A: The integrated debuggers where you can watch the values of variable change as you step through code are really cool. They do, however, require software setup on the server and a certain amount of configuration on the client. Both of which require periodic maintenance to keep in good working order.
A print_r is easy to write and is guaranteed to work in any setup.
A: Usually I find it useful to create a custom log function that can save to a file, store debug info, and eventually re-print it in a common footer.
You can also override the common Exception class, so that this type of debugging is semi-automated.
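A minimal sketch of such a helper (the file path and function name are made up; adapt them to your project):
<?php
$GLOBALS['debug_entries'] = array();
function debug_log($message) {
    // keep the entry for the footer and also persist it to a file
    $GLOBALS['debug_entries'][] = $message;
    file_put_contents('/tmp/app_debug.log', // example path
        date('c') . ' ' . print_r($message, true) . "\n",
        FILE_APPEND);
}
debug_log('query took 120 ms');
debug_log(array('user' => 42));
// in the common footer, re-print everything collected during the request:
echo '<pre>' . print_r($GLOBALS['debug_entries'], true) . '</pre>';
?>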
| {
"language": "en",
"url": "https://stackoverflow.com/questions/888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "402"
} |
Q: Internationalization in your projects How have you implemented Internationalization (i18n) in actual projects you've worked on?
I took an interest in making software cross-cultural after I read the famous post by Joel, The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!). However, I have yet to be able to take advantage of this in a real project, besides making sure I used Unicode strings where possible. But making all your strings Unicode and ensuring you understand what encoding everything you work with is in is just the tip of the i18n iceberg.
Everything I have worked on to date has been for use by a controlled set of US English speaking people, or i18n just wasn't something we had time to work on before pushing the project live. So I am looking for any tips or war stories people have about making software more localized in real world projects.
A: I worked on a project for my previous employer that used .NET, and there was a built in .resx format we used. We basically had a file that had all translations in the .resx file, and then multiple files with different translations. The consequence of this is that you have to be very diligent about ensuring that all strings visible in the application are stored in the .resx, and anytime one is changed you have to update all languages you support.
If you get lazy and don't notify the people in charge of translations, or you embed strings without going through your localization system, it will be a nightmare to try and fix it later. Similarly, if localization is an afterthought, it will be very difficult to put in place. Bottom line, if you don't have all visible strings stored externally in a standard place, it will be very difficult to find all that need to be localized.
One other note, very strictly avoid concatenating visible strings directly, such as
String message = "The " + item + " is on sale!";
Instead, you must use something like
String message = String.Format("The {0} is on sale!", item);
The reason for this is that different languages often order the words differently, and concatenating strings directly will need a new build to fix, but if you used some kind of string replacement mechanism like above, you can modify your .resx file (or whatever localization files you use) for the specific language that needs to reorder the words.
A: I was just listening to a Podcast from Scott Hanselman this morning, where he talks about internationalization, especially the really tricky things, like Turkish (with its four i's) and Thai. Also, Jeff Atwood had a post on the subject.
A: It has been a while, so this is not comprehensive.
Character Sets
Unicode is great, but you can't get away with ignoring other character sets. The default character set on Windows XP (English) is Cp1252. On the web, you don't know what a browser will send you (though hopefully your container will handle most of this). And don't be surprised when there are bugs in whatever implementation you are using. Character sets can have interesting interactions with filenames when they move to between machines.
Translating Strings
Translators are, generally speaking, not coders. If you send a source file to a translator, they will break it. Strings should be extracted to resource files (e.g. properties files in Java or resource DLLs in Visual C++). Translators should be given files that are difficult to break and tools that don't let them break them.
Translators do not know where strings come from in a product. It is difficult to translate a string without context. If you do not provide guidance, the quality of the translation will suffer.
While on the subject of context, you may see the same string "foo" crop up in multiple times and think it would be more efficient to have all instances in the UI point to the same resource. This is a bad idea. Words may be very context-sensitive in some languages.
Translating strings costs money. If you release a new version of a product, it makes sense to recover the old versions. Have tools to recover strings from your old resource files.
String concatenation and manual manipulation of strings should be minimized. Use the format functions where applicable.
Translators need to be able to modify hotkeys. Ctrl+P is print in English; the Germans use Ctrl+D.
If you have a translation process that requires someone to manually cut and paste strings at any time, you are asking for trouble.
Dates, Times, Calendars, Currency, Number Formats, Time Zones
These can all vary from country to country. A comma may be used to denote decimal places. Times may be in 24hour notation. Not everyone uses the Gregorian calendar. You need to be unambiguous, too. If you take care to display dates as MM/DD/YYYY for the USA and DD/MM/YYYY for the UK on your website, the dates are ambiguous unless the user knows you've done it.
Especially Currency
The Locale functions provided in the class libraries will give you the local currency symbol, but you can't just stick a pound (sterling) or euro symbol in front of a value that gives a price in dollars.
User Interfaces
Layout should be dynamic. Not only are strings likely to double in length on translation, the entire UI may need to be inverted (Hebrew; Arabic) so that the controls run from right to left. And that is before we get to Asia.
Testing Prior To Translation
*
*Use static analysis of your code to locate problems. At a bare minimum, leverage the tools built into your IDE. (Eclipse users can go to Window > Preferences > Java > Compiler > Errors/Warnings and check for non-externalised strings.)
*Smoke test by simulating translation. It isn't difficult to parse a resource file and replace strings with a pseudo-translated version that doubles the length and inserts funky characters. You don't have to speak a language to use a foreign operating system. Modern systems should let you log in as a foreign user with translated strings and foreign locale. If you are familiar with your OS, you can figure out what does what without knowing a single word of the language.
*Keyboard maps and character set references are very useful.
*Virtualisation would be very useful here.
Non-technical Issues
Sometimes you have to be sensitive to cultural differences (offence or incomprehension may result). A mistake you often see is the use of flags as a visual cue choosing a website language or geography. Unless you want your software to declare sides in global politics, this is a bad idea. If you were French and offered the option for English with St. George's flag (the flag of England is a red cross on a white field), this might result in confusion for many English speakers - assume similar issues will arise with foreign languages and countries. Icons need to be vetted for cultural relevance. What does a thumbs-up or a green tick mean? Language should be relatively neutral - addressing users in a particular manner may be acceptable in one region, but considered rude in another.
Resources
C++ and Java programmers may find the ICU website useful: http://www.icu-project.org/
A: Besides all the previous tips, remember that i18n is not just about changing words for their equivalents in other languages, especially for non-Latin alphabets (Korean, Arabic), some of which are written right to left, so the whole UI will have to conform, like
*
*item 1
*item 2
*item 3
would have to be
arabic text 1 -
arabic text 2 -
arabic text 3 -
(reversed bullet list doesn't seem to work :P)
which can be a UI nightmare if your system has to apply changes dynamically once the user changes the language being used.
Another very hard thing is to test different languages, not just for the correctness of the words, but because languages like Korean usually have a bigger font size for their characters, which may lead to language-specific bugs (like the "SAVE" text on a button being larger than the button itself for some language).
A: One of the funnier things to discover: italics and bold text markup do not work with CJK (Chinese/Japanese/Korean) characters. They simply become unreadable. (OK, I couldn't really read them before either, but especially bolding just creates ink blots)
A: Some fun things:
*
*Having a PHP and MySQL Application that works well with German and French, but now needs to support Russian and Chinese. I think I move this over to .net, as PHP's Unicode support is - in my opinion - not really good. Sure, juggling around with utf8_de/encode or the mbstring-functions is fun. Almost as fun as having Freddy Krüger visit you at night...
*Realizing that some languages are a LOT more Verbose than others. German is a LOT more verbose than English usually, and seeing how the German Version destroys the User Interface because too little space was allocated was not fun. Some products gained some fame for their creative ways to work around that, with Oblivion's "Schw.Tr.d.Le.En.W." being memorable :-)
*Playing around with date formats, woohoo! Yes, there ARE actually people in the world who use date formats where the day goes in the middle. Sooooo much fun trying to find out what 07/02/2008 is supposed to mean, just because some users might believe it could be July 2... But then again, you guys over the pond may believe the same about users who put the month in the middle :-P, especially because in English, July 2 sounds a lot better than 2nd of July, something that does not neccessarily apply to other languages (i.e. in German, you would never say Juli 2 but always Zweiter Juli). I use 2008-02-07 whenever possible. It's clear that it means February 7 and it sorts properly, but dd/mm vs. mm/dd can be a really tricky problem.
*Another fun thing, number formats! 10.000,50 vs 10,000.50 vs. 10 000,50 vs. 10'000,50... This is my biggest nightmare right now, having to support a multi-cultural environment but not having any way to reliably know what number format the user will use.
*Formal or Informal. In some language, there are two ways to address people, a formal way and a more informal way. In English, you just say "You", but in German you have to decide between the formal "Sie" and the informal "Du", same for French Tu/Vous. It's usually a safe bet to choose the formal way, but this is easily overlooked.
*Calendars. In Europe, the first day of the Week is Monday, whereas in the US it's Sunday. Calendar Widgets are nice. Showing a Calendar with Sunday on the left and Saturday on the right to a European user is not so nice, it confuses them.
A: I think everyone working in internationalization should be familiar with the Common Locale Data Repository, which is now a sub-project of Unicode:
Common Locale Data Repository
Those folks are working hard to establish a standard resource for all kinds of i18n issues: currency, geographical names, tons of stuff. Any project that's maintaining its own core local data given that this project exists is pretty bonkers, IMHO.
A: I suggest using something like 99translations.com to maintain your translations. Otherwise you won't be able to tell which of your translations are up to date in every language.
A: Another challenge will be accepting input from your users. In many cases, this is eased by the input processing provided by the operating system, such as IME in Windows, which works transparently with common text widgets, but this facility will not be available for every possible need.
A: One website I use has a translation method the owner calls "wiki + machine translation". This is a community based site so is obviously different to the needs of companies.
http://blog.bookmooch.com/2007/09/23/how-bookmooch-does-its-translations/
A: One thing no one has mentioned yet is strings with some varying part, as in "The unit will arrive in 5 days" or "On Monday something happens.", where 5 and Monday will change depending on state. It is not a good idea to split those in two and concatenate them. With only one varying part and good documentation you might get away with it; with two varying parts there will be some language that prefers to change the order of them.
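A small sketch of why concatenation hurts here, using PHP's positional placeholders so a translation can reorder the varying parts (the strings are invented examples, not real translations):
<?php
// English template: number first, then the weekday
$en = 'The unit will arrive in %1$d days, on %2$s.';
// A hypothetical translation that wants the weekday first
$xx = 'On %2$s, in %1$d days, the unit will arrive.';
echo sprintf($en, 5, 'Monday'), "\n";
echo sprintf($xx, 5, 'Monday'), "\n";
?>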
| {
"language": "en",
"url": "https://stackoverflow.com/questions/898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48"
} |
Q: How to break word after special character like Hyphens (-) Given a relatively simple CSS:
div {
width: 150px;
}
<div>
12333-2333-233-23339392-332332323
</div>
How do I make it so that the string stays constrained to the width
of 150, and wraps to a new line on the hyphen?
A: Your example works as expected in Google Chrome, Safari (Windows), and IE8. The text breaks out of the 150px box in Firefox 3 and Opera 9.5.
Additionally &shy; won't work for your example, as it will either:
*
*work when word-breaking but when not word-breaking not display any hyphens, or
*work when not word-breaking but display two hyphens when word-breaking
since it adds a hyphen on a break.
A: In this specific instance (where your string is going to contain hyphens) I'd transform the text to this server-side:
<div style="width:150px;">
<span>12333-</span><span>2333-</span><span>233-</span><span>23339392-</span><span>332332323</span>
</div>
A: Replace your hyphens with this:
&shy;
It's called a "soft" hyphen.
div {
width: 150px;
}
<div>
12333&shy;2333&shy;233&shy;23339392&shy;332332323
</div>
A: Depending on what you want to see exactly, you can use a combination of hyphen, soft hyphen, and/or zero width space.
On a soft hyphen, your browser can word-break (adding an hyphen).
On a zero width space, your browser can word break (without adding anything).
Thus, if your code is something like :
111111&shy;222222&shy;-333333&#8203;444444-&#8203;555555
then your browser will show this with no word-break :
1111112222222-33333334444444-5555555
and this will every possible word-break :
111111-
222222-
-333333
444444-
555555
Just pick up the option you need. In your case, it may be the one between 4s and 5s.
A: In all modern browsers* (and in some older browsers, too), the <wbr> element is the perfect tool for providing the opportunity to break long words at specific points.
To quote from that link:
The Word Break Opportunity (<wbr>) HTML element represents a position within text where the browser may optionally break a line, though its line-breaking rules would not otherwise create a break at that location.
Here's how it could be used to in the OP's example (or see it in action at JSFiddle):
<div style="width: 150px;">
12333-<wbr>2333-<wbr>233-<wbr>23339392-<wbr>332332323
</div>
*I've tested it in IE9, IE10, and the latest versions of Chrome, Firefox, and Opera, and Safari.
div {
width: 150px;
}
<div>
12333-<wbr>2333-<wbr>233-<wbr>23339392-<wbr>332332323
</div>
A: You can also use :
word-break:break-all;
ex.
<div style='width:10px'>ababababababbabaabababababababbabababa</div>
output:
abababababa
ababababbba
abbabbababa
ababb
word-break breaks the word or line even if there is no space in the sentence, when it does not fit in the provided width or height. For that you must provide a width or height.
A: As part of CSS3, it is not yet fully supported, but you can find information on word-wrapping here. Another option is the wbr tag, &shy;, and &#8203;, none of which are fully supported either.
A: The non-breaking hyphen works well.
HTML Entity (decimal):
&#8209;
A: Instead of - you can use ‐ or \u2010.
Also, make sure the hyphens css property was not set to none (The default value is manual).
<wbr> is not supported by Internet Explorer.
A: Hope this may help
use <br>(break) tag where you want to break the line.
A: You can use a zero-width space after the hyphen character:
div {
width: 150px;
}
<div>
12333-&#8203;2333-&#8203;233-&#8203;23339392-&#8203;332332323
</div>
If you want the line break before the hyphen, use &#8203;- instead.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "79"
} |
Q: Client collation and SQL Server 2005 We're upgrading an existing program from Win2k/SQL Server 2k to Windows 2003 and SQL Server 2005 as well as purchasing a new program that also uses 2k3/2k5. The vendor says that for us to host both databases we need to get the Enterprise version because the softwares clients use different collation for the connections and only Enterprise supports this.
I cannot find anything on MS's site to support this and, honestly, don't want to pay the extra for Enterprise if the Standard edition works. Am I missing some not talked about feature of SQL Server or is this, as I suspect, a vendor trying to upsell me?
A: All editions of SQL Server 2000/2005/2008 support having multiple databases, each using their own collation sequence. You don't need the Enterprise version.
When you have a database that uses a collation sequence that is different from default collation for the database server, you will need to take some extra precautions if you use temporary tables and/or table variables. Temp tables/variables live in the tempdb database, which uses the collation seqyuence used by by the master databases. Just remember to use "COLLATE database_default" when defining character fields in the temp tables/variables. I blogged about that not too long ago, if you want some more details.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: How do I connect to a database and loop over a recordset in C#? What's the simplest way to connect and query a database for a set of records in C#?
A: This is an alternative way (DataReader is faster than this one):
string s = "";
SqlConnection conn = new SqlConnection("Server=192.168.1.1;Database=master;Connect Timeout=30;User ID=foobar;Password=raboof;");
SqlDataAdapter da = new SqlDataAdapter("SELECT TOP 5 name, dbid FROM sysdatabases", conn);
DataTable dt = new DataTable();
da.Fill(dt);
for (int i = 0; i < dt.Rows.Count; i++)
{
s += dt.Rows[i]["name"].ToString() + " -- " + dt.Rows[i]["dbid"].ToString() + "\n";
}
MessageBox.Show(s);
A: If you are querying a SQL Server database (Version 7 and up) you should replace the OleDb classes with corresponding classes in the System.Data.SqlClient namespace (SqlConnection, SqlCommand and SqlDataReader) as those classes have been optimized to work with SQL Server.
Another thing to note is that you should 'never' select all (SELECT *), as this might lead to unexpected results later on if you add or remove columns in the table.
A: If you are intending on reading a large number of columns or records it's also worth caching the ordinals and accessing the strongly-typed methods, e.g.
using (DbDataReader dr = cmd.ExecuteReader()) {
if (dr.Read()) {
int idxColumnName = dr.GetOrdinal("columnName");
int idxSomethingElse = dr.GetOrdinal("somethingElse");
do {
Console.WriteLine(dr.GetString(idxColumnName));
Console.WriteLine(dr.GetInt32(idxSomethingElse));
} while (dr.Read());
}
}
A: @Goyuix -- that's excellent for something written from memory.
Tested it here -- found the connection wasn't opened. Otherwise very nice.
using System.Data.OleDb;
...
using (OleDbConnection conn = new OleDbConnection())
{
conn.ConnectionString = "Provider=sqloledb;Data Source=yourServername\\yourInstance;Initial Catalog=databaseName;Integrated Security=SSPI;";
using (OleDbCommand cmd = new OleDbCommand())
{
conn.Open();
cmd.Connection = conn;
cmd.CommandText = "Select * from yourTable";
using (OleDbDataReader dr = cmd.ExecuteReader())
{
while (dr.Read())
{
Console.WriteLine(dr["columnName"]);
}
}
}
}
A: Very roughly and from memory since I don't have code on this laptop:
using (OleDbConnection conn = new OleDbConnection())
{
conn.ConnectionString = "Whatever connection string";
using (OleDbCommand cmd = new OleDbCommand())
{
cmd.Connection = conn;
cmd.CommandText = "Select * from CoolTable";
using (OleDbDataReader dr = cmd.ExecuteReader())
{
while (dr.Read())
{
// do something like Console.WriteLine(dr["column name"] as String);
}
}
}
}
A: That's definitely a good way to do it. But if you happen to be using a database that supports LINQ to SQL, it can be a lot more fun. It can look something like this:
MyDB db = new MyDB("Data Source=...");
var q = from c in db.MyTable
select c;
foreach (var c in q)
Console.WriteLine(c.MyField.ToString());
A: I guess you can try Entity Framework.
using (SchoolDBEntities ctx = new SchoolDBEntities())
{
IList<Course> courseList = ctx.GetCoursesByStudentId(1).ToList<Course>();
//do something with courselist here
}
A: Load the libraries:
using MySql.Data.MySqlClient;
This is the connection:
public static MySqlConnection obtenerconexion()
{
string server = "Server";
string database = "Name_Database";
string Uid = "User";
string pwd = "Password";
MySqlConnection conect = new MySqlConnection("server = " + server + ";" + "database =" + database + ";" + "Uid =" + Uid + ";" + "pwd=" + pwd + ";");
try
{
conect.Open();
return conect;
}
catch (Exception)
{
MessageBox.Show("Error. Ask the administrator", "An error has occurred while trying to connect to the system", MessageBoxButtons.OK, MessageBoxIcon.Error);
return conect;
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "49"
} |
Q: String literals and escape characters in postgresql Attempting to insert an escape character into a table results in a warning.
For example:
create table EscapeTest (text varchar(50));
insert into EscapeTest (text) values ('This is the first part \n And this is the second');
Produces the warning:
WARNING: nonstandard use of escape in a string literal
(Using PSQL 8.2)
Anyone know how to get around this?
A: The warning is issued since you are using backslashes in your strings. If you want to avoid the message, type this command: "set standard_conforming_strings=on;". Then use "E" before your string including backslashes that you want PostgreSQL to interpret.
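A quick sketch of both options (run from psql against the EscapeTest table above):
-- Option 1: standard-conforming strings; backslashes are stored literally, no warning
SET standard_conforming_strings = on;
INSERT INTO EscapeTest (text) VALUES ('no escape processing here \n');

-- Option 2: explicit escape string; \n is interpreted as a newline
INSERT INTO EscapeTest (text) VALUES (E'first line \n second line');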
A: Cool.
I also found the documentation regarding the E:
http://www.postgresql.org/docs/8.3/interactive/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS
PostgreSQL also accepts "escape" string constants, which are an extension to the SQL standard. An escape string constant is specified by writing the letter E (upper or lower case) just before the opening single quote, e.g. E'foo'. (When continuing an escape string constant across lines, write E only before the first opening quote.) Within an escape string, a backslash character (\) begins a C-like backslash escape sequence, in which the combination of backslash and following character(s) represents a special byte value. \b is a backspace, \f is a form feed, \n is a newline, \r is a carriage return, \t is a tab. Also supported are \digits, where digits represents an octal byte value, and \xhexdigits, where hexdigits represents a hexadecimal byte value. (It is your responsibility that the byte sequences you create are valid characters in the server character set encoding.) Any other character following a backslash is taken literally. Thus, to include a backslash character, write two backslashes (\\). Also, a single quote can be included in an escape string by writing \', in addition to the normal way of ''.
A: I find it highly unlikely for Postgres to truncate your data on input - it either rejects it or stores it as is.
milen@dev:~$ psql
Welcome to psql 8.2.7, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h for help with SQL commands
\? for help with psql commands
\g or terminate with semicolon to execute query
\q to quit
milen=> create table EscapeTest (text varchar(50));
CREATE TABLE
milen=> insert into EscapeTest (text) values ('This will be inserted \n This will not be');
WARNING: nonstandard use of escape in a string literal
LINE 1: insert into EscapeTest (text) values ('This will be inserted...
^
HINT: Use the escape string syntax for escapes, e.g., E'\r\n'.
INSERT 0 1
milen=> select * from EscapeTest;
text
------------------------
This will be inserted
This will not be
(1 row)
milen=>
A: Partially. The text is inserted, but the warning is still generated.
I found a discussion that indicated the text needed to be preceded with 'E', as such:
insert into EscapeTest (text) values (E'This is the first part \n And this is the second');
This suppressed the warning, but the text was still not being returned correctly. When I added the additional slash as Michael suggested, it worked.
As such:
insert into EscapeTest (text) values (E'This is the first part \\n And this is the second');
A: Really stupid question: Are you sure the string is being truncated, and not just broken at the linebreak you specify (and possibly not showing in your interface)? Ie, do you expect the field to show as
This will be inserted \n This will not
be
or
This will be inserted
This will not be
Also, what interface are you using? Is it possible that something along the way is eating your backslashes?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "137"
} |
Q: Unhandled Exception Handler in .NET 1.1 I'm maintaining a .NET 1.1 application and one of the things I've been tasked with is making sure the user doesn't see any unfriendly error notifications.
I've added handlers to Application.ThreadException and AppDomain.CurrentDomain.UnhandledException, which do get called. My problem is that the standard CLR error dialog is still displayed (before the exception handler is called).
Jeff talks about this problem on his blog here and here. But there's no solution. So what is the standard way in .NET 1.1 to handle uncaught exceptions and display a friendly dialog box?
Jeff's response was marked as the correct answer because the link he provided has the most complete information on how to do what's required.
A: AppDomain.UnhandledException is an event, not a global exception handler. This means, by the time it is raised, your application is already on its way down the drain, and there is nothing you can do about it, except for doing cleanup and error logging.
What happened behind the scenes is this: The framework detected the exception, walked up the call stack to the very top, found no handlers that would recover from the error, so was unable to determine if it was safe to continue execution. So, it started the shutdown sequence and fired up this event as a courtesy to you so you can pay your respects to your already-doomed process. This happens when an exception is left unhandled in the main thread.
There is no single-point solution to this kind of error. You need to put a real exception handler (a catch block) upstream of all places where this error occurs and forward it to (for example) a global handler method/class that will determine if it is safe to simply report and continue, based on exception type and/or content.
Edit: It is possible to disable (=hack) the error-reporting mechanism built into Windows so the mandatory "crash and burn" dialog does not get displayed when your app goes down. However, this becomes effective for all the applications in the system, not just your own.
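A minimal sketch of that pattern (the class and method names here are made up, not part of any framework):
using System;

public static class GlobalErrorHandler
{
    // Decide, based on the exception, whether it is safe to report and continue.
    public static bool ReportAndContinue(Exception ex)
    {
        // log the error, show a friendly dialog, etc.
        return !(ex is OutOfMemoryException) && !(ex is StackOverflowException);
    }
}

public class Worker
{
    public void DoWorkSafely()
    {
        try
        {
            DoWork();
        }
        catch (Exception ex)
        {
            if (!GlobalErrorHandler.ReportAndContinue(ex))
                throw; // not safe to continue - let the process go down
        }
    }

    private void DoWork()
    {
        // the code that might throw
    }
}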
A: Unhandled exception behavior in a .NET 1.x Windows Forms application depends on:
*
*The type of thread that threw the exception
*Whether it occurred during window message processing
*Whether a debugger was attached to the process
*The DbgJitDebugLaunchSetting registry setting
*The jitDebugging flag in App.Config
*Whether you overrode the Windows Forms exception handler
*Whether you handled the CLR’s exception event
*The phase of the moon
The default behavior of unhandled exceptions is:
*
*If the exception occurs on the main thread when pumping window messages, it's intercepted by the Windows Forms exception handler.
*If the exception occurs on the main thread when not pumping window messages, it will terminate the app process unless it's intercepted by the Windows Forms exception handler.
*If the exception occurs on a manual, thread-pool, or finalizer thread, it's swallowed by the CLR.
The points of contact for an unhandled exception are:
*
*Windows Forms exception handler.
*The JIT-debug registry switch DbgJitDebugLaunchSetting.
*The CLR unhandled exception event.
The Windows Form built-in exception handling does the following by default:
*
*Catches an unhandled exception when:
*
*exception is on main thread and no debugger attached.
*exception occurs during window message processing.
*jitDebugging = false in App.Config.
*Shows dialog to user and prevents app termination.
You can disable the latter behavior by setting jitDebugging = true in App.Config. But remember that this may be your last chance to stop app termination. So the next step to catch an unhandled exception is registering for event Application.ThreadException, e.g.:
Application.ThreadException += new
System.Threading.ThreadExceptionEventHandler(CatchFormsExceptions);
Note the registry setting DbgJitDebugLaunchSetting under HKEY_LOCAL_MACHINE\Software\Microsoft\.NETFramework. This has one of three values of which I'm aware:
*
*0: shows user dialog asking "debug or terminate".
*1: lets exception through for CLR to deal with.
*2: launches debugger specified in DbgManagedDebugger registry key.
In Visual Studio, go to menu Tools → Options → Debugging → JIT to set this key to 0 or 2. But a value of 1 is usually best on an end-user's machine. Note that this registry key is acted on before the CLR unhandled exception event.
This last event is your last chance to log an unhandled exception. It's triggered before your Finally blocks have executed. You can intercept this event as follows:
AppDomain.CurrentDomain.UnhandledException += new
System.UnhandledExceptionEventHandler(CatchClrExceptions);
A: Is this a console application or a Windows Forms application? If it's a .NET 1.1 console application this is, sadly, by design -- it's confirmed by an MSFT dev in the second blog post you referenced:
BTW, on my 1.1 machine the example from MSDN does have the expected output; it's just that the second line doesn't show up until after you've attached a debugger (or not). In v2 we've flipped things around so that the UnhandledException event fires before the debugger attaches, which seems to be what most people expect.
It sounds like .NET 2.0 does this better (thank goodness), but honestly, I never had time to go back and check.
A: Oh, in Windows Forms you definitely should be able to get it to work. The only thing you have to watch out for is things happening on different threads.
I have an old Code Project article here which should help:
User Friendly Exception Handling
A: It's a Windows Forms application. The exceptions that are caught by Application.ThreadException work fine, and I don't get the ugly .NET exception box (OK to terminate, Cancel to debug? who came up with that??).
I was getting some exceptions that weren't being caught by that and ended up going to the AppDomain.UnhandledException event that were causing problems. I think I've caught most of those exceptions, and I am displaying them in our nice error box now.
So I'll just have to hope there are not some other circumstances that would cause exceptions to not be caught by the Application.ThreadException handler.
A: The Short Answer,
It looks like an exception occurring in Form.Load doesn't get routed to Application.ThreadException or AppDomain.CurrentDomain.UnhandledException without a debugger attached.
The more accurate answer/story
This is how I solved a similar problem. I can't say for sure how it does it, but here is what I think. Improvement suggestions are welcome.
The three events,
*
*AppDomain.CurrentDomain.FirstChanceException
*AppDomain.CurrentDomain.UnhandledException
*and Application.ThreadException
cumulatively catch most of the exceptions, but not on a global scope (as said earlier). In one of my applications, I used a combination of these to catch all kinds of exceptions, even unmanaged code exceptions like DirectX exceptions (through SharpDX). All exceptions, whether they are caught or not, seem to invoke FirstChanceException without a doubt.
AppDomain.CurrentDomain.FirstChanceException += MyFirstChanceExceptionHandler;
Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException); // not sure if this is important or not.
AppDomain.CurrentDomain.UnhandledException += CurrentDomain_UnhandledException; // can't use Lambda here. need to Unsub this event later.
Application.ThreadException += (s, e) => MyUnhandledExceptionHandler(e.Exception);
static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
{
MyUnhandledExceptionHandler((Exception)e.ExceptionObject);
}
private void CurrentDomain_FirstChanceException(object sender, System.Runtime.ExceptionServices.FirstChanceExceptionEventArgs eventArgs)
{
// detect the pattern of the exception which we won't be able to get in Fatal events.
if (eventArgs.Exception.Message.StartsWith("HRESULT"))
MyUnhandledExceptionHandler(eventArgs.Exception);
}
and the handler looks like
static void MyUnhandledExceptionHandler(Exception ex)
{
AppDomain.CurrentDomain.UnhandledException -= CurrentDomain_UnhandledException; // this is important. Any exception occurring in the logging mechanism can cause a stack overflow exception, which triggers Windows' own JIT message/app crash message if the Windows JIT debugger is not available.
// LogTheException()
// Collect user data
// inform the user in a civil way to restart/close the app
Environment.Exit(0);
}
Unmanaged code exceptions like DirectX exceptions appeared only in FirstChanceException where I had to decide for myself if the exception was fatal or not. I then use MyUnhandledExceptionHandler to log and let the user know in a friendly way that everything was "under control".
IMPORTANT NOTE!
The scheme still didn't catch one kind of exception. It did appear in FirstChanceException, but it was hard to distinguish it from other kinds of exceptions hitting this handler. Any exception occurring directly in Form.Load had this different behavior. When the VS debugger was attached, these were routed to the UnhandledException event. But without a debugger, an old-school Windows message box would pop up, showing the stack trace of the exception that occurred. The most annoying thing was that it didn't let MyUnhandledExceptionHandler get invoked, and the app continued to work in an abnormal state. The final solution was to move all the code from Form_Load to another thread using MyForm.Load += (s,e) => new Thread(()=>{/* My Form_Load code*/ }).Start();. This way, Application.ThreadException gets triggered, which is routed to MyUnhandledExceptionHandler, my safe exit.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: How to use a mutex in Visual Basic I have imported the kernel32 library. So, I have the createMutex function available but I am not quite sure of the various parameters and return values.
This is classic Visual Basic, not Visual Basic.NET but I can probably work with either language in the form of an answer.
A: The VB code looks something like this:
hMutex = CreateMutex(ByVal 0&, 1, ByVal 0&)
The first parameter is a pointer to an SECURITY_ATTRIBUTES structure. If you don't know what it is, you don't need it. Pass NULL (0).
The second parameter is TRUE (non-zero, or 1) if the calling thread should take ownership of the mutex. FALSE otherwise.
The third parameter is the mutex name and may be NULL (0), as shown. If you need a named mutex, pass the name (anything unique) in. Not sure whether the VB wrapper marshals the length-prefixed VB string type (BSTR) over to a null-terminated Ascii/Unicode string; if not, you'll need to do that yourself, and numerous examples are out there.
Good luck!
A: Here's the VB6 declarations for CreateMutex - I just copied them from the API viewer, which you should have as part of your VB6 installation. VB6 marshalls strings to null-terminated ANSI using the current code page.
Public Type SECURITY_ATTRIBUTES
nLength As Long
lpSecurityDescriptor As Long
bInheritHandle As Long
End Type
Public Declare Function CreateMutex Lib "kernel32" Alias "CreateMutexA" _
(lpMutexAttributes As SECURITY_ATTRIBUTES, ByVal bInitialOwner As Long, _
ByVal lpName As String) As Long
Bear in mind that if you create a mutex from the VB6 IDE, the mutex belongs to the IDE and won't be destroyed when you stop running your program - only when you close the IDE.
A: Well, based on the documentation it looks like:
*
*Security attributes (can pass null)
*Whether it's initially owned (can pass false)
*The name of it
HTH
| {
"language": "en",
"url": "https://stackoverflow.com/questions/947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Adding a method to an existing object instance in Python I've read that it is possible to add a method to an existing object (i.e., not in the class definition) in Python.
I understand that it's not always good to do so. But how might one do this?
A: Consolidating Jason Pratt's and the community wiki answers, with a look at the results of different methods of binding:
Especially note how adding the binding function as a class method works, but the referencing scope is incorrect.
#!/usr/bin/python -u
import types
import inspect
## dynamically adding methods to a unique instance of a class
# get a list of a class's method type attributes
def listattr(c):
for m in [(n, v) for n, v in inspect.getmembers(c, inspect.ismethod) if isinstance(v,types.MethodType)]:
print m[0], m[1]
# externally bind a function as a method of an instance of a class
def ADDMETHOD(c, method, name):
c.__dict__[name] = types.MethodType(method, c)
class C():
r = 10 # class attribute variable to test bound scope
def __init__(self):
pass
#internally bind a function as a method of self's class -- note that this one has issues!
def addmethod(self, method, name):
self.__dict__[name] = types.MethodType( method, self.__class__ )
# predefined function to compare with
def f0(self, x):
print 'f0\tx = %d\tr = %d' % ( x, self.r)
a = C() # created before modified instance
b = C() # modified instance
def f1(self, x): # bind internally
print 'f1\tx = %d\tr = %d' % ( x, self.r )
def f2( self, x): # add to class instance's .__dict__ as method type
print 'f2\tx = %d\tr = %d' % ( x, self.r )
def f3( self, x): # assign to class as method type
print 'f3\tx = %d\tr = %d' % ( x, self.r )
def f4( self, x): # add to class instance's .__dict__ using a general function
print 'f4\tx = %d\tr = %d' % ( x, self.r )
b.addmethod(f1, 'f1')
b.__dict__['f2'] = types.MethodType( f2, b)
b.f3 = types.MethodType( f3, b)
ADDMETHOD(b, f4, 'f4')
b.f0(0) # OUT: f0 x = 0 r = 10
b.f1(1) # OUT: f1 x = 1 r = 10
b.f2(2) # OUT: f2 x = 2 r = 10
b.f3(3) # OUT: f3 x = 3 r = 10
b.f4(4) # OUT: f4 x = 4 r = 10
k = 2
print 'changing b.r from {0} to {1}'.format(b.r, k)
b.r = k
print 'new b.r = {0}'.format(b.r)
b.f0(0) # OUT: f0 x = 0 r = 2
b.f1(1) # OUT: f1 x = 1 r = 10 !!!!!!!!!
b.f2(2) # OUT: f2 x = 2 r = 2
b.f3(3) # OUT: f3 x = 3 r = 2
b.f4(4) # OUT: f4 x = 4 r = 2
c = C() # created after modifying instance
# let's have a look at each instance's method type attributes
print '\nattributes of a:'
listattr(a)
# OUT:
# attributes of a:
# __init__ <bound method C.__init__ of <__main__.C instance at 0x000000000230FD88>>
# addmethod <bound method C.addmethod of <__main__.C instance at 0x000000000230FD88>>
# f0 <bound method C.f0 of <__main__.C instance at 0x000000000230FD88>>
print '\nattributes of b:'
listattr(b)
# OUT:
# attributes of b:
# __init__ <bound method C.__init__ of <__main__.C instance at 0x000000000230FE08>>
# addmethod <bound method C.addmethod of <__main__.C instance at 0x000000000230FE08>>
# f0 <bound method C.f0 of <__main__.C instance at 0x000000000230FE08>>
# f1 <bound method ?.f1 of <class __main__.C at 0x000000000237AB28>>
# f2 <bound method ?.f2 of <__main__.C instance at 0x000000000230FE08>>
# f3 <bound method ?.f3 of <__main__.C instance at 0x000000000230FE08>>
# f4 <bound method ?.f4 of <__main__.C instance at 0x000000000230FE08>>
print '\nattributes of c:'
listattr(c)
# OUT:
# attributes of c:
# __init__ <bound method C.__init__ of <__main__.C instance at 0x0000000002313108>>
# addmethod <bound method C.addmethod of <__main__.C instance at 0x0000000002313108>>
# f0 <bound method C.f0 of <__main__.C instance at 0x0000000002313108>>
Personally, I prefer the external ADDMETHOD function route, as it allows me to dynamically assign new method names within an iterator as well.
def y(self, x):
pass
d = C()
for i in range(1,5):
ADDMETHOD(d, y, 'f%d' % i)
print '\nattributes of d:'
listattr(d)
# OUT:
# attributes of d:
# __init__ <bound method C.__init__ of <__main__.C instance at 0x0000000002303508>>
# addmethod <bound method C.addmethod of <__main__.C instance at 0x0000000002303508>>
# f0 <bound method C.f0 of <__main__.C instance at 0x0000000002303508>>
# f1 <bound method ?.y of <__main__.C instance at 0x0000000002303508>>
# f2 <bound method ?.y of <__main__.C instance at 0x0000000002303508>>
# f3 <bound method ?.y of <__main__.C instance at 0x0000000002303508>>
# f4 <bound method ?.y of <__main__.C instance at 0x0000000002303508>>
A: Since this question asked for non-Python versions, here's JavaScript:
a.methodname = function () { console.log("Yay, a new method!") }
A: This is actually an add-on to Jason Pratt's answer.
Although Jason's answer works, it only works if one wants to add a function to a class.
It did not work for me when I tried to reload an already existing method from the .py source code file.
It took me ages to find a workaround, but the trick seems simple...
1st: import the code from the source code file
2nd: force a reload
3rd: use types.FunctionType(...) to convert the imported and bound method to a function
(you can also pass on the current global variables, as the reloaded method would be in a different namespace)
4th: now you can continue as suggested by "Jason Pratt",
using the types.MethodType(...)
Example:
# this class resides inside ReloadCodeDemo.py
class A:
def bar( self ):
print "bar1"
def reloadCode(self, methodName):
''' use this function to reload any function of class A'''
import types
import ReloadCodeDemo as ReloadMod # import the code as module
reload (ReloadMod) # force a reload of the module
myM = getattr(ReloadMod.A,methodName) #get reloaded Method
myTempFunc = types.FunctionType(# convert the method to a simple function
myM.im_func.func_code, #the methods code
globals(), # globals to use
argdefs=myM.im_func.func_defaults # default values for variables if any
)
myNewM = types.MethodType(myTempFunc,self,self.__class__) #convert the function to a method
setattr(self,methodName,myNewM) # add the method to the instance
if __name__ == '__main__':
a = A()
a.bar()
# now change your code and save the file
a.reloadCode('bar') # reloads the file
a.bar() # now executes the reloaded code
A: This question was opened years ago, but hey, there's an easy way to simulate the binding of a function to a class instance using decorators:
def binder (function, instance):
copy_of_function = type (function) (function.func_code, {})
copy_of_function.__bind_to__ = instance
def bound_function (*args, **kwargs):
return copy_of_function (copy_of_function.__bind_to__, *args, **kwargs)
return bound_function
class SupaClass (object):
def __init__ (self):
self.supaAttribute = 42
def new_method (self):
print self.supaAttribute
supaInstance = SupaClass ()
supaInstance.supMethod = binder (new_method, supaInstance)
otherInstance = SupaClass ()
otherInstance.supaAttribute = 72
otherInstance.supMethod = binder (new_method, otherInstance)
otherInstance.supMethod ()
supaInstance.supMethod ()
There, when you pass the function and the instance to the binder decorator, it will create a new function with the same code object as the first one. Then, the given instance of the class is stored in an attribute of the newly created function. The decorator returns a (third) function that automatically calls the copied function, giving the instance as the first parameter.
In conclusion, you get a function simulating its binding to the class instance, while leaving the original function unchanged.
A: I find it strange that nobody mentioned that all of the methods listed above create a cyclic reference between the added method and the instance, causing the object to persist until garbage collection. There is an old trick of adding a descriptor by extending the class of the object:
def addmethod(obj, name, func):
klass = obj.__class__
subclass = type(klass.__name__, (klass,), {})
setattr(subclass, name, func)
obj.__class__ = subclass
A: I think that the above answers missed the key point.
Let's have a class with a method:
class A(object):
def m(self):
pass
Now, let's play with it in ipython:
In [2]: A.m
Out[2]: <unbound method A.m>
Ok, so m() somehow becomes an unbound method of A. But is it really like that?
In [5]: A.__dict__['m']
Out[5]: <function m at 0xa66b8b4>
It turns out that m() is just a function, reference to which is added to A class dictionary - there's no magic. Then why A.m gives us an unbound method? It's because the dot is not translated to a simple dictionary lookup. It's de facto a call of A.__class__.__getattribute__(A, 'm'):
In [11]: class MetaA(type):
....: def __getattribute__(self, attr_name):
....: print str(self), '-', attr_name
In [12]: class A(object):
....: __metaclass__ = MetaA
In [23]: A.m
<class '__main__.A'> - m
<class '__main__.A'> - m
Now, I'm not sure off the top of my head why the last line is printed twice, but it's still clear what's going on there.
Now, what the default __getattribute__ does is that it checks if the attribute is a so-called descriptor or not, i.e. if it implements a special __get__ method. If it implements that method, then what is returned is the result of calling that __get__ method. Going back to the first version of our A class, this is what we have:
In [28]: A.__dict__['m'].__get__(None, A)
Out[28]: <unbound method A.m>
And because Python functions implement the descriptor protocol, if they are called on behalf of an object, they bind themselves to that object in their __get__ method.
Ok, so how to add a method to an existing object? Assuming you don't mind patching class, it's as simple as:
B.m = m
Then B.m "becomes" an unbound method, thanks to the descriptor magic.
And if you want to add a method just to a single object, then you have to emulate the machinery yourself, by using types.MethodType:
b.m = types.MethodType(m, b)
By the way:
In [2]: A.m
Out[2]: <unbound method A.m>
In [59]: type(A.m)
Out[59]: <type 'instancemethod'>
In [60]: type(b.m)
Out[60]: <type 'instancemethod'>
In [61]: types.MethodType
Out[61]: <type 'instancemethod'>
A: If it can be of any help, I recently released a Python library named Gorilla to make the process of monkey patching more convenient.
Using a function needle() to patch a module named guineapig goes as follows:
import gorilla
import guineapig
@gorilla.patch(guineapig)
def needle():
print("awesome")
But it also takes care of more interesting use cases as shown in the FAQ from the documentation.
The code is available on GitHub.
A: from types import MethodType
def method(self):
print 'hi!'
setattr( targetObj, method.__name__, MethodType(method, targetObj, type(method)) )
With this, you can use the self pointer
A: In Python monkeypatching generally works by overwriting a class or function's signature with your own. Below is an example from the Zope Wiki:
from SomeOtherProduct.SomeModule import SomeClass
def speak(self):
return "ook ook eee eee eee!"
SomeClass.speak = speak
This code will overwrite/create a method called speak in the class. In Jeff Atwood's recent post on monkey patching, he showed an example in C# 3.0 which is the current language I use for work.
A: What Jason Pratt posted is correct.
>>> class Test(object):
... def a(self):
... pass
...
>>> def b(self):
... pass
...
>>> Test.b = b
>>> type(b)
<type 'function'>
>>> type(Test.a)
<type 'instancemethod'>
>>> type(Test.b)
<type 'instancemethod'>
As you can see, Python doesn't consider b() any different than a(). In Python all methods are just variables that happen to be functions.
A: You can use lambda to bind a method to an instance:
def run(self):
print self._instanceString
class A(object):
def __init__(self):
self._instanceString = "This is instance string"
a = A()
a.run = lambda: run(a)
a.run()
Output:
This is instance string
A: Preface - a note on compatibility: other answers may only work in Python 2 - this answer should work perfectly well in Python 2 and 3. If writing Python 3 only, you might leave out explicitly inheriting from object, but otherwise the code should remain the same.
Adding a Method to an Existing Object Instance
I've read that it is possible to add a method to an existing object (e.g. not in the class definition) in Python.
I understand that it's not always a good decision to do so. But, how might one do this?
Yes, it is possible - But not recommended
I don't recommend this. This is a bad idea. Don't do it.
Here's a couple of reasons:
*
*You'll add a bound object to every instance you do this to. If you do this a lot, you'll probably waste a lot of memory. Bound methods are typically only created for the short duration of their call, and they then cease to exist when automatically garbage collected. If you do this manually, you'll have a name binding referencing the bound method - which will prevent its garbage collection on usage.
*Object instances of a given type generally have its methods on all objects of that type. If you add methods elsewhere, some instances will have those methods and others will not. Programmers will not expect this, and you risk violating the rule of least surprise.
*Since there are other really good reasons not to do this, you'll additionally give yourself a poor reputation if you do it.
Thus, I suggest that you not do this unless you have a really good reason. It is far better to define the correct method in the class definition or less preferably to monkey-patch the class directly, like this:
Foo.sample_method = sample_method
Since it's instructive, however, I'm going to show you some ways of doing this.
How it can be done
Here's some setup code. We need a class definition. It could be imported, but it really doesn't matter.
class Foo(object):
'''An empty class to demonstrate adding a method to an instance'''
Create an instance:
foo = Foo()
Create a method to add to it:
def sample_method(self, bar, baz):
print(bar + baz)
Method nought (0) - use the descriptor method, __get__
Dotted lookups on functions call the __get__ method of the function with the instance, binding the object to the method and thus creating a "bound method."
foo.sample_method = sample_method.__get__(foo)
and now:
>>> foo.sample_method(1,2)
3
Method one - types.MethodType
First, import types, from which we'll get the method constructor:
import types
Now we add the method to the instance. To do this, we require the MethodType constructor from the types module (which we imported above).
The argument signature for types.MethodType (in Python 3) is (function, instance):
foo.sample_method = types.MethodType(sample_method, foo)
and usage:
>>> foo.sample_method(1,2)
3
Parenthetically, in Python 2 the signature was (function, instance, class):
foo.sample_method = types.MethodType(sample_method, foo, Foo)
Method two: lexical binding
First, we create a wrapper function that binds the method to the instance:
def bind(instance, method):
def binding_scope_fn(*args, **kwargs):
return method(instance, *args, **kwargs)
return binding_scope_fn
usage:
>>> foo.sample_method = bind(foo, sample_method)
>>> foo.sample_method(1,2)
3
Method three: functools.partial
A partial function applies the first argument(s) to a function (and optionally keyword arguments), and can later be called with the remaining arguments (and overriding keyword arguments). Thus:
>>> from functools import partial
>>> foo.sample_method = partial(sample_method, foo)
>>> foo.sample_method(1,2)
3
This makes sense when you consider that bound methods are partial functions of the instance.
Unbound function as an object attribute - why this doesn't work:
If we try to add the sample_method in the same way as we might add it to the class, it is unbound from the instance, and doesn't take the implicit self as the first argument.
>>> foo.sample_method = sample_method
>>> foo.sample_method(1,2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: sample_method() takes exactly 3 arguments (2 given)
We can make the unbound function work by explicitly passing the instance (or anything, since this method doesn't actually use the self argument variable), but it would not be consistent with the expected signature of other instances (if we're monkey-patching this instance):
>>> foo.sample_method(foo, 1, 2)
3
Conclusion
You now know several ways you could do this, but in all seriousness - don't do this.
A: What you're looking for is setattr I believe.
Use this to set an attribute on an object.
>>> def printme(s): print repr(s)
>>> class A: pass
>>> setattr(A,'printme',printme)
>>> a = A()
>>> a.printme() # s becomes the implicit 'self' variable
< __ main __ . A instance at 0xABCDEFG>
A: In Python, there is a difference between functions and bound methods.
>>> def foo():
... print "foo"
...
>>> class A:
... def bar( self ):
... print "bar"
...
>>> a = A()
>>> foo
<function foo at 0x00A98D70>
>>> a.bar
<bound method A.bar of <__main__.A instance at 0x00A9BC88>>
>>>
Bound methods have been "bound" (how descriptive) to an instance, and that instance will be passed as the first argument whenever the method is called.
Callables that are attributes of a class (as opposed to an instance) are still unbound, though, so you can modify the class definition whenever you want:
>>> def fooFighters( self ):
... print "fooFighters"
...
>>> A.fooFighters = fooFighters
>>> a2 = A()
>>> a2.fooFighters
<bound method A.fooFighters of <__main__.A instance at 0x00A9BEB8>>
>>> a2.fooFighters()
fooFighters
Previously defined instances are updated as well (as long as they haven't overridden the attribute themselves):
>>> a.fooFighters()
fooFighters
The problem comes when you want to attach a method to a single instance:
>>> def barFighters( self ):
... print "barFighters"
...
>>> a.barFighters = barFighters
>>> a.barFighters()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: barFighters() takes exactly 1 argument (0 given)
The function is not automatically bound when it's attached directly to an instance:
>>> a.barFighters
<function barFighters at 0x00A98EF0>
To bind it, we can use the MethodType function in the types module:
>>> import types
>>> a.barFighters = types.MethodType( barFighters, a )
>>> a.barFighters
<bound method ?.barFighters of <__main__.A instance at 0x00A9BC88>>
>>> a.barFighters()
barFighters
This time other instances of the class have not been affected:
>>> a2.barFighters()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: A instance has no attribute 'barFighters'
More information can be found by reading about descriptors and metaclass programming.
A: There are at least two ways to attach a method to an instance without types.MethodType:
>>> class A:
... def m(self):
... print 'im m, invoked with: ', self
>>> a = A()
>>> a.m()
im m, invoked with: <__main__.A instance at 0x973ec6c>
>>> a.m
<bound method A.m of <__main__.A instance at 0x973ec6c>>
>>>
>>> def foo(firstargument):
... print 'im foo, invoked with: ', firstargument
>>> foo
<function foo at 0x978548c>
1:
>>> a.foo = foo.__get__(a, A) # or foo.__get__(a, type(a))
>>> a.foo()
im foo, invoked with: <__main__.A instance at 0x973ec6c>
>>> a.foo
<bound method A.foo of <__main__.A instance at 0x973ec6c>>
2:
>>> instancemethod = type(A.m)
>>> instancemethod
<type 'instancemethod'>
>>> a.foo2 = instancemethod(foo, a, type(a))
>>> a.foo2()
im foo, invoked with: <__main__.A instance at 0x973ec6c>
>>> a.foo2
<bound method instance.foo of <__main__.A instance at 0x973ec6c>>
Useful links:
Data model - invoking descriptors
Descriptor HowTo Guide - invoking descriptors
A: The new module is deprecated since Python 2.6 and removed in 3.0; use types instead.
see http://docs.python.org/library/new.html
In the example below I've deliberately removed the return value from the patch_me() function.
I think that returning a value may make one believe that patch_me returns a new object, which is not true - it modifies the incoming one. Probably this can facilitate a more disciplined use of monkeypatching.
import types
class A(object):#but seems to work for old style objects too
pass
def patch_me(target):
def method(target,x):
print "x=",x
print "called from", target
target.method = types.MethodType(method,target)
#add more if needed
a = A()
print a
#out: <__main__.A object at 0x2b73ac88bfd0>
patch_me(a) #patch instance
a.method(5)
#out: x= 5
#out: called from <__main__.A object at 0x2b73ac88bfd0>
patch_me(A)
A.method(6) #can patch class too
#out: x= 6
#out: called from <class '__main__.A'>
A: Thanks to Arturo!
Your answer got me on the right track!
Based on Arturo's code, I wrote a little class:
from types import MethodType
import re
from string import ascii_letters
class DynamicAttr:
def __init__(self):
self.dict_all_files = {}
def _copy_files(self, *args, **kwargs):
print(f'copy {args[0]["filename"]} {args[0]["copy_command"]}')
def _delete_files(self, *args, **kwargs):
print(f'delete {args[0]["filename"]} {args[0]["delete_command"]}')
def _create_properties(self):
for key, item in self.dict_all_files.items():
setattr(
self,
key,
self.dict_all_files[key],
)
setattr(
self,
key + "_delete",
MethodType(
self._delete_files,
{
"filename": key,
"delete_command": f'del {item}',
},
),
)
setattr(
self,
key + "_copy",
MethodType(
self._copy_files,
{
"filename": key,
"copy_command": f'copy {item}',
},
),
)
def add_files_to_class(self, filelist: list):
for _ in filelist:
attr_key = re.sub(rf'[^{ascii_letters}]+', '_', _).strip('_')
self.dict_all_files[attr_key] = _
self._create_properties()
dy = DynamicAttr()
dy.add_files_to_class([r"C:\Windows\notepad.exe", r"C:\Windows\regedit.exe"])
dy.add_files_to_class([r"C:\Windows\HelpPane.exe", r"C:\Windows\win.ini"])
#output
print(dy.C_Windows_HelpPane_exe)
dy.C_Windows_notepad_exe_delete()
dy.C_Windows_HelpPane_exe_copy()
C:\Windows\HelpPane.exe
delete C_Windows_notepad_exe del C:\Windows\notepad.exe
copy C_Windows_HelpPane_exe copy C:\Windows\HelpPane.exe
This class allows you to add new attributes and methods at any time.
Edit:
Here is a more generalized solution:
import inspect
import re
from copy import deepcopy
from string import ascii_letters
def copy_func(f):
if callable(f):
if inspect.ismethod(f) or inspect.isfunction(f):
g = lambda *args, **kwargs: f(*args, **kwargs)
t = list(filter(lambda prop: not ("__" in prop), dir(f)))
i = 0
while i < len(t):
setattr(g, t[i], getattr(f, t[i]))
i += 1
return g
dcoi = deepcopy([f])
return dcoi[0]
class FlexiblePartial:
def __init__(self, func, this_args_first, *args, **kwargs):
try:
self.f = copy_func(func) # create a copy of the function
except Exception:
self.f = func
self.this_args_first = this_args_first # where should the other (optional) arguments be that are passed when the function is called
try:
self.modulename = args[0].__class__.__name__ # to make repr look good
except Exception:
self.modulename = "self"
try:
self.functionname = func.__name__ # to make repr look good
except Exception:
try:
self.functionname = func.__qualname__ # to make repr look good
except Exception:
self.functionname = "func"
self.args = args
self.kwargs = kwargs
self.name_to_print = self._create_name() # to make repr look good
def _create_name(self):
stra = self.modulename + "." + self.functionname + "(self, "
for _ in self.args[1:]:
stra = stra + repr(_) + ", "
for key, item in self.kwargs.items():
stra = stra + str(key) + "=" + repr(item) + ", "
stra = stra.rstrip().rstrip(",")
stra += ")"
if len(stra) > 100:
stra = stra[:95] + "...)"
return stra
def __call__(self, *args, **kwargs):
newdic = {}
newdic.update(self.kwargs)
newdic.update(kwargs)
if self.this_args_first:
return self.f(*self.args[1:], *args, **newdic)
else:
return self.f(*args, *self.args[1:], **newdic)
def __str__(self):
return self.name_to_print
def __repr__(self):
return self.__str__()
class AddMethodsAndProperties:
def add_methods(self, dict_to_add):
for key_, item in dict_to_add.items():
key = re.sub(rf"[^{ascii_letters}]+", "_", str(key_)).rstrip("_")
if isinstance(item, dict):
if "function" in item: # for adding methods
if not isinstance(
item["function"], str
): # for external functions that are not part of the class
setattr(
self,
key,
FlexiblePartial(
item["function"],
item["this_args_first"],
self,
*item["args"],
**item["kwargs"],
),
)
else:
setattr(
self,
key,
FlexiblePartial(
getattr(
self, item["function"]
), # for internal functions - part of the class
item["this_args_first"],
self,
*item["args"],
**item["kwargs"],
),
)
else: # for adding props
setattr(self, key, item)
Let's test it:
class NewClass(AddMethodsAndProperties): #inherit from AddMethodsAndProperties to add the method add_methods
def __init__(self):
self.bubu = 5
def _delete_files(self, file): #some random methods
print(f"File will be deleted: {file}")
def delete_files(self, file):
self._delete_files(file)
def _copy_files(self, file, dst):
print(f"File will be copied: {file} Dest: {dst}")
def copy_files(self, file, dst):
self._copy_files(file, dst)
def _create_files(self, file, folder):
print(f"File will be created: {file} {folder}")
def create_files(self, file, folder):
self._create_files(file, folder)
def method_with_more_kwargs(self, file, folder, one_more):
print(file, folder, one_more)
return self
nc = NewClass()
dict_all_files = {
r"C:\Windows\notepad.exe_delete": {
"function": "delete_files",
"args": (),
"kwargs": {"file": r"C:\Windows\notepad.exe"},
"this_args_first": True,
},
r"C:\Windows\notepad.exe_argsfirst": {
"function": "delete_files",
"args": (),
"kwargs": {"file": r"C:\Windows\notepad.exe"},
"this_args_first": True,
},
r"C:\Windows\notepad.exe_copy": {
"function": "copy_files",
"args": (),
"kwargs": {
"file": r"C:\Windows\notepad.exe",
"dst": r"C:\Windows\notepad555.exe",
},
"this_args_first": True,
},
r"C:\Windows\notepad.exe_create": {
"function": "create_files",
"args": (),
"kwargs": {"file": r"C:\Windows\notepad.exe", "folder": "c:\\windows95"},
"this_args_first": True,
},
r"C:\Windows\notepad.exe_upper": {
"function": str.upper,
"args": (r"C:\Windows\notepad.exe",),
"kwargs": {},
"this_args_first": True,
},
r"C:\Windows\notepad.exe_method_with_more_kwargs": {
"function": "method_with_more_kwargs",
"args": (),
"kwargs": {"file": r"C:\Windows\notepad.exe", "folder": "c:\\windows95"},
"this_args_first": True,
},
r"C:\Windows\notepad.exe_method_with_more_kwargs_as_args_first": {
"function": "method_with_more_kwargs",
"args": (r"C:\Windows\notepad.exe", "c:\\windows95"),
"kwargs": {},
"this_args_first": True,
},
r"C:\Windows\notepad.exe_method_with_more_kwargs_as_args_last": {
"function": "method_with_more_kwargs",
"args": (r"C:\Windows\notepad.exe", "c:\\windows95"),
"kwargs": {},
"this_args_first": False,
},
"this_is_a_list": [55, 3, 3, 1, 4, 43],
}
nc.add_methods(dict_all_files)
print(nc.C_Windows_notepad_exe_delete)
print(nc.C_Windows_notepad_exe_delete(), end="\n\n")
print(nc.C_Windows_notepad_exe_argsfirst)
print(nc.C_Windows_notepad_exe_argsfirst(), end="\n\n")
print(nc.C_Windows_notepad_exe_copy)
print(nc.C_Windows_notepad_exe_copy(), end="\n\n")
print(nc.C_Windows_notepad_exe_create)
print(nc.C_Windows_notepad_exe_create(), end="\n\n")
print(nc.C_Windows_notepad_exe_upper)
print(nc.C_Windows_notepad_exe_upper(), end="\n\n")
print(nc.C_Windows_notepad_exe_method_with_more_kwargs)
print(
nc.C_Windows_notepad_exe_method_with_more_kwargs(
one_more="f:\\blaaaaaaaaaaaaaaaaaaaaaaaa"
)
.C_Windows_notepad_exe_method_with_more_kwargs(
one_more="f:\\ASJVASDFASÇDFJASÇDJFÇASWFJASÇ"
)
.C_Windows_notepad_exe_method_with_more_kwargs(
one_more="f:\\XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
),
end="\n\n",
)
print(nc.C_Windows_notepad_exe_method_with_more_kwargs_as_args_first)
print(
nc.C_Windows_notepad_exe_method_with_more_kwargs_as_args_first(
"f:\\blaaaaaaaaaaaaaaaaaaaaaaaa"
),
end="\n\n",
)
print(
nc.C_Windows_notepad_exe_method_with_more_kwargs_as_args_first(
"f:\\blaaaaaaaaaaaaaaaaaaaaaaaa"
)
.C_Windows_notepad_exe_method_with_more_kwargs_as_args_first(
"f:\\ASJVASDFASÇDFJASÇDJFÇASWFJASÇ"
)
.C_Windows_notepad_exe_method_with_more_kwargs_as_args_first(
"f:\\XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
),
end="\n\n",
)
print(nc.C_Windows_notepad_exe_method_with_more_kwargs_as_args_last)
print(
nc.C_Windows_notepad_exe_method_with_more_kwargs_as_args_last(
"f:\\blaaaaaaaaaaaaaaaaaaaaaaaa"
)
.C_Windows_notepad_exe_method_with_more_kwargs_as_args_last(
"f:\\ASJVASDFASÇDFJASÇDJFÇASWFJASÇ"
)
.C_Windows_notepad_exe_method_with_more_kwargs_as_args_last(
"f:\\XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
),
end="\n\n",
)
print(
nc.C_Windows_notepad_exe_method_with_more_kwargs_as_args_last(
"f:\\blaaaaaaaaaaaaaaaaaaaaaaaa"
)
.C_Windows_notepad_exe_method_with_more_kwargs_as_args_last(
"f:\\ASJVASDFASÇDFJASÇDJFÇASWFJASÇ"
)
.C_Windows_notepad_exe_method_with_more_kwargs_as_args_last(
"f:\\XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
),
end="\n\n",
)
print(nc.this_is_a_list)
checkit = (
nc.C_Windows_notepad_exe_method_with_more_kwargs_as_args_last(
"f:\\blaaaaaaaaaaaaaaaaaaaaaaaa"
)
.C_Windows_notepad_exe_method_with_more_kwargs_as_args_last(
"f:\\ASJVASDFASÇDFJASÇDJFÇASWFJASÇ"
)
.C_Windows_notepad_exe_method_with_more_kwargs_as_args_last(
"f:\\XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
)
)
print(f'nc is checkit? -> {nc is checkit}')
#output:
NewClass.delete_files(self, file='C:\\Windows\\notepad.exe')
File will be deleted: C:\Windows\notepad.exe
None
NewClass.delete_files(self, file='C:\\Windows\\notepad.exe')
File will be deleted: C:\Windows\notepad.exe
None
NewClass.copy_files(self, file='C:\\Windows\\notepad.exe', dst='C:\\Windows\\notepad555.exe')
File will be copied: C:\Windows\notepad.exe Dest: C:\Windows\notepad555.exe
None
NewClass.create_files(self, file='C:\\Windows\\notepad.exe', folder='c:\\windows95')
File will be created: C:\Windows\notepad.exe c:\windows95
None
NewClass.upper(self, 'C:\\Windows\\notepad.exe')
C:\WINDOWS\NOTEPAD.EXE
NewClass.method_with_more_kwargs(self, file='C:\\Windows\\notepad.exe', folder='c:\\windows95')
C:\Windows\notepad.exe c:\windows95 f:\blaaaaaaaaaaaaaaaaaaaaaaaa
C:\Windows\notepad.exe c:\windows95 f:\ASJVASDFASÇDFJASÇDJFÇASWFJASÇ
C:\Windows\notepad.exe c:\windows95 f:\XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
<__main__.NewClass object at 0x0000000005F199A0>
NewClass.method_with_more_kwargs(self, 'C:\\Windows\\notepad.exe', 'c:\\windows95')
C:\Windows\notepad.exe c:\windows95 f:\blaaaaaaaaaaaaaaaaaaaaaaaa
<__main__.NewClass object at 0x0000000005F199A0>
C:\Windows\notepad.exe c:\windows95 f:\blaaaaaaaaaaaaaaaaaaaaaaaa
C:\Windows\notepad.exe c:\windows95 f:\ASJVASDFASÇDFJASÇDJFÇASWFJASÇ
C:\Windows\notepad.exe c:\windows95 f:\XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
<__main__.NewClass object at 0x0000000005F199A0>
NewClass.method_with_more_kwargs(self, 'C:\\Windows\\notepad.exe', 'c:\\windows95')
f:\blaaaaaaaaaaaaaaaaaaaaaaaa C:\Windows\notepad.exe c:\windows95
f:\ASJVASDFASÇDFJASÇDJFÇASWFJASÇ C:\Windows\notepad.exe c:\windows95
f:\XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX C:\Windows\notepad.exe c:\windows95
<__main__.NewClass object at 0x0000000005F199A0>
f:\blaaaaaaaaaaaaaaaaaaaaaaaa C:\Windows\notepad.exe c:\windows95
f:\ASJVASDFASÇDFJASÇDJFÇASWFJASÇ C:\Windows\notepad.exe c:\windows95
f:\XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX C:\Windows\notepad.exe c:\windows95
<__main__.NewClass object at 0x0000000005F199A0>
[55, 3, 3, 1, 4, 43]
f:\blaaaaaaaaaaaaaaaaaaaaaaaa C:\Windows\notepad.exe c:\windows95
f:\ASJVASDFASÇDFJASÇDJFÇASWFJASÇ C:\Windows\notepad.exe c:\windows95
f:\XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX C:\Windows\notepad.exe c:\windows95
nc is checkit? -> True
A: How to recover a class from an instance of a class
class UnderWater:
def __init__(self):
self.net = 'underwater'
marine = UnderWater() # Instantiate the class
# Recover the class from the instance and add attributes to it.
class SubMarine(marine.__class__):
def __init__(self):
super().__init__()
self.sound = 'Sonar'
print(SubMarine, SubMarine.__name__, SubMarine().net, SubMarine().sound)
# Output
# (__main__.SubMarine,'SubMarine', 'underwater', 'Sonar')
A: Apart from what others said, I found that the __repr__ and __str__ methods can't be monkeypatched at the object level, because repr() and str() use the class's methods, not locally bound object methods:
# Instance monkeypatch
[ins] In [55]: x.__str__ = show.__get__(x)
[ins] In [56]: x
Out[56]: <__main__.X at 0x7fc207180c10>
[ins] In [57]: str(x)
Out[57]: '<__main__.X object at 0x7fc207180c10>'
[ins] In [58]: x.__str__()
Nice object!
# Class monkeypatch
[ins] In [62]: X.__str__ = lambda _: "From class"
[ins] In [63]: str(x)
Out[63]: 'From class'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "826"
} |
Q: How do you get sudo access for a file inside the vi text editor? Often while editing config files, I'll open one with vi and then when I go to save it realize that I didn't type
sudo vi filename
Is there any way to give vi sudo privileges to save the file? I seem to recall seeing something about this while looking up some stuff about vi a while ago, but now I can't find it.
A: Ryan's advice is generally good, however, if following step 3, don't move the temporary file; it'll have the wrong ownership and permissions. Instead, sudoedit the correct file and read in the contents (using :r or the like) of the temporary file.
If following step 2, use :w! to force the file to be written.
A: In general, you can't change the effective user id of the vi process, but you can do this:
:w !sudo tee myfile
A: % is replaced with the current file name, thus you can use:
:w !sudo tee %
(vim will detect that the file has been changed and ask whether you want it to be reloaded. Say yes by choosing [L] rather than OK.)
As a shortcut, you can define your own command. Put the following in your .vimrc:
command W w !sudo tee % >/dev/null
With the above you can type :W<Enter> to save the file. Since I wrote this, I have found a nicer way (in my opinion) to do this:
cmap w!! w !sudo tee >/dev/null %
This way you can type :w!! and it will be expanded to the full command line, leaving the cursor at the end, so you can replace the % with a file name of your own, if you like.
A: When you go into insert mode on a file you need sudo access to edit, you get a status message saying
-- INSERT -- W10: Warning: Changing a readonly file
If I miss that, generally I do
:w ~/edited_blah.tmp
:q
..then..
sudo "cat edited_blah.tmp > /etc/blah"
..or..
sudo mv edited_blah.tmp /etc/blah
There's probably a less roundabout way to do it, but it works.
A: Common Caveats
The most common method of getting around the read-only file problem is to open a pipe to the current file as the super-user using an implementation of sudo tee. However, all of the most popular solutions that I have found around the Internet have a combination of several potential caveats:
*
*The entire file gets written to the terminal, as well as the file. This can be slow for large files, especially over slow network connections.
*The file loses its modes and similar attributes.
*File paths with unusual characters or spaces might not be handled correctly.
Solutions
To get around all of these issues, you can use the following command:
" On POSIX (Linux/Mac/BSD):
:silent execute 'write !sudo tee ' . shellescape(@%, 1) . ' >/dev/null'
" Depending on the implementation, you might need this on Windows:
:silent execute 'write !sudo tee ' . shellescape(@%, 1) . ' >NUL'
These can be shortened, respectively:
:sil exec 'w !sudo tee ' . shellescape(@%, 1) . ' >/dev/null'
:sil exec 'w !sudo tee ' . shellescape(@%, 1) . ' >NUL'
Explanation
: begins the command; you will need to type this character in normal mode to start entering a command. It should be omitted in scripts.
sil[ent] suppresses output from the command. In this case, we want to stop the Press any key to continue-like prompt that appears after running the :! command.
exec[ute] executes a string as a command. We can't just run :write because it won't process the necessary function call.
! represents the :! command: the only command that :write accepts. Normally, :write accepts a file path to which to write. :! on its own runs a command in a shell (for example, using bash -c). With :write, it will run the command in the shell, and then write the entire file to stdin.
sudo should be obvious, since that's why you're here. Run the command as the super-user. There's plenty of information around the 'net about how that works.
tee pipes stdin to the given file. :write will write to stdin, then the super-user tee will receive the file contents and write the file. It won't create a new file--just overwrite the contents--so file modes and attributes will be preserved.
shellescape() escapes special characters in the given file path as appropriate for the current shell. With just one parameter, it would typically just enclose the path in quotes as necessary. Since we're sending to a full shell command line, we'll want to pass a non-zero value as the second argument to enable backslash-escaping of other special characters that might otherwise trip up the shell.
@% reads the contents of the % register, which contains the current buffer's file name. It's not necessarily an absolute path, so ensure that you haven't changed the current directory. In some solutions, you will see the commercial-at symbol omitted. Depending on the location, % is a valid expression, and has the same effect as reading the % register. Nested inside another expression the shortcut is generally disallowed, however: such as in this case.
>NUL and >/dev/null redirect stdout to the platform's null device. Even though we've silenced the command, we don't want all of the overhead associated with piping stdin back to vim--best to dump it as early as possible. NUL is the null device on DOS, MS-DOS, and Windows, not a valid file. As of Windows 8 redirections to NUL don't result in a file named NUL being written. Try creating a file on your desktop named NUL, with or without a file extension: you will be unable to do so. (There are several other device names in Windows that might be worth getting to know.)
~/.vimrc
Platform-Dependent
Of course, you still don't want to memorize those and type them out each time. It's much easier to map the appropriate command to a simpler user command. To do this on POSIX, you could add the following line to your ~/.vimrc file, creating it if it doesn't already exist:
command W silent execute 'write !sudo tee ' . shellescape(@%, 1) . ' >/dev/null'
This will allow you to type the :W (case-sensitive) command to write the current file with super-user permissions--much easier.
Platform-Independent
I use a platform-independent ~/.vimrc file that synchronizes across computers, so I added multi-platform functionality to mine. Here's a ~/.vimrc with only the relevant settings:
#!vim
" Use za (not a command; the keys) in normal mode to toggle a fold.
" META_COMMENT Modeline Definition: {{{1
" vim: ts=4 sw=4 sr sts=4 fdm=marker ff=unix fenc=utf-8
" ts: Actual tab character stops.
" sw: Indentation commands shift by this much.
" sr: Round existing indentation when using shift commands.
" sts: Virtual tab stops while using tab key.
" fdm: Folds are manually defined in file syntax.
" ff: Line endings should always be <NL> (line feed #09).
" fenc: Should always be UTF-8; #! must be first bytes, so no BOM.
" General Commands: User Ex commands. {{{1
command W call WriteAsSuperUser(@%) " Write file as super-user.
" Helper Functions: Used by user Ex commands. {{{1
function GetNullDevice() " Gets the path to the null device. {{{2
if filewritable('/dev/null')
return '/dev/null'
else
return 'NUL'
endif
endfunction
function WriteAsSuperUser(file) " Write buffer to a:file as the super user (on POSIX, root). {{{2
exec '%write !sudo tee ' . shellescape(a:file, 1) . ' >' . GetNullDevice()
endfunction
" }}}1
" EOF
A: If you're using Vim, there is a script available named sudo.vim. If you find that you've opened a file that you need root access to read, type :e sudo:%. Vim replaces the % with the name of the current file, and sudo: instructs the sudo.vim script to take over for reading and writing.
A: A quick Google seems to give this advice:
*
*Don't try to edit if it's read-only.
*You might be able to change the permissions on the file. (Whether or not it will let you save is up to experimentation.)
*If you still edited anyway, save to a temporary file and then move it.
http://ubuntuforums.org/showthread.php?t=782136
A: Here's another one that has appeared since this question was answered, a plugin called SudoEdit which provides SudoRead and SudoWrite functions, which will by default try to use sudo first and su if that fails: http://www.vim.org/scripts/script.php?script_id=2709
A: I have this in my ~/.bashrc:
alias svim='sudo vim'
Now whenever I need to edit a config file I just open it with svim.
A: A quick hack you can consider is doing a chmod on the file you're editing, save with vim, and then chmod back to what the file was originally.
ls -l test.file (to see the permissions of the file)
chmod 777 test.file
[This is where you save in vim]
chmod xxx test.file (restore the permissions you found in the first step)
Of course I don't recommend this approach in a system where you're worried about security, as for a few seconds anyone can read/change the file without you realizing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "253"
} |
Q: How to get the value of built, encoded ViewState? I need to grab the base64-encoded representation of the ViewState. Obviously, this would not be available until fairly late in the request lifecycle, which is OK.
For example, if the output of the page includes:
<input type="hidden" name="__VIEWSTATE"
id="__VIEWSTATE" value="/wEPDwUJODU0Njc5MD...==" />
I need a way on the server-side to get the value "/wEPDwUJODU0Njc5MD...=="
To clarify, I need this value when the page is being rendered, not on PostBack. e.g. I need to know the ViewState value that is being sent to the client, not the ViewState I'm getting back from them.
A: See this blog post where the author describes a method for overriding the default behavior for generating the ViewState and instead shows how to save it on the server Session object.
In ASP.NET 2.0, ViewState is saved by
a descendant of PageStatePersister
class. This class is an abstract class
for saving and loading ViewState and
there are two implemented descendants
of this class in .Net Framework, named
HiddenFieldPageStatePersister and
SessionPageStatePersister. By default
HiddenFieldPageStatePersister is used
to save/load ViewState information,
but we can easily get the
SessionPageStatePersister to work and
save ViewState in Session object.
Although I did not test his code, it seems to show exactly what you want: a way to gain access to ViewState code while still on the server, before postback.
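For reference, a bare-bones sketch of how such a PageStatePersister override is typically done (untested, and not verified against the linked post; MyPage is a hypothetical class name):
using System.Web.UI;
public partial class MyPage : Page
{
    // Swap the default HiddenFieldPageStatePersister for the session-backed one,
    // so the state stays on the server instead of travelling in __VIEWSTATE.
    protected override PageStatePersister PageStatePersister
    {
        get { return new SessionPageStatePersister(this); }
    }
}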
A: I enabled compression following similar articles to those posted above. The key to accessing the ViewState before the application sends it was overriding this method:
protected override void SavePageStateToPersistenceMedium(object viewState)
You can call the base method within this override and then add whatever additional logic you require to handle the ViewState.
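A rough, untested sketch of that override follows; LosFormatter is what ASP.NET uses for view state serialisation, but note that the string it produces may differ from the final __VIEWSTATE value if view state MAC or encryption is enabled (MyPage is a hypothetical page class):
using System.IO;
using System.Web.UI;
public partial class MyPage : Page
{
    protected override void SavePageStateToPersistenceMedium(object viewState)
    {
        // Serialise the state the same way the default persister does,
        // to get at the base64 string before it is written to the page.
        var writer = new StringWriter();
        new LosFormatter().Serialize(writer, viewState);
        string encoded = writer.ToString();   // roughly the "/wEPDwUJ..." value
        // ... inspect, log or compress 'encoded' here ...
        base.SavePageStateToPersistenceMedium(viewState);
    }
}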
A: Rex, I suspect a good place to start looking is solutions that compress the ViewState -- they're grabbing ViewState on the server before it's sent down to the client and gzipping it. That's exactly where you want to be.
*
*Scott Hanselman on ViewState Compression (2005)
*ViewState Compression with System.IO.Compression (2007)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: How do I fix 'Unprocessed view path found' error with ExceptionNotifier plugin in rails 2.1? After upgrading a rails 1.2 website to 2.1, the ExceptionNotifier plugin no longer works, complaining about this error:
ActionView::TemplateFinder::InvalidViewPath: Unprocessed view path
found:
"/path/to/appname/vendor/plugins/exception_notification/lib/../views".
Set your view paths with #append_view_path, #prepend_view_path, or #view_paths=.
What causes it and how do I fix it?
A: This was caused by a change in rails 2.1 which prevents rails from loading views from any arbitrary path for security reasons.
There is now an updated version of the plugin on github, so the solution is to use that.
The old solution here for posterity
To work around it, edit init.rb under your vendor/plugins/exception_notification directory, and add the following code to the end
ActionController::Base.class_eval do
append_view_path File.dirname(__FILE__) + '/lib/../views'
end
This adds the ExceptionNotifier plugin's views folder to the list, so it is allowed to load them.
A: You ought to upgrade to the newest Exception Notification plugin which is in its new home at GitHub.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: How to get the Country according to a certain IP? Does anyone know any simple way to retrieve the country from a given IP Address, preferably in ISO_3166-1 format?
A: ipinfodb provides a free database and API for IP to country and vice versa. They use free data from MaxMind. The data gets updated every month, and it's a great free alternative with decent accuracy.
A: I don't know how accurate the http://hostip.info site is. I just visited that site, and it reported that my country is Canada. I'm in the US and the ISP that my office uses only operates from the US. It does allow you to correct its guess, but if you are using this service to track web site visitors by country, you'll have no way of knowing if the data is correct. Of course, I'm just one data point. I downloaded the GeoLite Country database, which is just a .csv file, and my IP address was correctly identified as US.
Another benefit of the MaxMind product line (paid or free) is that you have the data, you don't incur the performance hit of making a web service call to another system.
A: There are two approaches: using an Internet service and using some kind of local list (perhaps wrapped in a library). What you want will depend on what you are building.
For services:
*
*http://www.hostip.info/use.html (as mentioned by Mark)
*http://www.team-cymru.org/Services/ip-to-asn.html
For lists:
*
*http://www.maxmind.com/app/geoip_country (as mentioned by Orion)
*You could roll your own by downloading the lists from the RIRs:
*
*ftp.arin.net/pub/stats/arin/delegated-arin-latest
*ftp.ripe.net/ripe/stats/delegated-ripencc-latest
*ftp.afrinic.net/pub/stats/afrinic/delegated-afrinic-latest
*ftp.apnic.net/pub/stats/apnic/delegated-apnic-latest
*ftp.lacnic.net/pub/stats/lacnic/delegated-lacnic-latest
The format is documented in this README
A: A lot of people (including my company) seem to use MaxMind GeoIP.
They have a free version GeoLite which is not as accurate as the paid version, but if you're just after something simple, it may be good enough.
A: The most accurate is Digital Element's NetAcuity. It's not free but you get what you pay for most of the time.
A: google's clientlocation returns (my example)
latlng = new google.maps.LatLng(google.loader.ClientLocation.latitude, google.loader.ClientLocation.longitude);
location = "IP location: " + getFormattedLocation();
document.getElementById("location").innerHTML = location;
A: You can use the solution provided for this question.
But it returns a 2 digit country code.
A: Try this php code
<?php $ip = $_SERVER['REMOTE_ADDR'];
$json = file_get_contents("http://api.easyjquery.com/ips/?ip=".$ip."&full=true");
$json = json_decode($json,true);
$timezone = $json['localTimeZone']; ?>
A: You can use my service, http://ipinfo.io, for this. The API returns a whole bunch of different details about an IP address:
$ curl ipinfo.io/8.8.8.8
{
"ip": "8.8.8.8",
"hostname": "google-public-dns-a.google.com",
"loc": "37.385999999999996,-122.0838",
"org": "AS15169 Google Inc.",
"city": "Mountain View",
"region": "CA",
"country": "US",
"phone": 650
}
If you're only after the country code you just need to add /country to the URL:
$ curl ipinfo.io/8.8.8.8/country
US
Here's a generic PHP function you could use:
function ip_details($ip) {
$json = file_get_contents("http://ipinfo.io/{$ip}");
$details = json_decode($json);
return $details;
}
$details = ip_details("8.8.8.8");
echo $details->city; // => Mountain View
echo $details->country; // => US
echo $details->org; // => AS15169 Google Inc.
echo $details->hostname; // => google-public-dns-a.google.com
I've used the IP 8.8.8.8 in these examples, but if you want details for the user's IP just pass in $_SERVER['REMOTE_ADDR'] instead. More details are available at http://ipinfo.io/developers
A: use the function ipToCountry($ip) from http://www.mmtutorialvault.com/php-ip-to-country-function/
A: You can use web service APIs which do this kind of work, for example the service http://ip-api.com (see it in use at http://whatmyip.info).
A: Here's a nice free service with a public API:
http://www.hostip.info/use.html
A: See ipdata.co which gives you several data points from an IP address.
The API is pretty fast, with 10 global endpoints each being able to handle >800M calls daily.
Here's a curl example:
curl https://api.ipdata.co/78.8.53.5
{
"ip": "78.8.53.5",
"city": "G\u0142og\u00f3w",
"region": "Lower Silesia",
"region_code": "DS",
"country_name": "Poland",
"country_code": "PL",
"continent_name": "Europe",
"continent_code": "EU",
"latitude": 51.6461,
"longitude": 16.1678,
"asn": "AS12741",
"organisation": "Netia SA",
"postal": "67-200",
"currency": "PLN",
"currency_symbol": "z\u0142",
"calling_code": "48",
"flag": "https://ipdata.co/flags/pl.png",
"emoji_flag": "\ud83c\uddf5\ud83c\uddf1",
"time_zone": "Europe/Warsaw",
"is_eu": true,
"suspicious_factors": {
"is_tor": false
}
}
A: You can try the free IP2Location LITE database
To create the table in MySQL
CREATE DATABASE ip2location;
USE ip2location;
CREATE TABLE `ip2location_db1`(
`ip_from` INT(10) UNSIGNED,
`ip_to` INT(10) UNSIGNED,
`country_code` CHAR(2),
`country_name` VARCHAR(64),
INDEX `idx_ip_to` (`ip_to`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
To import the data
LOAD DATA LOCAL
INFILE 'IP2LOCATION-LITE-DB1.CSV'
INTO TABLE
`ip2location_db1`
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 0 LINES;
PHP code to query the MySQL
<?php
// Replace this MYSQL server variables with actual configuration
$mysql_server = "mysql_server.com";
$mysql_user_name = "UserName";
$mysql_user_pass = "Password";
// Retrieve visitor IP address from server variable REMOTE_ADDR
$ipaddress = $_SERVER["REMOTE_ADDR"];
// Convert IP address to IP number for querying database
$ipno = Dot2LongIP($ipaddress);
// Connect to the database server
$link = mysql_connect($mysql_server, $mysql_user_name, $mysql_user_pass) or die("Could not connect to MySQL database");
// Connect to the IP2Location database
mysql_select_db("ip2location") or die("Could not select database");
// SQL query string to match the recordset that the IP number fall between the valid range
$query = "SELECT * FROM ip2location_db1 WHERE $ipno <= ip_to LIMIT 1";
// Execute SQL query
$result = mysql_query($query) or die("IP2Location Query Failed");
// Retrieve the recordset (only one)
$row = mysql_fetch_object($result);
// Keep the country information into two different variables
$country_code = $row->country_code;
$country_name = $row->country_name;
echo "Country_code: " . $country_code . "<br/>";
echo "Country_name: " . $country_name . "<br />";
// Free recordset and close database connection
mysql_free_result($result);
mysql_close($link);
// Function to convert IP address (xxx.xxx.xxx.xxx) to IP number (0 to 256^4-1)
function Dot2LongIP ($IPaddr) {
if ($IPaddr == "")
{
return 0;
} else {
$ips = explode(".", $IPaddr);
return ($ips[3] + $ips[2] * 256 + $ips[1] * 256 * 256 + $ips[0] * 256 * 256 * 256);
}
}
?>
A: You can give https://astroip.co a try; it is a new geolocation API I built which exposes geo data together with other useful data points like currency, timezone, ASN data and security.
Here is an example of the JSON response:
curl https://api.astroip.co/70.163.7.1
{
"status_code": 200,
"geo": {
"is_metric": false,
"is_eu": false,
"longitude": -77.0924,
"latitude": 38.7591,
"country_geo_id": 6252001,
"zip_code": "22306",
"city": "Alexandria",
"region_code": "VA",
"region_name": "Virginia",
"continent_code": "NA",
"continent_name": "North America",
"capital": "Washington",
"country_name": "United States",
"country_code": "US"
},
"asn": {
"route": "70.160.0.0/14",
"type": "isp",
"domain": "cox.net",
"organization": "ASN-CXA-ALL-CCI-22773-RDC",
"asn": "AS22773"
},
"currency": {
"native_name": "US Dollar",
"code": "USD",
"name": "US Dollar",
"symbol": "$"
},
"timezone": {
"is_dst": false,
"gmt_offset": -18000,
"date_time": "2020-12-05T17:04:48-05:00",
"microsoft_name": "Eastern Standard Time",
"iana_name": "America/New_York"
},
"security": {
"is_crawler": false,
"is_proxy": false,
"is_tor": false,
"tor_insights": null,
"proxy_insights": null,
"crawler_insights": null
},
"error": null,
"ip_type": "ipv4",
"ip": "70.163.7.1"
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "61"
} |
Q: Displaying Flash content in a C# WinForms application What is the best way to display Flash content in a C# WinForms application? I would like to create a user control (similar to the current PictureBox) that will be able to display images and flash content.
It would be great to be able to load the flash content from a stream of sorts rather than a file on disk.
A: Sven, you reached the same conclusion as I did: I found the Shockwave Flash Object, albeit from a slightly different route, but was stumped on how to load the files from somewhere other than a file on disk/URL. F-IN-BOX, although just a wrapper of the Shockwave Flash Object, seems to provide much more functionality, which may just help me!
Shooting flies with bazookas may be fun, but an embedded web browser is not the path that I am looking for. :)
There was a link on Adobe's site that talked about "Embedding and Communicating with the Macromedia Flash Player in C# Windows Applications" but they seem to have removed it :(
A: While I haven't used a flash object inside a windows form application myself, I do know that it's possible.
In Visual studio on your toolbox, choose to add a new component.
Then in the new window that appears choose the "COM Components" tab to get a list in which you can find the "Shockwave Flash Object"
Once added to the toolbox, simply use the control as you would use any other "standard" control from visual studio.
three simple commands are available to interact with the control:
*
*AxShockwaveFlash1.Stop()
*AxShockwaveFlash1.Movie = FilePath &
"\FileName.swf"
*AxShockwaveFlash1.Play()
which, I think, are all self explanatory.
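In C#, those calls might look roughly like the following untested sketch; AxShockwaveFlashObjects.AxShockwaveFlash is the wrapper name AxImp usually generates when you add the COM component, so treat the names as assumptions:
using System.IO;
using System.Windows.Forms;
// Inside your Form or UserControl:
private void LoadFlashMovie(string filePath)
{
    var flash = new AxShockwaveFlashObjects.AxShockwaveFlash();
    ((System.ComponentModel.ISupportInitialize)flash).BeginInit();
    flash.Dock = DockStyle.Fill;
    Controls.Add(flash);   // the control needs to be sited before it can load a movie
    ((System.ComponentModel.ISupportInitialize)flash).EndInit();
    flash.Stop();
    flash.Movie = Path.Combine(filePath, "FileName.swf");
    flash.Play();
}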
It would be great to be able to load
the flash content from a stream of
sorts rather than a file on disk.
I just saw you are also looking for a means to load the content from a stream,
and because I'm not really sure that is possible with the shockwave flash object I will give you another option (two actually).
the first is the one I would advise you to use only when necessary, as it uses the full blown "webbrowser component" (also available as an extra toolbox item), which is like trying to shoot a fly with a bazooka.
Of course it will work, as the control will act as a real browser window (actually the Internet Explorer browser), but it's not really meant to be used in the way you need it.
the second option is to use something I just discovered while looking for more information about playing flash content inside a windows form. F-IN-BOX is a commercial solution that will also play content from a given website URL. (The link provided will direct you to the .NET code you have to use).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: How do I delete a file which is locked by another process in C#? I'm looking for a way to delete a file which is locked by another process using C#. I suspect the method must be able to find which process is locking the file (perhaps by tracking the handles, although I'm not sure how to do this in C#) then close that process before being able to complete the file delete using File.Delete().
A: If you want to do it programmatically. I'm not sure... and I'd really recommend against it.
If you're just troubleshooting stuff on your own machine, SysInternals Process Explorer can help you
Run it, use the Find Handle command (I think it's either in the find or handle menu), and search for the name of your file. Once the handle(s) is found, you can forcibly close them.
You can then delete the file and so on.
Beware, doing this may cause the program which owns the handles to behave strangely, as you've just pulled the proverbial rug out from under it, but it works well when you are debugging your own errant code, or when Visual Studio/Windows Explorer is being crap and not releasing file handles even though you told them to close the file ages ago... sigh :-)
A: You can use this program, Handle, to find which process has the lock on your file. It's a command-line tool, so I guess you use the output from that. I'm not sure about finding it programmatically.
If deleting the file can wait, you could specify it for deletion when your computer next starts up:
*
*Start REGEDT32 (W2K) or REGEDIT (WXP) and navigate to:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager
*W2K and WXP
*
*W2K: Edit > Add Value... > Data Type: REG_MULTI_SZ > Value Name: PendingFileRenameOperations > OK
*WXP: Edit > New > Multi-String Value > enter
PendingFileRenameOperations
*In the Data area, enter "\??\" + filename to be deleted. LFNs may
be entered without being embedded in quotes. To delete C:\Long Directory Name\Long File Name.exe, enter the following data:
\??\C:\Long Directory Name\Long File Name.exe
Then press OK.
*The "destination file name" is a null (zero) string. It is entered
as follows:
*
*W2K: Edit > Binary > select Data Format: Hex > click at the end of the hex string > enter 0000 (four zeros) > OK
*WXP: Right-click the value > choose "Modify Binary Data" > click at the end of the hex string > enter 0000 (four zeros) > OK
*Close REGEDT32/REGEDIT and reboot to delete the file.
(Shamelessly stolen from some random forum, for posterity's sake.)
A: Using Orion Edwards' advice I downloaded Sysinternals Process Explorer, which in turn allowed me to discover that the file I was having difficulty deleting was in fact being held not by the Excel.Application object as I thought, but by an Attachment object that my C# send-mail code had created, which left a handle to this file open.
Once I saw this, I quite simply called the Dispose method of the Attachment object, and the handle was released.
The Sysinternals explorer allowed me to discover this used in conjunction with the Visual Studio 2005 debugger.
I highly recommend this tool!
A: Killing other processes is not a healthy thing to do. If your scenario involves something like uninstallation, you could use the MoveFileEx API function to mark the file for deletion upon next reboot.
If it appears that you really need to delete a file in use by another process, I'd recommend re-considering the actual problem before considering any solutions.
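For reference, a minimal P/Invoke sketch of the MoveFileEx approach (a hypothetical helper, untested):
using System.Runtime.InteropServices;
static class PendingDelete
{
    private const int MOVEFILE_DELAY_UNTIL_REBOOT = 0x4;
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    private static extern bool MoveFileEx(string lpExistingFileName,
                                          string lpNewFileName,
                                          int dwFlags);
    // Passing null as the destination with MOVEFILE_DELAY_UNTIL_REBOOT registers
    // the file for deletion on the next reboot (administrative rights required).
    public static bool DeleteOnReboot(string path)
    {
        return MoveFileEx(path, null, MOVEFILE_DELAY_UNTIL_REBOOT);
    }
}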
A: Oh, one big hack I employed years ago, is that Windows won't let you delete files, but it does let you move them.
Pseudo-sort-of-code:
mv %WINDIR%\System32\mfc42.dll %WINDIR\System32\mfc42.dll.old
Install new mfc42.dll
Tell user to save work and restart applications
When the applications restarted (note we didn't need to reboot the machine), they loaded the new mfc42.dll, and all was well. That, coupled with PendingFileOperations to delete the old one the next time the whole system restarted, worked pretty well.
A: This looks promising. A way of killing the file handle....
http://www.timstall.com/2009/02/killing-file-handles-but-not-process.html
A: You can use code that you supply the full file path to, and it will return a List<Process> of anything locking that file:
using System.Runtime.InteropServices;
using System.Diagnostics;
static public class FileUtil
{
[StructLayout(LayoutKind.Sequential)]
struct RM_UNIQUE_PROCESS
{
public int dwProcessId;
public System.Runtime.InteropServices.ComTypes.FILETIME ProcessStartTime;
}
const int RmRebootReasonNone = 0;
const int CCH_RM_MAX_APP_NAME = 255;
const int CCH_RM_MAX_SVC_NAME = 63;
enum RM_APP_TYPE
{
RmUnknownApp = 0,
RmMainWindow = 1,
RmOtherWindow = 2,
RmService = 3,
RmExplorer = 4,
RmConsole = 5,
RmCritical = 1000
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
struct RM_PROCESS_INFO
{
public RM_UNIQUE_PROCESS Process;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = CCH_RM_MAX_APP_NAME + 1)]
public string strAppName;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = CCH_RM_MAX_SVC_NAME + 1)]
public string strServiceShortName;
public RM_APP_TYPE ApplicationType;
public uint AppStatus;
public uint TSSessionId;
[MarshalAs(UnmanagedType.Bool)]
public bool bRestartable;
}
[DllImport("rstrtmgr.dll", CharSet = CharSet.Unicode)]
static extern int RmRegisterResources(uint pSessionHandle,
UInt32 nFiles,
string[] rgsFilenames,
UInt32 nApplications,
[In] RM_UNIQUE_PROCESS[] rgApplications,
UInt32 nServices,
string[] rgsServiceNames);
[DllImport("rstrtmgr.dll", CharSet = CharSet.Auto)]
static extern int RmStartSession(out uint pSessionHandle, int dwSessionFlags, string strSessionKey);
[DllImport("rstrtmgr.dll")]
static extern int RmEndSession(uint pSessionHandle);
[DllImport("rstrtmgr.dll")]
static extern int RmGetList(uint dwSessionHandle,
out uint pnProcInfoNeeded,
ref uint pnProcInfo,
[In, Out] RM_PROCESS_INFO[] rgAffectedApps,
ref uint lpdwRebootReasons);
/// <summary>
/// Find out what process(es) have a lock on the specified file.
/// </summary>
/// <param name="path">Path of the file.</param>
/// <returns>Processes locking the file</returns>
/// <remarks>See also:
/// http://msdn.microsoft.com/en-us/library/windows/desktop/aa373661(v=vs.85).aspx
/// http://wyupdate.googlecode.com/svn-history/r401/trunk/frmFilesInUse.cs (no copyright in code at time of viewing)
///
/// </remarks>
static public List<Process> WhoIsLocking(string path)
{
uint handle;
string key = Guid.NewGuid().ToString();
List<Process> processes = new List<Process>();
int res = RmStartSession(out handle, 0, key);
if (res != 0) throw new Exception("Could not begin restart session. Unable to determine file locker.");
try
{
const int ERROR_MORE_DATA = 234;
uint pnProcInfoNeeded = 0,
pnProcInfo = 0,
lpdwRebootReasons = RmRebootReasonNone;
string[] resources = new string[] { path }; // Just checking on one resource.
res = RmRegisterResources(handle, (uint)resources.Length, resources, 0, null, 0, null);
if (res != 0) throw new Exception("Could not register resource.");
//Note: there's a race condition here -- the first call to RmGetList() returns
// the total number of process. However, when we call RmGetList() again to get
// the actual processes this number may have increased.
res = RmGetList(handle, out pnProcInfoNeeded, ref pnProcInfo, null, ref lpdwRebootReasons);
if (res == ERROR_MORE_DATA)
{
// Create an array to store the process results
RM_PROCESS_INFO[] processInfo = new RM_PROCESS_INFO[pnProcInfoNeeded];
pnProcInfo = pnProcInfoNeeded;
// Get the list
res = RmGetList(handle, out pnProcInfoNeeded, ref pnProcInfo, processInfo, ref lpdwRebootReasons);
if (res == 0)
{
processes = new List<Process>((int)pnProcInfo);
// Enumerate all of the results and add them to the
// list to be returned
for (int i = 0; i < pnProcInfo; i++)
{
try
{
processes.Add(Process.GetProcessById(processInfo[i].Process.dwProcessId));
}
// catch the error -- in case the process is no longer running
catch (ArgumentException) { }
}
}
else throw new Exception("Could not list processes locking resource.");
}
else if (res != 0) throw new Exception("Could not list processes locking resource. Failed to get size of result.");
}
finally
{
RmEndSession(handle);
}
return processes;
}
}
Then, iterate the list of processes and close them and delete the files:
string[] files = Directory.GetFiles(target_dir);
List<Process> lstProcs = new List<Process>();
foreach (string file in files)
{
lstProcs = FileUtil.WhoIsLocking(file);
if (lstProcs.Count > 0) // deal with the file lock
{
foreach (Process p in lstProcs)
{
if (p.MachineName == ".")
ProcessHandler.localProcessKill(p.ProcessName);
else
ProcessHandler.remoteProcessKill(p.MachineName, txtUserName.Text, txtPassword.Password, p.ProcessName);
}
File.Delete(file);
}
else
File.Delete(file);
}
And depending on if the file is on the local computer:
public static void localProcessKill(string processName)
{
foreach (Process p in Process.GetProcessesByName(processName))
{
p.Kill();
}
}
or a network computer:
public static void remoteProcessKill(string computerName, string fullUserName, string pword, string processName)
{
var connectoptions = new ConnectionOptions();
connectoptions.Username = fullUserName; // @"YourDomainName\UserName";
connectoptions.Password = pword;
ManagementScope scope = new ManagementScope(@"\\" + computerName + @"\root\cimv2", connectoptions);
// WMI query
var query = new SelectQuery("select * from Win32_process where name = '" + processName + "'");
using (var searcher = new ManagementObjectSearcher(scope, query))
{
foreach (ManagementObject process in searcher.Get())
{
process.InvokeMethod("Terminate", null);
process.Dispose();
}
}
}
References:
How do I find out which process is locking a file using .NET?
Delete a directory where someone has opened a file
A: The typical method is as follows. You've said you want to do this in C# so here goes...
*
*If you don't know which process has the file locked, you'll need to examine each process's handle list, and query each handle to determine if it identifies the locked file. Doing this in C# will likely require P/Invoke or an intermediary C++/CLI to call the native APIs you'll need.
*Once you've figured out which process(es) have the file locked, you'll need to safely inject a small native DLL into the process (you can also inject a managed DLL, but this is messier, as you then have to start or attach to the .NET runtime).
*That bootstrap DLL then closes the handle using CloseHandle, etc.
Essentially: the way to unlock a "locked" file is to inject a DLL file into the offending process's address space and close it yourself. You can do this using native or managed code. No matter what, you're going to need a small amount of native code or at least P/Invoke into the same.
Helpful links:
*
*Three Ways to Inject Your Code into Another Process
*.NET Code Injection
Good luck!
A: Using dotnet core (net6) I solved this problem by using the win32 restart manager (as others have also mentioned). However some of the linked articles have elaborate code importing DLLs and calling those.
After finding an app written by meziantou that kills processes locking a file, I found out that he publishes .NET wrappers for win32 DLLs (including the restart manager).
Leveraging his work, I was able to fix this problem with the following code:
using Meziantou.Framework.Win32;
public static IEnumerable<Process> GetProcessesLockingFile(string filePath)
{
using var session = RestartManager.CreateSession();
session.RegisterFile(filePath);
return session.GetProcessesLockingResources();
}
public static void KillProcessesLockingFile(string filePath)
{
var lockingProcesses = GetProcessesLockingFile(filePath);
foreach (var lockingProcess in lockingProcesses)
{
lockingProcess.Kill();
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58"
} |
Q: Easy-to-Use Regular Expression Support in C++? I'm looking for a robust, easy-to-use, regular expression evaluator for native C++, either platform-independent or Windows-specific.
Requirements:
*
*Can't use Boost or ACE regex libraries (unfortunately)
*Can't use .NET Regex (or any managed code)
The main requirement is that it should be standalone and open.
A: The GNU C Library supports regular expressions. It's open, and the RE code seems to be easily extractable.
A: I would second the recommendation for PCRE. I have used it in C++ projects in Windows and it works great. It's free, even for building commercial software. It also implements something of a de facto standard regular expression language, which will be welcome to your users. PCRE is of course Perl-compatible, and Python also uses the same library.
The native PCRE interface is a bit awkward and very C-style, so it's probably worth writing a nice C++ wrapper around it. There is very likely already one out there, but I'm not familiar with any.
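For illustration, the raw C API looks roughly like this (a from-memory sketch against the classic PCRE 1.x interface, so double-check the exact signatures against pcre.h):
#include <pcre.h>
#include <cstdio>
#include <cstring>
int main()
{
    const char* error = NULL;
    int erroffset = 0;
    pcre* re = pcre_compile("h(ello)", 0, &error, &erroffset, NULL);
    if (!re)
    {
        std::printf("compile failed at offset %d: %s\n", erroffset, error);
        return 1;
    }
    const char* subject = "hello world";
    int ovector[30];   // room for 10 capturing groups (3 ints each)
    int rc = pcre_exec(re, NULL, subject, (int)std::strlen(subject), 0, 0, ovector, 30);
    if (rc >= 0)
        std::printf("matched at [%d, %d)\n", ovector[0], ovector[1]);
    pcre_free(re);
    return 0;
}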
A: The GNU C library regular expressions facility (regcomp(), regexec() and friends) is broken. Use libtre (TRE) instead; the function signatures match the ones provided by glibc.
http://laurikari.net/tre/
A: C++11 and later now contain a standard regular expression library.
Include the <regex> header, and use.
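For example (a minimal, untested sketch):
#include <iostream>
#include <regex>
#include <string>
int main()
{
    const std::string subject = "user@example.com";
    const std::regex pattern(R"((\w+)@(\w+)\.com)");
    std::smatch match;
    if (std::regex_search(subject, match, pattern))
        std::cout << "user: " << match[1] << ", domain: " << match[2] << '\n';
    return 0;
}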
A: Why don't you use Microsoft ATL's regex library? Kenny Kerr has written a short article on that recently.
ATL includes a lightweight regular expression implementation. Although
originally part of Visual C++, it is now included with the ATL Server
download.
The CAtlRegExp class template implements the parser and matching
engine. ...
The regular expression grammar is defined at the top of the atlrx.h
header file.
A: The free ATL Server Library and Tools from CodePlex includes a regex parser. See AtlServer in the CodePlex Archive
ATL Server is a library of C++ classes that allow developers to build
both client and server parts of service-type C++ applications and web
services. It provides much of the functionality required to build
large scale internet sites, such as SOAP messaging, caching
facilities, threading facilities, regular expression processing,
management of session-state, performance monitoring, MIME support,
integration with IIS and class for interacting with security and
cryptographic infrastructure. The earlier versions of the library are
parts of Visual Studio 2002, Visual Studio 2003 and Visual Studio
2005. The project has started from the version of the library released as part of Visual Studio 2005 SP1.
A: C++11 now includes support for regular expressions.
It will be platform independent. You just need a recent compiler.
Check the following list to know which one to use.
http://wiki.apache.org/stdcxx/C%2B%2B0xCompilerSupport
Hope it helps
A: try libpcre
If you're stuck on windows they have a windows port which should work. I know e-texteditor uses it, so at least that's proof it works :-)
A: If you use Visual Studio you can use the Visual C++ 2008 Feature Pack Release, which implements some of TR1 and includes regular expression parsing. Get it
A: Qt has also a nice Regular Expression implementation QRegExp. It is also platform independent.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Why doesn't SQL Full Text Indexing return results for words containing #? For instance, my query is like the following using SQL Server 2005:
SELECT * FROM Table WHERE FREETEXT(SearchField, 'c#')
I have a full text index defined to use the column SearchField which returns results when using:
SELECT * FROM Table WHERE SearchField LIKE '%c#%'
I believe # is a special letter, so how do I allow FREETEXT to work correctly for the query above?
A: The # char is indexed as punctuation and therefore ignored, so it looks like we'll remove the letter C from our word indexing ignore lists.
Tested it locally after doing that and rebuilding the indexes and I get results!
Looking at using a different word breaker language on the indexed column, so that those special characters aren't ignored.
EDIT: I also found this information:
c# is indexed as c (if c is not in your noise word list, see more on noise word lists later), but C# is indexed as C# (in SQL 2005 and SQL 2000 running on Win2003 regardless if C or c is in your noise word list). It is not only C# that is stored as C#, but any capital letter followed by #. Conversely, c++ ( and any other lower-cased letter followed by a ++) is indexed as c (regardless of whether c is in your noise word list).
A: Quoting a much-replicated help page about Indexing Service query language:
To use specially treated characters such as &, |, ^, #, @, $, (, ), in a query, enclose your query in quotation marks (").
As far as I know, full text search in MSSQL is also done by the Indexing Service, so this might help.
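Applied to the original query, that advice would look something like this sketch (untested, and using CONTAINS rather than FREETEXT); note that the word breaker may still drop the #, which is why the noise-word fix above was needed:
SELECT * FROM [Table] WHERE CONTAINS(SearchField, '"c#"')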
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: A little diversion into floating point (im)precision, part 1 Most mathematicians agree that:
e^(πi) + 1 = 0
However, most floating point implementations disagree. How well can we settle this dispute?
I'm keen to hear about different languages and implementations, and various methods to make the result as close to zero as possible. Be creative!
A: Is it possible to settle this dispute?
My first thought is to look to a symbolic language, like Maple. I don't think that counts as floating point though.
In fact, how does one represent i (or j for the engineers) in a conventional programming language?
Perhaps a better example is sin(π) = 0? (Or have I missed the point again?)
A: I agree with Ryan, you would need to move to another number representation system. The solution is outside the realm of floating point math because you need pi to be represented as an infinitely long decimal, so any limited precision scheme just isn't going to work (at least not without employing some kind of fudge-factor to make up the lost precision).
A: Your question seems a little odd to me, as you seem to be suggesting that the Floating Point math is implemented by the language. That's generally not true, as the FP math is done using a floating point processor in hardware. But software or hardware, floating point will always be inaccurate. That's just how floats work.
If you need better precision you need to use a different number representation. Just like if you're doing integer math on numbers that don't fit in an int or long. Some languages have libraries for that built in (I know java has BigInteger and BigDecimal), but you'd have to explicitly use those libraries instead of native types, and the performance would be (sometimes significantly) worse than if you used floats.
A: @Ryan Fox In fact, how does one represent i (or j for the engineers) in a conventional programming language?
Native complex data types are far from unknown. Fortran had it by the mid-sixties, and the OP exhibits a variety of other languages that support them in his followup.
And complex numbers can be added to other languages as libraries (with operator overloading they even look just like native types in the code).
But unless you provide a special case for this problem, the "non-agreement" is just an expression of imprecise machine arithmetic, no? It's like complaining that
float r = 2.0f/3.0f;
float s = 3.0f*r;
float t = s - 2.0f;
ends with (t != 0) (At least if you use a dumb enough compiler)...
A: I had looooong coffee chats with my best pal talking about Irrational numbers and the diference between other numbers. Well, both of us agree in this different point of view:
Irrational numbers are relations, as functions, in a way, what way? Well, think about "if you want a perfect circle, give me a perfect pi", but circles are diferent to the other figures (4 sides, 5, 6... 100, 200) but... How many more sides do you have, more like a circle it look like. If you followed me so far, connecting all this ideas here is the pi formula:
So, pi is a function, but one that never ends! because of the ∞ parameter, but I like to think that you can have "instance" of pi, if you change the ∞ parameter for a very big Int, you will have a very big pi instance.
Same with e, give me a huge parameter, I will give you a huge e.
Putting all the ideas together:
As we have memory limitations, the language and libs provide to us huge instance of irrational numbers, in this case, pi and e, as final result, you will have long aproach to get 0, like the examples provided by @Chris Jester-Young
A:
In fact, how does one represent i (or j for the engineers) in a conventional programming language?
In a language that doesn't have a native representation, it is usually added using OOP to create a Complex class to represent i and j, with operator overloading to properly deal with operations involving other Complex numbers and or other number primitives native to the language.
Eg: Complex.java, C++ < complex >
A: Numerical Analysis teaches us that you can't rely on the precise value of small differences between large numbers.
This doesn't just affect the equation in question here, but can bring instability to everything from solving a near-singular set of simultaneous equations, through finding the zeros of polynomials, to evaluating log(~1) or exp(~0) (I have even seen special functions for evaluating log(x+1) and (exp(x)-1) to get round this).
I would encourage you not to think in terms of zeroing the difference -- you can't -- but rather in doing the associated calculations in such a way as to ensure the minimum error.
I'm sorry, it's 43 years since I had this drummed into me at uni, and even if I could remember the references, I'm sure there's better stuff around now. I suggest this as a starting point.
If that sounds a bit patronising, I apologise. My "Numerical Analysis 101" was part of my Chemistry course, as there wasn't much CS in those days. I don't really have a feel for the place/importance numerical analysis has in a modern CS course.
A: It's a limitation of our current floating point computational architectures. Floating point arithmetic is only an approximation of numeric poles like e or pi (or anything beyond the precision your bits allow). I really enjoy these numbers because they defy classification, and appear to have greater entropy(?) than even primes, which are a canonical series. A ratio like that defies numerical representation; sometimes simple things like that can blow a person's mind (I love it).
Luckily entire languages and libraries can be dedicated to precision trigonometric functions by using notational concepts (similar to those described by Lasse V. Karlsen).
Consider a library/language that describes concepts like e and pi in a form that a machine can understand. Does a machine have any notion of what a perfect circle is? Probably not, but we can create an object - circle that satisfies all the known features we attribute to it (constant radius, relationship of radius to circumference is 2*pi*r = C). An object like pi is only described by the aforementioned ratio. r & C can be numeric objects described by whatever precision you want to give them. e can be defined "as the e is the unique real number such that the value of the derivative (slope of the tangent line) of the function f(x) = e^x at the point x = 0 is exactly 1" from wikipedia.
Fun question.
A: It's not that most floating point implementations disagree, it's just that they cannot get the accuracy necessary to get a 100% answer. And the correct answer is that they can't.
PI is an infinite series of digits that nobody has been able to denote by anything other than a symbolic representation, and e^X is the same, and thus the only way to get to 100% accuracy is to go symbolic.
A: Here's a short list of implementations and languages I've tried. It's sorted by closeness to zero:
*
*Scheme: (+ 1 (make-polar 1 (atan 0 -1)))
*
*⇒ 0.0+1.2246063538223773e-16i (Chez Scheme, MIT Scheme)
*⇒ 0.0+1.22460635382238e-16i (Guile)
*⇒ 0.0+1.22464679914735e-16i (Chicken with numbers egg)
*⇒ 0.0+1.2246467991473532e-16i (MzScheme, SISC, Gauche, Gambit)
*⇒ 0.0+1.2246467991473533e-16i (SCM)
*Common Lisp: (1+ (exp (complex 0 pi)))
*
*⇒ #C(0.0L0 -5.0165576136843360246L-20) (CLISP)
*⇒ #C(0.0d0 1.2246063538223773d-16) (CMUCL)
*⇒ #C(0.0d0 1.2246467991473532d-16) (SBCL)
*Perl: use Math::Complex; Math::Complex->emake(1, pi) + 1
*
*⇒ 1.22464679914735e-16i
*Python: from cmath import exp, pi; exp(complex(0, pi)) + 1
*
*⇒ 1.2246467991473532e-16j (CPython)
*Ruby: require 'complex'; Complex::polar(1, Math::PI) + 1
*
*⇒ Complex(0.0, 1.22464679914735e-16) (MRI)
*⇒ Complex(0.0, 1.2246467991473532e-16) (JRuby)
*R: complex(argument = pi) + 1
*
*⇒ 0+1.224606353822377e-16i
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Displaying 100 Floating Cubes Using DirectX OR OpenGL I'd like to display 100 floating cubes using DirectX or OpenGL.
I'm looking for either some sample source code, or a description of the technique. I have trouble getting more than one cube to display correctly.
I've combed the net for a good series of tutorials and although they talk about how to do 3D primitives, what I can't find is information on how to do large numbers of 3D primitives - cubes, spheres, pyramids, and so forth.
A: You say you have enough trouble getting one cube to display... so I am not sure if you have got one to display or not.
Basically... put your code for writing a cube in one function, then just call that function 100 times.
void DrawCube()
{
//code to draw the cube
}
void DisplayCubes()
{
for(int i = 0; i < 10; ++i)
{
for(int j = 0; j < 10; ++j)
{
glPushMatrix();
//alter these values depending on the size of your cubes.
//This call makes sure that your cubes aren't drawn overtop of each other
glTranslatef(i*5.0, j*5.0, 0);
DrawCube();
glPopMatrix();
}
}
}
That is the basic outline for how you could go about doing this. If you want something more efficient take a look into Display Lists sometime once you have the basics figured out :)
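For reference, a display list version might look something like this (an untested sketch that reuses the DrawCube() from above and assumes the usual <GL/gl.h> is included):
// Record the cube's drawing commands once, then replay them cheaply each frame.
GLuint cubeList = 0;
void BuildCubeList()
{
    cubeList = glGenLists(1);
    glNewList(cubeList, GL_COMPILE);
    DrawCube();                 // same immediate-mode cube as above
    glEndList();
}
void DisplayCubesFast()
{
    for (int i = 0; i < 10; ++i)
    {
        for (int j = 0; j < 10; ++j)
        {
            glPushMatrix();
            glTranslatef(i * 5.0f, j * 5.0f, 0.0f);
            glCallList(cubeList);   // replays the recorded commands
            glPopMatrix();
        }
    }
}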
A: Just use glTranslatef (or the DirectX equivalent) to draw a cube using the same code, but moving the relative point where you draw it. Maybe there's a better way to do it though, I'm fairly new to OpenGL. Be sure to set your viewpoint so you can see them all.
A: Yeah, if you were being efficient you'd throw everything into the same vertex buffer, but I don't think drawing 100 cubes will push any GPU produced in the past 5 years, so you should be fine following the suggestions above.
Write a basic pass through vertex shader, shade however you desire in the pixel shader. Either pass in a world matrix and do the translation in the vertex shader, or just compute the world space vertex positions on the CPU side (do this if your cubes are going to stay fixed).
You could get fancy and do geometry instancing etc, but just get the basics going first.
A: This answer isn't just for OP's question. It also answers a more general question - displaying many cubes in general.
Drawing many cube meshes
This is probably the most naive way of doing things. We draw the same cube mesh with many different transformation matrices:
prepare();
for (int i = 0; i < numCubes; i++) {
setTransformation(matrices[i]);
drawCube();
}
/* and so on... */
The nice thing is that this is SUPER easy to implement, and it's not too slow (at least for 100 cubes). I'd recommend this as a starter.
The problem
Ok, but let's say you want to make a Minecraft clone, or at least some sort of project that requires thousands, if not tens of thousands of cubes to be rendered. That's where the performance starts to go down. The problem is that each drawCube() sends a draw call to the GPU, and the time in each draw call adds up, so that eventually, it's unbearable.
However, we can fix this. The solution is batching, a way to do only one draw call for all of the cubes.
Batching
We join all the (transformed) cubes into one single mesh. This means that we will have to deal with only one draw call, instead of thousands. Here is some pseudocode for doing so:
vector<float> transformedVerts;
for (int i = 0; i < numCubes; i++) {
cubeData = cubes[i];
for (int j = 0; j < numVertsPerCube; j++) {
vert = verts[j];
/* We transform the position by the transformation matrix. */
vec3 vposition = matrices[i] * vert.position;
transformedVerts.push(vposition);
/* We don't need to transform the colors, so we just directly push them. */
transformedVerts.push(vert.color);
}
}
...
sendDataToBuffer(transformedVerts);
If the cubes are moving, or one of the cubes is added or deleted, you'll have to recalculate transformedVerts and then resend it to the buffer - but this is minor.
Then at the end we draw the entire lumped-together mesh in one draw call, instead of many.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Heap corruption under Win32; how to locate? I'm working on a multithreaded C++ application that is corrupting the heap. The usual tools to locate this corruption seem to be inapplicable. Old builds (18 months old) of the source code exhibit the same behaviour as the most recent release, so this has been around for a long time and just wasn't noticed; on the downside, source deltas can't be used to identify when the bug was introduced - there are a lot of code changes in the repository.
The prompt for crashing behaviour is to generate throughput in this system - socket transfer of data which is munged into an internal representation. I have a set of test data that will periodically cause the app to exception (various places, various causes - including heap alloc failing, thus: heap corruption).
The behaviour seems related to CPU power or memory bandwidth; the more of each the machine has, the easier it is to crash. Disabling a hyper-threading core or a dual-core core reduces the rate of (but does not eliminate) corruption. This suggests a timing related issue.
Now here's the rub:
When it's run under a lightweight debug environment (say Visual Studio 98 / AKA MSVC6) the heap corruption is reasonably easy to reproduce - ten or fifteen minutes pass before something fails horrendously and throws an exception, like a failed alloc; when running under a sophisticated debug environment (Rational Purify, VS2008/MSVC9 or even Microsoft Application Verifier) the system becomes memory-speed bound and doesn't crash (memory-bound: CPU is not getting above 50%, disk light is not on, the program's going as fast as it can, box consuming 1.3G of 2G of RAM). So, I've got a choice between being able to reproduce the problem (but not identify the cause) or being able to identify the cause of a problem I can't reproduce.
My current best guesses as to where to next is:
*
*Get an insanely grunty box (to replace the current dev box: 2Gb RAM in an E6550 Core2 Duo); this will make it possible to repro the crash causing mis-behaviour when running under a powerful debug environment; or
*Rewrite operators new and delete to use VirtualAlloc and VirtualProtect to mark memory as read-only as soon as it's done with. Run under MSVC6 and have the OS catch the bad-guy who's writing to freed memory. Yes, this is a sign of desperation: who the hell rewrites new and delete?! I wonder if this is going to make it as slow as under Purify et al.
And, no: Shipping with Purify instrumentation built in is not an option.
A colleague just walked past and asked "Stack Overflow? Are we getting stack overflows now?!?"
And now, the question: How do I locate the heap corruptor?
Update: balancing new[] and delete[] seems to have gotten a long way towards solving the problem. Instead of 15mins, the app now goes about two hours before crashing. Not there yet. Any further suggestions? The heap corruption persists.
Update: a release build under Visual Studio 2008 seems dramatically better; current suspicion rests on the STL implementation that ships with VS98.
*Reproduce the problem. Dr Watson will produce a dump that might be helpful in further analysis.
I'll take a note of that, but I'm concerned that Dr Watson will only be tripped up after the fact, not when the heap is getting stomped on.
Another try might be using WinDbg as a debugging tool which is quite powerful being at the same time also lightweight.
Got that going at the moment, again: not much help until something goes wrong. I want to catch the vandal in the act.
Maybe these tools will allow you at least to narrow the problem down to a certain component.
I don't hold much hope, but desperate times call for...
And are you sure that all the components of the project have correct runtime library settings (C/C++ tab, Code Generation category in VS 6.0 project settings)?
No I'm not, and I'll spend a couple of hours tomorrow going through the workspace (58 projects in it) and checking they're all compiling and linking with the appropriate flags.
Update: This took 30 seconds. Select all projects in the Settings dialog, unselect until you find the project(s) that don't have the right settings (they all had the right settings).
A: We've had pretty good luck by writing our own malloc and free functions. In production, they just call the standard malloc and free, but in debug, they can do whatever you want. We also have a simple base class that does nothing but override the new and delete operators to use these functions, then any class you write can simply inherit from that class. If you have a ton of code, it may be a big job to replace calls to malloc and free to the new malloc and free (don't forget realloc!), but in the long run it's very helpful.
In Steve Maguire's book Writing Solid Code (highly recommended), there are examples of debug stuff that you can do in these routines, like:
*
*Keep track of allocations to find leaks
*Allocate more memory than necessary and put markers at the beginning and end of memory -- during the free routine, you can ensure these markers are still there
*memset the memory with a marker on allocation (to find usage of uninitialized memory) and on free (to find usage of free'd memory)
Another good idea is to never use things like strcpy, strcat, or sprintf -- always use strncpy, strncat, and snprintf. We've written our own versions of these as well, to make sure we don't write off the end of a buffer, and these have caught lots of problems too.
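A minimal sketch of such debug wrappers (my own illustration of the guard-marker and fill-pattern ideas above, not code from the book):
#include <cassert>
#include <cstdlib>
#include <cstring>
static const unsigned int kGuard = 0xDEADBEEF;
void* dbg_malloc(size_t size)
{
    // Room for the stored size, a front guard, the payload, and a rear guard.
    unsigned char* raw = (unsigned char*)malloc(sizeof(size_t) + 2 * sizeof(kGuard) + size);
    if (!raw) return 0;
    memcpy(raw, &size, sizeof(size_t));                       // remember the size
    memcpy(raw + sizeof(size_t), &kGuard, sizeof(kGuard));    // front guard
    unsigned char* user = raw + sizeof(size_t) + sizeof(kGuard);
    memcpy(user + size, &kGuard, sizeof(kGuard));             // rear guard
    memset(user, 0xCD, size);                                 // mark uninitialised memory
    return user;
}
void dbg_free(void* p)
{
    if (!p) return;
    unsigned char* user = (unsigned char*)p;
    unsigned char* raw = user - sizeof(kGuard) - sizeof(size_t);
    size_t size = 0;
    memcpy(&size, raw, sizeof(size_t));
    assert(memcmp(raw + sizeof(size_t), &kGuard, sizeof(kGuard)) == 0); // buffer underrun?
    assert(memcmp(user + size, &kGuard, sizeof(kGuard)) == 0);          // buffer overrun?
    memset(user, 0xDD, size);                                 // mark freed memory
    free(raw);
}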
A: Run the original application with ADplus -crash -pn appname.exe
When the memory issue pops-up you will get a nice big dump.
You can analyze the dump to figure out what memory location was corrupted.
If you are lucky the overwritten memory is a unique string and you can figure out where it came from. If you are not lucky, you will need to dig into the win32 heap and figure out what the original memory characteristics were. (!heap -x might help)
After you know what was messed-up, you can narrow appverifier usage with special heap settings. i.e. you can specify what DLL you monitor, or what allocation size to monitor.
Hopefully this will speed up the monitoring enough to catch the culprit.
In my experience, I never needed full heap verifier mode, but I spent a lot of time analyzing the crash dump(s) and browsing sources.
P.S:
You can use DebugDiag to analyze the dumps.
It can point out the DLL owning the corrupted heap, and give you other useful details.
A: You should attack this problem with both runtime and static analysis.
For static analysis consider compiling with PREfast (cl.exe /analyze). It detects mismatched delete and delete[], buffer overruns and a host of other problems. Be prepared, though, to wade through many kilobytes of L6 warning, especially if your project still has L4 not fixed.
PREfast is available with Visual Studio Team System and, apparently, as part of Windows SDK.
A: Is this in low memory conditions? If so it might be that new is returning NULL rather than throwing std::bad_alloc. Older VC++ compilers didn't properly implement this. There is an article about Legacy memory allocation failures crashing STL apps built with VC6.
A: My first choice would be a dedicated heap tool such as pageheap.exe.
Rewriting new and delete might be useful, but that doesn't catch the allocs committed by lower-level code. If this is what you want, better to Detour the low-level alloc APIs using Microsoft Detours.
Also sanity checks such as: verify your run-time libraries match (release vs. debug, multi-threaded vs. single-threaded, dll vs. static lib), look for bad deletes (eg, delete where delete [] should have been used), make sure you're not mixing and matching your allocs.
Also try selectively turning off threads and see when/if the problem goes away.
What does the call stack etc look like at the time of the first exception?
A: The apparent randomness of the memory corruption sounds very much like a thread synchronization issue - whether the bug is reproduced depends on machine speed. If objects (chunks of memory) are shared among threads and synchronization primitives (critical section, mutex, semaphore, other) are not applied on a per-object basis, then it is possible to come to a situation where a class (chunk of memory) is deleted / freed while in use, or used after being deleted / freed.
As a test for that, you could add synchronization primitives to each class and method. This will make your code slower because many objects will have to wait for each other, but if this eliminates the heap corruption, your heap-corruption problem will become a code optimization one.
A: I have same problems in my work (we also use VC6 sometimes). And there is no easy solution for it. I have only some hints:
*
*Try with automatic crash dumps on production machine (see Process Dumper). My experience says Dr. Watson is not perfect for dumping.
*Remove all catch(...) from your code. They often hide serious memory exceptions.
*Check Advanced Windows Debugging - there are lots of great tips for problems like yours. I recommend this with all my heart.
*If you use STL try STLPort and checked builds. Invalid iterators are hell.
Good luck. Problems like yours take us months to solve. Be ready for this...
A: You tried old builds, but is there a reason you can't keep going further back in the repository history and seeing exactly when the bug was introduced?
Otherwise, I would suggest adding simple logging of some kind to help track down the problem, though I am at a loss of what specifically you might want to log.
If you can find out what exactly CAN cause this problem, via google and documentation of the exceptions you are getting, maybe that will give further insight on what to look for in the code.
A: My first action would be as follows:
*
*Build the binaries in "Release" version but create a debug info file (you will find this option in the project settings).
*Use Dr Watson as the default debugger (DrWtsn32 -I) on a machine on which you want to reproduce the problem.
*Reproduce the problem. Dr Watson will produce a dump that might be helpful in further analysis.
Another try might be using WinDbg as a debugging tool which is quite powerful being at the same time also lightweight.
Maybe these tools will allow you at least to narrow the problem down to a certain component.
And are you sure that all the components of the project have correct runtime library settings (C/C++ tab, Code Generation category in VS 6.0 project settings)?
A: So from the limited information you have, this can be a combination of one or more things:
*
*Bad heap usage, i.e., double frees, read after free, write after free, setting the HEAP_NO_SERIALIZE flag with allocs and frees from multiple threads on the same heap
*Out of memory
*Bad code (i.e., buffer overflows, buffer underflows, etc.)
*"Timing" issues
If it's either of the first two but not the last, you should have caught it by now with pageheap.exe.
Which most likely means it is due to how the code is accessing shared memory. Unfortunately, tracking that down is going to be rather painful. Unsynchronized access to shared memory often manifests as weird "timing" issues. Things like not using acquire/release semantics for synchronizing access to shared memory with a flag, not using locks appropriately, etc.
At the very least, it would help to be able to track allocations somehow, as was suggested earlier. At least then you can view what actually happened up until the heap corruption and attempt to diagnose from that.
Also, if you can easily redirect allocations to multiple heaps, you might want to try that to see if that either fixes the problem or results in more reproduceable buggy behavior.
When you were testing with VS2008, did you run with HeapVerifier with Conserve Memory set to Yes? That might reduce the performance impact of the heap allocator. (Plus, you have to run with it Debug->Start with Application Verifier, but you may already know that.)
You can also try debugging with Windbg and various uses of the !heap command.
MSN
A: Graeme's suggestion of custom malloc/free is a good idea. See if you can characterize some pattern about the corruption to give you a handle to leverage.
For example, if it is always in a block of the same size (say 64 bytes) then change your malloc/free pair to always allocate 64 byte chunks in their own page. When you free a 64 byte chunk then set the memory protection bits on that page to prevent reads and writes (using VirtualProtect). Then anyone attempting to access this memory will generate an exception rather than corrupting the heap.
This does assume that the number of outstanding 64 byte chunks is only moderate or you have a lot of memory to burn in the box!
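A rough Win32 sketch of that idea follows; the GuardedAlloc/GuardedFree names are invented for illustration and error handling is omitted.
// Hypothetical page-guarded allocator: every small chunk gets its own page,
// and "freeing" it revokes access instead of returning it to the heap.
#include <windows.h>

void* GuardedAlloc(size_t size)   // e.g. size == 64
{
    // Commit a whole page per chunk so it can be protected independently.
    return VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
}

void GuardedFree(void* p)
{
    DWORD oldProtect;
    // Don't release the page - make it inaccessible, so any later use of a
    // stale pointer raises an access violation instead of corrupting the heap.
    VirtualProtect(p, 1, PAGE_NOACCESS, &oldProtect);
}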
A: If you choose to rewrite new/delete, I have done this and have simple source code at:
http://gandolf.homelinux.org/~smhanov/blog/?id=10
This catches memory leaks and also inserts guard data before and after the memory block to capture heap corruption. You can just integrate with it by putting #include "debug.h" at the top of every CPP file, and defining DEBUG and DEBUG_MEM.
A: In the little time I had to solve a similar problem, this is what helped.
If the problem still exists, I suggest you do this:
Monitor all calls to new/delete and malloc/calloc/realloc/free.
I would build a single DLL exporting a function for registering all calls. This function receives a parameter identifying the source of the call in your code, a pointer to the allocated area and the type of call, and saves this information in a table.
Each allocated/freed pair is eliminated. At the end, or whenever you need to, you call another function to create a report of the data that is left.
With this you can identify mismatched calls (new/free or malloc/delete) or missing ones.
If any buffer is overwritten in your code, the saved information can be wrong, but each test may detect or narrow down a failure. Many runs help identify the errors.
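A minimal sketch of the registration functions described above; the names, the table layout and the way call sites are instrumented are all assumptions here, not an existing library.
// Tracking table: pointer -> where and how it was allocated.
#include <cstdio>
#include <map>

enum CallType { CALL_NEW, CALL_DELETE, CALL_MALLOC, CALL_FREE };

struct AllocRecord { const char* source; CallType type; };

static std::map<void*, AllocRecord> g_allocTable;

// Exported from the tracking DLL; called from instrumented new/malloc.
void RegisterAlloc(const char* source, void* p, CallType type)
{
    AllocRecord rec = { source, type };
    g_allocTable[p] = rec;
}

// Called from instrumented delete/free; a missing entry means a bad pointer.
void RegisterFree(const char* source, void* p, CallType type)
{
    std::map<void*, AllocRecord>::iterator it = g_allocTable.find(p);
    if (it == g_allocTable.end()) {
        std::printf("free of unknown pointer %p at %s\n", p, source);
        return;
    }
    // new must pair with delete and malloc with free - anything else is a bug.
    if ((it->second.type == CALL_NEW && type != CALL_DELETE) ||
        (it->second.type == CALL_MALLOC && type != CALL_FREE))
        std::printf("mismatched call for %p (allocated at %s, freed at %s)\n",
                    p, it->second.source, source);
    g_allocTable.erase(it);   // matched pair is eliminated
}

// Called at the end of a run: whatever is left was never freed.
void ReportLeaks()
{
    for (std::map<void*, AllocRecord>::iterator it = g_allocTable.begin();
         it != g_allocTable.end(); ++it)
        std::printf("leak: %p allocated at %s\n", it->first, it->second.source);
}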
Good luck.
A: Do you think this is a race condition? Are multiple threads sharing one heap? Can you give each thread a private heap with HeapCreate? Then they can run fast with HEAP_NO_SERIALIZE. Otherwise, a heap should be thread-safe, if you're using the multi-threaded version of the system libraries.
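A minimal sketch of the per-thread private heap idea (Win32, error handling omitted):
#include <windows.h>

DWORD WINAPI Worker(LPVOID)
{
    // This heap is owned by one thread only, so skipping the internal lock
    // with HEAP_NO_SERIALIZE is safe here.
    HANDLE heap = HeapCreate(HEAP_NO_SERIALIZE, 0 /*initial size*/, 0 /*growable*/);

    void* buffer = HeapAlloc(heap, 0, 1024);
    // ... work with buffer ...
    HeapFree(heap, 0, buffer);

    HeapDestroy(heap);   // releases everything allocated from this heap
    return 0;
}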
A: A couple of suggestions. You mention the copious warnings at W4 - I would suggest taking the time to fix your code to compile cleanly at warning level 4 - this will go a long way to preventing subtle hard to find bugs.
Second - for the /analyze switch - it does indeed generate copious warnings. To use this switch in my own project, what I did was to create a new header file that used #pragma warning to turn off all the additional warnings generated by /analyze. Then further down in the file, I turn on only those warnings I care about. Then use the /FI compiler switch to force this header file to be included first in all your compilation units. This should allow you to use the /analyze switch while controlling the output.
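A sketch of what such a forced-include header might look like; the file name and the specific warning numbers below are only examples, and it assumes the CODE_ANALYSIS macro that the compiler defines for /analyze builds.
// analyze_warnings.h - force-included everywhere via /FIanalyze_warnings.h
#pragma once

#ifdef CODE_ANALYSIS
// First silence the bulk of the warnings /analyze adds...
#pragma warning(disable: 6001 6011 6031 6385 6386)

// ...then turn back on only the ones we actually care about.
#pragma warning(default: 6011)   // dereferencing a possibly NULL pointer
#pragma warning(default: 6385)   // reading invalid data from a buffer
#endif
With the header forced in first, individual source files never need to mention it.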
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "64"
} |
Q: Process size on UNIX What is the correct way to get the process size on Solaris, HP-UX and AIX? Should we use top or ps -o vsz or something else?
A: Yes, you are right to look at the VSZ.
ps u will give you the VSZ and RSS, which are the virtual memory size and resident set size. The RSS is how much physical memory has been allocated to the process, and the VSZ is the virtual memory size of the process. If you have several copies of a program running, a lot of the memory in the VSZ will be shared between those processes.
A: On Solaris, you can get detailed information on a process's memory usage with the pmap command. In particular, pmap -x <pid> shows you how much of a process's memory is shared and how much is specifically used by that process. This is useful for working out the "marginal" memory usage of a process -- with this technique you can avoid double-counting shared libraries.
A: I summed up the resident set size for all processes like this (as root):
ps ax -o rss | awk '{rss += $1;} END { print rss}'
A: The exact definitions of vsize, rss, rprvt, rshrd, and other obscure-looking abbreviations vary from OS to OS. The manual pages for the top and ps commands will have some sort of description, but all such descriptions are simplified greatly (or are based on long-extinct kernel implementations).
"Process size" as a concept is fiendishly difficult to pin down in the general case. Answers in specific instances depend heavily on the actual memory management implementation in the OS, and are rarely as satisfying as the tidy "process size" concept that exists in the minds of most users (and most developers).
For example, none of those numbers (nor, likely, any combination of them) can be used to tell you exactly how many such processes can run at once in a given amount of free memory. But really, your best bet is to come at it from that end: why do you want this number, and what will you use it for? Given that information, I think you'll get more useful answers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: Object Oriented Bayesian Spam Filtering? I was wondering if there is any good and clean object-oriented programming (OOP) implementation of Bayesian filtering for spam and text classification? This is just for learning purposes.
A: Maybe https://ci-bayes.dev.java.net/ or http://www.cs.cmu.edu/~javabayes/Home/node2.html?
I never played with it either.
A: Check out Chapter 6 of Programming Collective Intelligence
A: Here is an implementation of Bayesian filtering in C#: A Naive Bayesian Spam Filter for C# (hosted on CodeProject).
A: nBayes - another C# implementation hosted on CodePlex
A: I definitely recommend Weka which is an Open Source Data Mining Software written in Java:
Weka is a collection of machine learning algorithms for data mining tasks. The algorithms can either be applied directly to a dataset or called from your own Java code. Weka contains tools for data pre-processing, classification, regression, clustering, association rules, and visualization. It is also well-suited for developing new machine learning schemes.
As mentioned above, it ships with a bunch of different classifiers like SVM, Winnow, C4.5, Naive Bayes (of course) and many more (see the API doc).
Note that a lot of classifiers are known to have much better performance than Naive Bayes in the field of spam detection or text classification.
Furthermore Weka brings you a very powerful GUI…
A: In French, but you should be able to find the download link :)
PHP Naive Bayesian Filter
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Ensuring that Exceptions are always caught Exceptions in C++ don't need to be caught (no compile time errors) by the calling function. So it's up to developer's judgment whether to catch them using try/catch (unlike in Java).
Is there a way one can ensure that the exceptions thrown are always caught using try/catch by the calling function?
A: Outside the scope of your question so I debated not posting this but in Java there are actually 2 types of exceptions, checked and unchecked. The basic difference is that, much like in C[++], you don't have to catch an unchecked exception.
For a good reference try this
A: No.
See A Pragmatic Look at Exception Specifications for reasons why not.
The only way you can "help" this is to document the exceptions your function can throw, say as a comment in the header file declaring it. This is not enforced by the compiler or anything. Use code reviews for that purpose.
A: Chris' probably has the best pure answer to the question:
However, I'm curious about the root of the question. If the user should always wrap the call in a try/catch block, should the user-called function really be throwing exceptions in the first place?
This is a difficult question to answer without more context regarding the code-base in question. Shooting from the hip, I think the best answer here is to wrap the function up such that the recommended (if not only, depending on the overall exception style of the code) public interface does the try/catch for the user. If you're just trying to ensure that there are no unhandled exceptions in your code, unit tests and code review are probably the best solution.
A: You shouldn't be using an exception here. This obviously isn't an exceptional case if you need to be expecting it everywhere you use this function!
A better solution would be to get the function to return an instance of something like this. In debug builds (assuming developers exercise code paths they've just written), they'll get an assert if they forget to check whether the operation succeeded or not.
class SearchResult
{
private:
ResultType result_;
bool succeeded_;
bool successChecked_;
public:
SearchResult(ResultType& result, bool succeeded)
: result_(result)
, succeeded_(succeeded)
, successChecked_(false)
{
}
~SearchResult()
{
ASSERT(successChecked_);
}
ResultType& Result() { return result_; }
bool Succeeded() { successChecked_ = true; return succeeded_; }
};
A: There was once an attempt to add dynamic exception specifications to a function's signature, but since the language could not enforce their accuracy, they were later deprecated.
In C++11 and forward, we now have the noexcept specifier.
Again, if the signature is marked as able to throw, there is still no requirement that it be handled by the caller.
Depending on the context, you can ensure that exceptional behaviour be handled by coding it into the type system.
See: std::optional as part of the library fundamentals.
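As a minimal sketch (assuming a C++17 compiler for std::optional), the possibility of failure is encoded in the return type, so the caller cannot reach the value without acknowledging that it may be absent:
#include <optional>
#include <string>

// Hypothetical lookup: "no result" is part of the type, not an exception.
std::optional<std::string> find_user(int id)
{
    if (id == 42)
        return std::string("Arthur");
    return std::nullopt;   // nothing found - no throw, no sentinel value
}

int main()
{
    if (auto user = find_user(7))   // the caller is forced to test before use
        return static_cast<int>(user->size());
    return 0;                       // the "not found" case is handled explicitly
}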
A:
Is there a way one can ensure that the
exceptions thrown are always caught
using try/catch by the calling
function?
I find it rather funny, that the Java crowd - including myself - is trying to avoid checked Exceptions. They are trying to work their way around being forced to catch Exceptions by using RuntimeExceptions.
A: Or you could start throwing critical exceptions. Surely, an access violation exception will catch your users' attention.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: How does database indexing work? Given that indexing is so important as your data set increases in size, can someone explain how indexing works at a database-agnostic level?
For information on queries to index a field, check out How do I index a database column.
A: Classic example "Index in Books"
Consider a "Book" of 1000 pages, divided by 10 Chapters, each section with 100 pages.
Simple, huh?
Now, imagine you want to find a particular Chapter that contains a word "Alchemist". Without an index page, you have no other option than scanning through the entire book/Chapters. i.e: 1000 pages.
This analogy is known as "Full Table Scan" in database world.
But with an index page, you know where to go! And more, to lookup any particular Chapter that matters, you just need to look over the index page, again and again, every time. After finding the matching index you can efficiently jump to that chapter by skipping the rest.
But then, in addition to actual 1000 pages, you will need another ~10 pages to show the indices, so totally 1010 pages.
Thus, the index is a separate section that stores values of indexed
column + pointer to the indexed row in a sorted order for efficient
look-ups.
Things are simple in schools, isn't it? :P
A: Just think of Database Index as Index of a book.
If you have a book about dogs and you want to find information about, let's say, German Shepherds, you could of course flip through all the pages of the book and find what you are looking for - but this of course is time consuming and not very fast.
Another option is that you could just go to the Index section of the book and then find what you are looking for by using the name of the entity you are looking for (in this instance, German Shepherds) and also looking at the page number to quickly find what you are looking for.
In Database, the page number is referred to as a pointer which directs the database to the address on the disk where entity is located. Using the same German Shepherd analogy, we could have something like this (“German Shepherd”, 0x77129) where 0x77129 is the address on the disk where the row data for German Shepherd is stored.
In short, an index is a data structure that stores the values for a specific column in a table so as to speed up query search.
A: Why is it needed?
When data is stored on disk-based storage devices, it is stored as blocks of data. These blocks are accessed in their entirety, making them the atomic disk access operation. Disk blocks are structured in much the same way as linked lists; both contain a section for data, a pointer to the location of the next node (or block), and both need not be stored contiguously.
Due to the fact that a number of records can only be sorted on one field, we can state that searching on a field that isn’t sorted requires a Linear Search which requires (N+1)/2 block accesses (on average), where N is the number of blocks that the table spans. If that field is a non-key field (i.e. doesn’t contain unique entries) then the entire tablespace must be searched at N block accesses.
Whereas with a sorted field, a Binary Search may be used, which has log2 N block accesses. Also since the data is sorted given a non-key field, the rest of the table doesn’t need to be searched for duplicate values, once a higher value is found. Thus the performance increase is substantial.
What is indexing?
Indexing is a way of sorting a number of records on multiple fields. Creating an index on a field in a table creates another data structure which holds the field value, and a pointer to the record it relates to. This index structure is then sorted, allowing Binary Searches to be performed on it.
The downside to indexing is that these indices require additional space on the disk. Since the indices are stored together in a table using the MyISAM engine, this file can quickly reach the size limits of the underlying file system if many fields within the same table are indexed.
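To make that concrete, here is a toy sketch (not tied to any particular database engine) of an index as a sorted list of (value, record pointer) pairs searched with binary search; real engines use B-trees, but the lookup idea is the same.
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

struct IndexEntry {
    std::string key;       // the indexed column value, e.g. firstName
    std::uint64_t rowId;   // pointer/offset to the full record in the table
};

// Entries are kept sorted by key, so a lookup costs O(log n) comparisons.
std::vector<std::uint64_t> lookup(const std::vector<IndexEntry>& index,
                                  const std::string& key)
{
    std::vector<std::uint64_t> rows;
    auto it = std::lower_bound(index.begin(), index.end(), key,
        [](const IndexEntry& e, const std::string& k) { return e.key < k; });
    while (it != index.end() && it->key == key) {
        rows.push_back(it->rowId);   // collect every matching record pointer
        ++it;
    }
    return rows;
}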
How does it work?
Firstly, let’s outline a sample database table schema;
Field name Data type Size on disk
id (Primary key) Unsigned INT 4 bytes
firstName Char(50) 50 bytes
lastName Char(50) 50 bytes
emailAddress Char(100) 100 bytes
Note: char was used in place of varchar to allow for an accurate size on disk value.
This sample database contains five million rows and is unindexed. The performance of several queries will now be analyzed. These are a query using the id (a sorted key field) and one using the firstName (a non-key unsorted field).
Example 1 - sorted vs unsorted fields
Given our sample database of r = 5,000,000 records of a fixed size giving a record length of R = 204 bytes and they are stored in a table using the MyISAM engine which is using the default block size B = 1,024 bytes. The blocking factor of the table would be bfr = (B/R) = 1024/204 = 5 records per disk block. The total number of blocks required to hold the table is N = (r/bfr) = 5000000/5 = 1,000,000 blocks.
A linear search on the id field would require an average of N/2 = 500,000 block accesses to find a value, given that the id field is a key field. But since the id field is also sorted, a binary search can be conducted requiring an average of log2 1000000 = 19.93 = 20 block accesses. Instantly we can see this is a drastic improvement.
Now the firstName field is neither sorted nor a key field, so a binary search is impossible, nor are the values unique, and thus the table will require searching to the end for an exact N = 1,000,000 block accesses. It is this situation that indexing aims to correct.
Given that an index record contains only the indexed field and a pointer to the original record, it stands to reason that it will be smaller than the multi-field record that it points to. So the index itself requires fewer disk blocks than the original table, which therefore requires fewer block accesses to iterate through. The schema for an index on the firstName field is outlined below;
Field name Data type Size on disk
firstName Char(50) 50 bytes
(record pointer) Special 4 bytes
Note: Pointers in MySQL are 2, 3, 4 or 5 bytes in length depending on the size of the table.
Example 2 - indexing
Given our sample database of r = 5,000,000 records with an index record length of R = 54 bytes and using the default block size B = 1,024 bytes. The blocking factor of the index would be bfr = (B/R) = 1024/54 = 18 records per disk block. The total number of blocks required to hold the index is N = (r/bfr) = 5000000/18 = 277,778 blocks.
Now a search using the firstName field can utilize the index to increase performance. This allows for a binary search of the index with an average of log2 277778 = 18.08 = 19 block accesses. To find the address of the actual record, which requires a further block access to read, bringing the total to 19 + 1 = 20 block accesses, a far cry from the 1,000,000 block accesses required to find a firstName match in the non-indexed table.
When should it be used?
Given that creating an index requires additional disk space (277,778 blocks extra from the above example, a ~28% increase), and that too many indices can cause issues arising from the file systems size limits, careful thought must be used to select the correct fields to index.
Since indices are only used to speed up the searching for a matching field within the records, it stands to reason that indexing fields used only for output would be simply a waste of disk space and processing time when doing an insert or delete operation, and thus should be avoided. Also given the nature of a binary search, the cardinality or uniqueness of the data is important. Indexing on a field with a cardinality of 2 would split the data in half, whereas a cardinality of 1,000 would return approximately 1,000 records. With such a low cardinality the effectiveness is reduced to a linear sort, and the query optimizer will avoid using the index if the cardinality is less than 30% of the record number, effectively making the index a waste of space.
A: An index is just a data structure that makes the searching faster for a specific column in a database. This structure is usually a b-tree or a hash table but it can be any other logic structure.
A: The first time I read this it was very helpful to me. Thank you.
Since then I gained some insight about the downside of creating indexes:
if you write into a table (UPDATE or INSERT) with one index, you actually have two writing operations in the file system: one for the table data and another one for the index data (and the resorting of it (and - if clustered - the resorting of the table data)). If table and index are located on the same hard disk, this costs more time. Thus a table without an index (a heap) would allow for quicker write operations. (If you had two indexes you would end up with three write operations, and so on.)
However, defining two different locations on two different hard disks for index data and table data can decrease/eliminate the problem of increased cost of time. This requires definition of additional file groups with according files on the desired hard disks and definition of table/index location as desired.
Another problem with indexes is their fragmentation over time as data is inserted. REORGANIZE helps, but you must write routines to have it done.
In certain scenarios a heap is more helpful than a table with indexes,
e.g. if you have lots of competing writes but only one nightly read outside business hours for reporting.
Also, a differentiation between clustered and non-clustered indexes is rather important.
This helped me: What do Clustered and Non clustered index actually mean?
A: Now, let’s say that we want to run a query to find all the details of any employees who are named ‘Abc’?
SELECT * FROM Employee
WHERE Employee_Name = 'Abc'
What would happen without an index?
Database software would literally have to look at every single row in the Employee table to see if the Employee_Name for that row is ‘Abc’. And, because we want every row with the name ‘Abc’ inside it, we cannot just stop looking once we find just one row with the name ‘Abc’, because there could be other rows with the name Abc. So, every row up until the last row must be searched – which means thousands of rows in this scenario will have to be examined by the database to find the rows with the name ‘Abc’. This is what is called a full table scan.
How a database index can help performance
The whole point of having an index is to speed up search queries by essentially cutting down the number of records/rows in a table that need to be examined. An index is a data structure (most commonly a B-tree) that stores the values for a specific column in a table.
How does a B-tree index work?
The reason B-trees are the most popular data structure for indexes is due to the fact that they are time efficient – because look-ups, deletions, and insertions can all be done in logarithmic time. And, another major reason B-trees are more commonly used is because the data that is stored inside the B-tree can be sorted. The RDBMS typically determines which data structure is actually used for an index. But, in some scenarios with certain RDBMS’s, you can actually specify which data structure you want your database to use when you create the index itself.
How does a hash table index work?
The reason hash indexes are used is because hash tables are extremely efficient when it comes to just looking up values. So, queries that compare for equality to a string can retrieve values very fast if they use a hash index.
For instance, the query we discussed earlier could benefit from a hash index created on the Employee_Name column. The way a hash index would work is that the column value will be the key into the hash table and the actual value mapped to that key would just be a pointer to the row data in the table. Since a hash table is basically an associative array, a typical entry would look something like "Abc => 0x28939", where 0x28939 is a reference to the table row where Abc is stored in memory. Looking up a value like "Abc" in a hash table index and getting back a reference to the row in memory is obviously a lot faster than scanning the table to find all the rows with a value of "Abc" in the Employee_Name column.
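A toy sketch of that idea (the container and function names here are just for illustration): the indexed column value is the key, and the mapped value is a list of row references.
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Hash index: column value -> row ids. Equality lookups are O(1) on average,
// but range queries (e.g. age < 40) cannot use this structure at all.
using HashIndex = std::unordered_map<std::string, std::vector<std::uint64_t>>;

void addRow(HashIndex& index, const std::string& name, std::uint64_t rowId)
{
    index[name].push_back(rowId);   // the column value keys the hash table
}

const std::vector<std::uint64_t>* findRows(const HashIndex& index,
                                           const std::string& name)
{
    auto it = index.find(name);     // "Abc" -> list of row references
    return it == index.end() ? nullptr : &it->second;
}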
The disadvantages of a hash index
Hash tables are not sorted data structures, and there are many types of queries which hash indexes can not even help with. For instance, suppose you want to find out all of the employees who are less than 40 years old. How could you do that with a hash table index? Well, it’s not possible because a hash table is only good for looking up key value pairs – which means it only helps queries that check for equality.
What exactly is inside a database index?
So, now you know that a database index is created on a column in a table, and that the index stores the values in that specific column. But, it is important to understand that a database index does not store the values in the other columns of the same table. For example, if we create an index on the Employee_Name column, this means that the Employee_Age and Employee_Address column values are not also stored in the index. If we did just store all the other columns in the index, then it would be just like creating another copy of the entire table – which would take up way too much space and would be very inefficient.
How does a database know when to use an index?
When a query like “SELECT * FROM Employee WHERE Employee_Name = ‘Abc’ ” is run, the database will check to see if there is an index on the column(s) being queried. Assuming the Employee_Name column does have an index created on it, the database will have to decide whether it actually makes sense to use the index to find the values being searched – because there are some scenarios where it is actually less efficient to use the database index, and more efficient just to scan the entire table.
What is the cost of having a database index?
It takes up space – and the larger your table, the larger your index. Another performance hit with indexes is the fact that whenever you add, delete, or update rows in the corresponding table, the same operations will have to be done to your index. Remember that an index needs to contain the same up to the minute data as whatever is in the table column(s) that the index covers.
As a general rule, an index should only be created on a table if the data in the indexed column will be queried frequently.
See also
*
*What columns generally make good indexes?
*How do database indexes work
A: Simple Description!
The index is nothing but a data structure that stores the values for a specific column in a table. An index is created on a column of a table.
Example: We have a database table called User with three columns – Name, Age and Address. Assume that the User table has thousands of rows.
Now, let’s say that we want to run a query to find all the details of any users who are named 'John'.
If we run the following query:
SELECT * FROM User
WHERE Name = 'John'
The database software would literally have to look at every single row in the User table to see if the Name for that row is ‘John’. This will take a long time.
This is where index helps us: index is used to speed up search queries by essentially cutting down the number of records/rows in a table that needs to be examined.
How to create an index:
CREATE INDEX name_index
ON User (Name)
An index consists of column values(Eg: John) from one table, and those values are stored in a data structure.
So now the database will use the index to find employees named John
because the index will presumably be sorted alphabetically by the
user's name. And, because it is sorted, it means searching for a name
is a lot faster because all names starting with a “J” will be right
next to each other in the index!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2846"
} |
Q: Windows Help files - what are the options? Back in the old days, Help was not trivial but possible: generate some funky .rtf file with special tags, run it through a compiler, and you got a WinHelp file (.hlp) that actually works really well.
Then, Microsoft decided that WinHelp was not hip and cool anymore and switched to CHM, up to the point they actually axed WinHelp from Vista.
Now, CHM maybe nice, but everyone that tried to open a .chm file on the Network will know the nice "Navigation to the webpage was canceled" screen that is caused by security restrictions.
While there are ways to make CHM work off the network, this is hardly a good choice, because when a user presses the Help Button he wants help and not have to make some funky settings.
Bottom Line: I find CHM absolutely unusable. But with WinHelp not being an option anymore either, I wonder what the alternatives are, especially when it comes to integrate with my Application (i.e. for WinHelp and CHM there are functions that allow you to directly jump to a topic)?
PDF has the disadvantage of requiring the Adobe Reader (or one of the more lightweight ones that not many people use). I could live with that seeing as this is kind of standard nowadays, but can you tell it reliably to jump to a given page/anchor?
HTML files seem to be the best choice, you then just have to deal with different browsers (CSS and stuff).
Edit: I am looking to create my own Help Files. As I am a fan of the "No Setup, Just Extract and Run" Philosophy, i had that problem many times in the past because many of my users will run it off the network, which causes exactly this problem.
So i am looking for a more robust and future-proof way to provide help to my users without having to code a different help system for each application i make.
CHM is a really nice format, but that Security Stuff makes it unusable, as a Help system is supposed to provide help to the user, not to generate even more problems.
A: I don't like the html option, and actually moved from plain HTML to CHM by compressing and indexing them. We even use them for a handful of non-Windows customers.
It simply solved the constant little breakage of people putting it on the network (nesting depth limited, strange locking effects), antivirus that died in directories with 30000 html files, and 20 minutes decompression time while installing on an older system, browser safety zones and features, miscalculations of needed space in the installer etc.
And then I don't even include the people that start "correcting" them, 3rd-party products with faulty "integration" attempts, etc., or complaints about slowness (browser start-up).
We had all waited years for the problems to go away as OSes and hardware improved, but the problems kept recurring in a bedazzling number of varieties and enough was enough. We found chmlib, and decided we could forever use something based on this, switching to a simple external reader as an escape route if the OS-provided ones stopped working.
Meanwhile we also have an own compiler, so we are MS free future-proof. That doesn't mean we never will change (solutions with local web-servers seem favourite nowadays), but at least we have a choice.
A: Is the question how to generate your own help files, or what is the best help file format?
Personally, I find CHM to be excellent. One of the first things I do when setting up a machine is to download the PHP Manual in CHM format (http://www.php.net/download-docs.php) and add a hotkey to it in Crimson Editor. So when I press F1 it loads the CHM and performs a search for the word my cursor is on (great for quick function reference).
A: Our software is both distributed locally to the clients and served from a network share. We opted for generating both a CHM file and a set of HTML files for serving from the network. Users starting the program locally use the CHM file, and users getting their program served from a network share has to use the HTML files.
We use Help and Manual and can thus easily produce both types of output from the same source project. The HTML files also contain searching capabilities and don't require a web server, so though it isn't an optimal solution, it works fine.
So far all the single-file types for Windows seems broken in one way or another:
*
*WinHelp - obsoleted
*HtmlHelp (CHM) - obsoleted on Vista, doesn't work from network share, other than that works really nice
*Microsoft Help 2 (HXS) - this seems to work right up until the point when it doesn't, corrupted indexes or similar, this is used by Visual Studio 2005 and above, as an example
A: If you don't want to use an installer and you don't want the user to perform any extra steps to allow CHM files over the network, why not fall back to WinHelp? Vista does not include WinHlp32.exe out of the box, but it is freely available as a download for both Vista and Server 2008.
A: It depends on how import the online documentation is to your product, a good documentation infrastructure can be complex to establish but once done it pays off. Here is how we do it -
*
*Help source DITA compilant XML, stored in SCC (ClearCase).
*Help editing XMetal
*Help compilation, customized Open DITA Toolkit, with custom Perl/Java preprocessing
*Help source cross references applications resources at compile time, .RC files etc
*Help deliverables from single source, PDF, CHM, Eclipse Help, HTML.
*Single source repository produces help for multiple products 10+ with thousands of shared topics.
From what you describe I would look at Eclipse Help. It's not simple to integrate into .NET or MFC applications; you basically have to do the help mapping to resolve the request to a URL, then fire the URL to the Eclipse Help wrapper or a browser.
A: If you are doing "just extract and run", you are going to run into security issues. This is especially true if your users are running Vista (or later). Is there a reason why you wanted to avoid packaging your applications inside an installer? Using an installer would alleviate the "external source" problem. You would be able to use .chm files without any problems.
We use InstallAware to create our install packages. It's not cheap, but is very good. If cost is your concern, WIX is open source and pretty robust. WIX does have a learning curve, but it's easy to work with.
A:
PDF has the disadvantage of requiring the Adobe Reader
I use Foxit Reader on Windows at home and at work. A lot smaller and very quick to open. Very handy when you are wondering what exactly a80000326.pdf is and why it is clogging up your documents folder.
A: I think the solution we're going to end up going with for our application is hosting the help files ourselves. This gives us immediate access to the files and the ability to keep them up to date.
What I plan is to have the content loaded into a huge series of XML files, each one containing help for a specific item. This XML would contain links to other XML files. We would use XSLT to display the contents as necessary.
Depending on the licensing, we may build a client-specific XSLT file in order to tailor the look and feel to what they need. We may need to be able to only show help for particular versions of our product as well and that can be done by filtering out stuff in the XSLT.
A: HTML would be the next best choice, ONLY IF you would serve them from a public web server. If you tried to bundle it with your app, all the files (and images (and stylesheets (and ...) ) ) would make CHM look like a gift from gods.
That said, when actually bundled in the installation package, (instead of being served over the network), I found the CHM files to work nicely.
OTOH, another pitfall about CHM files: Even if you try to open a CHM file on a local disk, you may bump into the security block if you initially downloaded it from somewhere, because the file could be marked as "came from external source" when it was obtained.
A: I use a commercial package called AuthorIT that can generate a number of different formats, such as chm, html, pdf, word, windows help, xml, xhtml, and some others I have never heard of (does dita ring a bell?).
It is a content management system oriented towards the needs of technical documentation writers.
The advantage is that you can use and re-use the same content to build a set of guides, and then generate them in different formats.
So the bottom line relative to the question of choosing chm or html or whatever is that if you are using this you are not locked into a given format, but you can provide several among which the user can choose, and you can even add more formats as you go along, at no extra cost.
If you just have one guide to create it won't be worth your while, but if you have a documentation set to manage then it is the best to my knowledge. Their support is very helpful also.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
} |
Q: Why am I getting a double free or corruption error with realloc()? I've tried to write a string replace function in C, which works on a char *, which has been allocated using malloc(). It's a little different in that it will find and replace strings, rather than characters in the starting string.
It's trivial to do if the search and replace strings are the same length (or the replace string is shorter than the search string), since I have enough space allocated. If I try to use realloc(), I get an error that tells me I am doing a double free - which I don't see how I am, since I am only using realloc().
Perhaps a little code will help:
void strrep(char *input, char *search, char *replace) {
int searchLen = strlen(search);
int replaceLen = strlen(replace);
int delta = replaceLen - searchLen;
char *find = input;
while (find = strstr(find, search)) {
if (delta > 0) {
realloc(input, strlen(input) + delta);
find = strstr(input, search);
}
memmove(find + replaceLen, find + searchLen, strlen(input) - (find - input));
memmove(find, replace, replaceLen);
}
}
The program works, until I try to realloc() in an instance where the replaced string will be longer than the initial string. (It still kind of works, it just spits out errors as well as the result).
If it helps, the calling code looks like:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
void strrep(char *input, char *search, char *replace);
int main(void) {
char *input = malloc(81);
while ((fgets(input, 81, stdin)) != NULL) {
strrep(input, "Noel", "Christmas");
}
}
A: Just a shot in the dark because I haven't tried it yet but when you realloc it returns the pointer much like malloc. Because realloc can move the pointer if needed you are most likely operating on an invalid pointer if you don't do the following:
input = realloc(input, strlen(input) + delta);
A: Someone else apologized for being late to the party - two and a half months ago. Oh well, I spend quite a lot of time doing software archaeology.
I'm interested that no-one has commented explicitly on the memory leak in the original design, or the off-by-one error. And it was observing the memory leak that tells me exactly why you are getting the double-free error (because, to be precise, you are freeing the same memory multiple times - and you are doing so after trampling over the already freed memory).
Before conducting the analysis, I'll agree with those who say your interface is less than stellar; however, if you dealt with the memory leak/trampling issues and documented the 'must be allocated memory' requirement, it could be 'OK'.
What are the problems? Well, you pass a buffer to realloc(), and realloc() returns you a new pointer to the area you should use - and you ignore that return value. Consequently, realloc() has probably freed the original memory, and then you pass it the same pointer again, and it complains that you're freeing the same memory twice because you pass the original value to it again. This not only leaks memory, but means that you are continuing to use the original space -- and John Downey's shot in the dark points out that you are misusing realloc(), but doesn't emphasize how severely you are doing so. There's also an off-by-one error because you do not allocate enough space for the NUL '\0' that terminates the string.
The memory leak occurs because you do not provide a mechanism to tell the caller about the last value of the string. Because you kept trampling over the original string plus the space after it, it looks like the code worked, but if your calling code freed the space, it too would get a double-free error, or it might get a core dump or equivalent because the memory control information is completely scrambled.
Your code also doesn't protect against indefinite growth -- consider replacing 'Noel' with 'Joyeux Noel'. Every time, you would add 7 characters, but you'd find another Noel in the replaced text, and expand it, and so on and so forth. My fixup (below) does not address this issue - the simple solution is probably to check whether the search string appears in the replace string; an alternative is to skip over the replace string and continue the search after it. The second has some non-trivial coding issues to address.
So, my suggested revision of your called function is:
char *strrep(char *input, char *search, char *replace) {
int searchLen = strlen(search);
int replaceLen = strlen(replace);
int delta = replaceLen - searchLen;
char *find = input;
while ((find = strstr(find, search)) != 0) {
if (delta > 0) {
input = realloc(input, strlen(input) + delta + 1);
find = strstr(input, search);
}
memmove(find + replaceLen, find + searchLen, strlen(input) + 1 - (find - input));
memmove(find, replace, replaceLen);
}
return(input);
}
This code does not detect memory allocation errors - and probably crashes (but if not, leaks memory) if realloc() fails. See Steve Maguire's 'Writing Solid Code' book for an extensive discussion of memory management issues.
A: Note, try to edit your code to get rid of the html escape codes.
Well, though it has been a while since I used C/C++, realloc that grows only reuses the memory pointer value if there is room in memory after your original block.
For instance, consider this:
(xxxxxxxxxx..........)
If your pointer points to the first x, and . means free memory location, and you grow the memory size pointed to by your variable by 5 bytes, it'll succeed. This is of course a simplified example as blocks are rounded up to a certain size for alignment, but anyway.
However, if you subsequently try to grow it by another 10 bytes, and there is only 5 available, it will need to move the block in memory and update your pointer.
However, in your example you are passing the function a pointer to the character, not a pointer to your variable, and thus while the strrep function internally might be able to adjust the variable in use, it is a local variable to the strrep function and your calling code will be left with the original pointer variable value.
This pointer value, however, has been freed.
In your case, input is the culprit.
However, I would make another suggestion. In your case it looks like the input variable is indeed input, and if it is, it shouldn't be modified, at all.
I would thus try to find another way to do what you want to do, without changing input, as side-effects like this can be hard to track down.
A: This seems to work;
char *strrep(char *string, const char *search, const char *replace) {
char *p = strstr(string, search);
if (p) {
int occurrence = p - string;
int stringlength = strlen(string);
int searchlength = strlen(search);
int replacelength = strlen(replace);
if (replacelength > searchlength) {
string = (char *) realloc(string, strlen(string)
+ replacelength - searchlength + 1);
}
if (replacelength != searchlength) {
memmove(string + occurrence + replacelength,
string + occurrence + searchlength,
stringlength - occurrence - searchlength + 1);
}
strncpy(string + occurrence, replace, replacelength);
}
return string;
}
Sigh, is there any way to post code without it sucking?
A: realloc is strange, complicated and should only be used when dealing with lots of memory lots of times per second. i.e. - where it actually makes your code faster.
I have seen code where
realloc(bytes, smallerSize);
was used and worked to resize the buffer, making it smaller. Worked about a million times, then for some reason realloc decided that even if you were shortening the buffer, it would give you a nice new copy. So you crash in a random place 1/2 a second after the bad stuff happened.
Always use the return value of realloc.
A: As a general rule, you should never do a free or realloc on a user provided buffer. You don't know where the user allocated the space (in your module, in another DLL) so you cannot use any of the allocation functions on a user buffer.
Provided that you now cannot do any reallocation within your function, you should change its behavior a little, like doing only one replacement, so the user will be able to compute the resulting string max length and provide you with a buffer long enough for this one replacement to occur.
Then you could create another function to do the multiple replacements, but you will have to allocate the whole space for the resulting string and copy the user input string. Then you must provide a way to delete the string you allocated.
Resulting in:
void strrep(char *input, char *search, char *replace);
char* strrepm(char *input, char *search, char *replace);
void strrepmfree(char *input);
A: First off, sorry I'm late to the party. This is my first stackoverflow answer. :)
As has been pointed out, when realloc() is called, you can potentially change the pointer to the memory being reallocated. When this happens, the argument "string" becomes invalid. Even if you reassign it, the change goes out of scope once the function ends.
To answer the OP, realloc() returns a pointer to the newly-reallocated memory. The return value needs to be stored somewhere. Generally, you would do this:
data *foo = malloc(SIZE * sizeof(data));
data *bar = realloc(foo, NEWSIZE * sizeof(data));
/* Test bar for safety before blowing away foo */
if (bar != NULL)
{
foo = bar;
bar = NULL;
}
else
{
fprintf(stderr, "Crap. Memory error.\n");
free(foo);
exit(-1);
}
As TyBoer points out, you guys can't change the value of the pointer being passed in as the input to this function. You can assign whatever you want, but the change will go out of scope at the end of the function. In the following block, "input" may or may not be an invalid pointer once the function completes:
void foobar(char *input, int newlength)
{
/* Here, I ignore my own advice to save space. Check your return values! */
input = realloc(input, newlength * sizeof(char));
}
Mark tries to work around this by returning the new pointer as the output of the function. If you do that, the onus is on the caller to never again use the pointer he used for input. If it matches the return value, then you have two pointers to the same spot and only need to call free() on one of them. If they don't match, the input pointer now points to memory that may or may not be owned by the process. Dereferencing it could cause a segmentation fault.
You could use a double pointer for the input, like this:
void foobar(char **input, int newlength)
{
*input = realloc(*input, newlength * sizeof(char));
}
If the caller has a duplicate of the input pointer somewhere, that duplicate still might be invalid now.
I think the cleanest solution here is to avoid using realloc() when trying to modify the function caller's input. Just malloc() a new buffer, return that, and let the caller decide whether or not to free the old text. This has the added benefit of letting the caller keep the original string!
A: My quick hints.
Instead of:
void strrep(char *input, char *search, char *replace)
try:
void strrep(char *&input, char *search, char *replace)
and than in the body:
input = realloc(input, strlen(input) + delta);
Generally read about passing function arguments as values/reference and realloc() description :).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: How do I index a database column Hopefully, I can get answers for each database server.
For an outline of how indexing works check out: How does database indexing work?
A: The following is SQL92 standard so should be supported by the majority of RDBMSs that use SQL:
CREATE INDEX [index name] ON [table name] ( [column name] )
A: Sql Server 2005 gives you the ability to specify a covering index. This is an index that includes data from other columns at the leaf level, so you don't have to go back to the table to get columns that aren't included in the index keys.
create nonclustered index my_idx on my_table (my_col1 asc, my_col2 asc) include (my_col3);
This is invaluable for a query that has my_col3 in the select list, and my_col1 and my_col2 in the where clause.
A: For python pytables, indexes don't have names and they are bound to single columns:
tables.columns.column_name.createIndex()
A: In SQL Server, you can do the following: (MSDN Link to full list of options.)
CREATE [ UNIQUE ] [ CLUSTERED | NONCLUSTERED ] INDEX index_name
ON <object> ( column [ ASC | DESC ] [ ,...n ] )
[ INCLUDE ( column_name [ ,...n ] ) ]
[ WHERE <filter_predicate> ]
(ignoring some more advanced options...)
The name of each Index must be unique database wide.
All indexes can have multiple columns, and each column can be ordered in whatever order you want.
Clustered indexes are unique - one per table. They can't have INCLUDEd columns.
Nonclustered indexes are not unique, and can have up to 999 per table. They can have included columns, and where clauses.
A: To create indexes following stuff can be used:
*
*Creates an index on a table. Duplicate values are allowed:
CREATE INDEX index_name
ON table_name (column_name)
*Creates a unique index on a table. Duplicate values are not allowed:
CREATE UNIQUE INDEX index_name ON table_name (column_name)
*Clustered Index: CREATE CLUSTERED INDEX CL_ID ON SALES(ID);
*Non-clustered index:
CREATE NONCLUSTERED INDEX NONCI_PC ON SALES(ProductCode);
Refer: http://www.codeproject.com/Articles/190263/Indexes-in-MS-SQL-Server for details.
A: *
*CREATE INDEX name_index ON Employee (Employee_Name)
*On multiple columns: CREATE INDEX name_index ON Employee (Employee_Name, Employee_Age)
A: Since most of the answers are given for SQL databases, I am writing this for NOSQL databases, specifically for MongoDB.
Below is the syntax to create an index in the MongoDB using mongo shell.
db.collection.createIndex( <key and index type specification>, <options> )
example - db.collection.createIndex( { name: -1 } )
In the above example a single-key descending index is created on the name
field.
Keep in mind that MongoDB indexes use a B-tree data structure.
There are multiple types of indexes we can create in mongodb, for more information refer to below link - https://docs.mongodb.com/manual/indexes/
A: An index is not always needed for all databases. For example, the Kognitio (aka WX2) engine doesn't offer a syntax for indexing, as the database engine takes care of it implicitly. Data goes in via round-robin partitioning and Kognitio WX2 gets data on and off disk in the simplest possible way.
A: We can use the following syntax to create an index.
CREATE INDEX <index_name> ON <table_name>(<column_name>)
If we do not want duplicate values to be allowed, then we can add UNIQUE while creating the index as follows:
CREATE UNIQUE INDEX <index_name> ON <table_name>(<column_name>)
We can create index on multiple column by giving multiple column name separated by ','
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "66"
} |
Q: Use SVN Revision to label build in CCNET I am using CCNET on a sample project with SVN as my source control. CCNET is configured to create a build on every check in. CCNET uses MSBuild to build the source code.
I would like to use the latest revision number to generate AssemblyInfo.cs while compiling.
How can I retrieve the latest revision from subversion and use the value in CCNET?
Edit: I'm not using NAnt - only MSBuild.
A: CruiseControl.Net 1.4.4 has now an Assembly Version Labeller, which generates version numbers compatible with .Net assembly properties.
In my project I have it configured as:
<labeller type="assemblyVersionLabeller" incrementOnFailure="true" major="1" minor="2"/>
(Caveat: assemblyVersionLabeller won't start generating svn revision based labels until an actual commit-triggered build occurs.)
and then consume this from my MSBuild projects with MSBuildCommunityTasks.AssemblyInfo :
<Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets"/>
<Target Name="BeforeBuild">
<AssemblyInfo Condition="'$(CCNetLabel)' != ''" CodeLanguage="CS" OutputFile="Properties\AssemblyInfo.cs"
AssemblyTitle="MyTitle" AssemblyCompany="MyCompany" AssemblyProduct="MyProduct"
AssemblyCopyright="Copyright © 2009" ComVisible="false" Guid="some-random-guid"
AssemblyVersion="$(CCNetLabel)" AssemblyFileVersion="$(CCNetLabel)"/>
</Target>
For sake of completness, it's just as easy for projects using NAnt instead of MSBuild:
<target name="setversion" description="Sets the version number to CruiseControl.Net label.">
<script language="C#">
<references>
<include name="System.dll" />
</references>
<imports>
<import namespace="System.Text.RegularExpressions" />
</imports>
<code><![CDATA[
[TaskName("setversion-task")]
public class SetVersionTask : Task
{
protected override void ExecuteTask()
{
StreamReader reader = new StreamReader(Project.Properties["filename"]);
string contents = reader.ReadToEnd();
reader.Close();
string replacement = "[assembly: AssemblyVersion(\"" + Project.Properties["CCNetLabel"] + "\")]";
string newText = Regex.Replace(contents, @"\[assembly: AssemblyVersion\("".*""\)\]", replacement);
StreamWriter writer = new StreamWriter(Project.Properties["filename"], false);
writer.Write(newText);
writer.Close();
}
}
]]>
</code>
</script>
<foreach item="File" property="filename">
<in>
<items basedir="..">
<include name="**\AssemblyInfo.cs"></include>
</items>
</in>
<do>
<setversion-task />
</do>
</foreach>
</target>
A: I found this project on Google Code. It is a CCNET plugin to generate the label in CCNET.
The DLL is tested with CCNET 1.3 but it works with CCNET 1.4 for me. I'm successfully using this plugin to label my build.
Now onto passing it to MSBuild...
A: If you prefer doing it on the MSBuild side over the CCNet config, looks like the MSBuild Community Tasks extension's SvnVersion task might do the trick.
A: I have written a NAnt build file that handles parsing SVN information and creating properties. I then use those property values for a variety of build tasks, including setting the label on the build. I use this target combined with the SVN Revision Labeller mentioned by lubos hasko with great results.
<target name="svninfo" description="get the svn checkout information">
<property name="svn.infotempfile" value="${build.directory}\svninfo.txt" />
<exec program="${svn.executable}" output="${svn.infotempfile}">
<arg value="info" />
</exec>
<loadfile file="${svn.infotempfile}" property="svn.info" />
<delete file="${svn.infotempfile}" />
<property name="match" value="" />
<regex pattern="URL: (?'match'.*)" input="${svn.info}" />
<property name="svn.info.url" value="${match}"/>
<regex pattern="Repository Root: (?'match'.*)" input="${svn.info}" />
<property name="svn.info.repositoryroot" value="${match}"/>
<regex pattern="Revision: (?'match'\d+)" input="${svn.info}" />
<property name="svn.info.revision" value="${match}"/>
<regex pattern="Last Changed Author: (?'match'\w+)" input="${svn.info}" />
<property name="svn.info.lastchangedauthor" value="${match}"/>
<echo message="URL: ${svn.info.url}" />
<echo message="Repository Root: ${svn.info.repositoryroot}" />
<echo message="Revision: ${svn.info.revision}" />
<echo message="Last Changed Author: ${svn.info.lastchangedauthor}" />
</target>
A: I am currently "manually" doing it through a prebuild-exec task, using my cmdnetsvnrev tool, but if someone knows a better ccnet-integrated way of doing it, I'd be happy to hear :-)
A:
Customizing csproj files to autogenerate AssemblyInfo.cs
http://www.codeproject.com/KB/dotnet/Customizing_csproj_files.aspx
Every time we create a new C# project,
Visual Studio puts there the
AssemblyInfo.cs file for us. The file
defines the assembly meta-data like
its version, configuration, or
producer.
Found the above technique to auto-gen AssemblyInfo.cs using MSBuild. Will post sample shortly.
A: I'm not sure if this works with CCNET or not, but I've created an SVN version plug-in for the Build Version Increment project on CodePlex. This tool is pretty flexible and can be set to automatically create a version number for you using the svn revision. It doesn't require writing any code or editing xml, so yay!
I hope this is helps!
A: My approach is to use the aforementioned plugin for ccnet and a nant echo task to generate a VersionInfo.cs file containing nothing but the version attributes. I only have to include the VersionInfo.cs file in the build.
The echo task simply outputs the string I give it to a file.
If there is a similar MSBuild task, you can use the same approach. Here's the small nant task I use:
<target name="version" description="outputs version number to VersionInfo.cs">
<echo file="${projectdir}/Properties/VersionInfo.cs">
[assembly: System.Reflection.AssemblyVersion("$(CCNetLabel)")]
[assembly: System.Reflection.AssemblyFileVersion("$(CCNetLabel)")]
</echo>
</target>
Try this:
<ItemGroup>
<VersionInfoFile Include="VersionInfo.cs"/>
<VersionAttributes Include="[assembly: System.Reflection.AssemblyVersion(&quot;$(CCNetLabel)&quot;)]"/>
<VersionAttributes Include="[assembly: System.Reflection.AssemblyFileVersion(&quot;$(CCNetLabel)&quot;)]"/>
</ItemGroup>
<Target Name="WriteToFile">
<WriteLinesToFile
File="@(VersionInfoFile)"
Lines="@(VersionAttributes)"
Overwrite="true"/>
</Target>
Please note that I'm not very intimate with MSBuild, so my script will probably not work out-of-the-box and need corrections...
A: Based on skolima's solution I updated the NAnt script to also update the AssemblyFileVersion. Thanks to skolima for the code!
<target name="setversion" description="Sets the version number to current label.">
<script language="C#">
<references>
<include name="System.dll" />
</references>
<imports>
<import namespace="System.Text.RegularExpressions" />
</imports>
<code><![CDATA[
[TaskName("setversion-task")]
public class SetVersionTask : Task
{
protected override void ExecuteTask()
{
StreamReader reader = new StreamReader(Project.Properties["filename"]);
string contents = reader.ReadToEnd();
reader.Close();
// replace assembly version
string replacement = "[assembly: AssemblyVersion(\"" + Project.Properties["label"] + "\")]";
contents = Regex.Replace(contents, @"\[assembly: AssemblyVersion\("".*""\)\]", replacement);
// replace assembly file version
replacement = "[assembly: AssemblyFileVersion(\"" + Project.Properties["label"] + "\")]";
contents = Regex.Replace(contents, @"\[assembly: AssemblyFileVersion\("".*""\)\]", replacement);
StreamWriter writer = new StreamWriter(Project.Properties["filename"], false);
writer.Write(contents);
writer.Close();
}
}
]]>
</code>
</script>
<foreach item="File" property="filename">
<in>
<items basedir="${srcDir}">
<include name="**\AssemblyInfo.cs"></include>
</items>
</in>
<do>
<setversion-task />
</do>
</foreach>
</target>
A: No idea where I found this. But I found this on the internet "somewhere".
This updates all the AssemblyInfo.cs files before the build takes place.
Works like a charm. All my exe's and dll's show up as 1.2.3.333 (If "333" were the SVN revision at the time.) (And the original version in the AssemblyInfo.cs file was listed as "1.2.3.0")
$(ProjectDir) (Where my .sln file resides)
$(SVNToolPath) (points to svn.exe)
are my custom variables, their declarations/definitions are not defined below.
http://msbuildtasks.tigris.org/
and/or
https://github.com/loresoft/msbuildtasks
has the ( FileUpdate and SvnVersion ) tasks.
<Target Name="SubVersionBeforeBuildVersionTagItUp">
<ItemGroup>
<AssemblyInfoFiles Include="$(ProjectDir)\**\*AssemblyInfo.cs" />
</ItemGroup>
<SvnVersion LocalPath="$(MSBuildProjectDirectory)" ToolPath="$(SVNToolPath)">
<Output TaskParameter="Revision" PropertyName="MySubVersionRevision" />
</SvnVersion>
<FileUpdate Files="@(AssemblyInfoFiles)"
Regex="(\d+)\.(\d+)\.(\d+)\.(\d+)"
ReplacementText="$1.$2.$3.$(MySubVersionRevision)" />
</Target>
EDIT --------------------------------------------------
The above may start failing after your SVN revision number reaches 65534 or higher.
See:
Turn off warning CS1607
Here is the workaround.
<FileUpdate Files="@(AssemblyInfoFiles)"
Regex="AssemblyFileVersion\("(\d+)\.(\d+)\.(\d+)\.(\d+)"
ReplacementText="AssemblyFileVersion("$1.$2.$3.$(SubVersionRevision)" />
The result of this should be:
In Windows Explorer / File / Properties…
Assembly Version will be 1.0.0.0.
File Version will be 1.0.0.333 if 333 is the SVN revision.
A: You have basically two options. Either you write a simple script that will start and parse output from
svn.exe info --revision HEAD
to obtain revision number (then generating AssemblyInfo.cs is pretty much straight forward) or just use plugin for CCNET. Here it is:
SVN Revision Labeller is a plugin for
CruiseControl.NET that allows you to
generate CruiseControl labels for your
builds, based upon the revision number
of your Subversion working copy. This
can be customised with a prefix and/or
major/minor version numbers.
http://code.google.com/p/svnrevisionlabeller/
I prefer the first option because it's only roughly 20 lines of code:
using System;
using System.Diagnostics;
namespace SvnRevisionNumberParserSample
{
class Program
{
static void Main()
{
Process p = Process.Start(new ProcessStartInfo()
{
FileName = @"C:\Program Files\SlikSvn\bin\svn.exe", // path to your svn.exe
UseShellExecute = false,
RedirectStandardOutput = true,
Arguments = "info --revision HEAD",
WorkingDirectory = @"C:\MyProject" // path to your svn working copy
});
// command "svn.exe info --revision HEAD" will produce a few lines of output
p.WaitForExit();
// our line starts with "Revision: "
while (!p.StandardOutput.EndOfStream)
{
string line = p.StandardOutput.ReadLine();
if (line.StartsWith("Revision: "))
{
string revision = line.Substring("Revision: ".Length);
Console.WriteLine(revision); // show revision number on screen
break;
}
}
Console.Read();
}
}
}
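Once the revision string has been parsed, generating the version file is little more than string formatting. A hedged sketch that could slot into the Program class above (the output path and the 1.0.0 prefix are placeholders):
static void WriteVersionInfo(string revision)
{
    // Hypothetical helper: writes a VersionInfo.cs that the project includes and compiles.
    string contents =
        "[assembly: System.Reflection.AssemblyVersion(\"1.0.0." + revision + "\")]\r\n" +
        "[assembly: System.Reflection.AssemblyFileVersion(\"1.0.0." + revision + "\")]\r\n";
    System.IO.File.WriteAllText(@"C:\MyProject\Properties\VersionInfo.cs", contents);
}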
A: Be careful. The structure used for build numbers is only a short so you have a ceiling on how high your revision can go.
In our case, we've already exceeded the limit.
If you attempt to put in the build number 99.99.99.599999, the file version property will actually come out as 99.99.99.10175.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
} |
Q: What is the most efficient graph data structure in Python? I need to be able to manipulate a large (10^7 nodes) graph in python. The data corresponding to each node/edge is minimal, say, a small number of strings. What is the most efficient, in terms of memory and speed, way of doing this?
A dict of dicts is more flexible and simpler to implement, but I intuitively expect a list of lists to be faster. The list option would also require that I keep the data separate from the structure, while dicts would allow for something of the sort:
graph[I][J]["Property"]="value"
What would you suggest?
Yes, I should have been a bit clearer on what I mean by efficiency. In this particular case I mean it in terms of random access retrieval.
Loading the data in to memory isn't a huge problem. That's done once and for all. The time consuming part is visiting the nodes so I can extract the information and measure the metrics I'm interested in.
I hadn't considered making each node a class (properties are the same for all nodes) but it seems like that would add an extra layer of overhead? I was hoping someone would have some direct experience with a similar case that they could share. After all, graphs are one of the most common abstractions in CS.
A: As already mentioned, NetworkX is very good, with another option being igraph. Both modules will have most (if not all) the analysis tools you're likely to need, and both libraries are routinely used with large networks.
A: I would strongly advocate you look at NetworkX. It's a battle-tested war horse and the first tool most 'research' types reach for when they need to do analysis of network-based data. I have manipulated graphs with hundreds of thousands of edges without problem on a notebook. It's feature-rich and very easy to use. You will find yourself focusing more on the problem at hand rather than the details of the underlying implementation.
Example of Erdős-Rényi random graph generation and analysis
"""
Create an G{n,m} random graph with n nodes and m edges
and report some properties.
This graph is sometimes called the Erdős-Rényi graph
but is different from G{n,p} or binomial_graph which is also
sometimes called the Erdős-Rényi graph.
"""
__author__ = """Aric Hagberg ([email protected])"""
__credits__ = """"""
# Copyright (C) 2004-2006 by
# Aric Hagberg
# Dan Schult
# Pieter Swart
# Distributed under the terms of the GNU Lesser General Public License
# http://www.gnu.org/copyleft/lesser.html
from networkx import *
import sys
n=10 # 10 nodes
m=20 # 20 edges
G=gnm_random_graph(n,m)
# some properties
print "node degree clustering"
for v in nodes(G):
print v,degree(G,v),clustering(G,v)
# print the adjacency list to terminal
write_adjlist(G,sys.stdout)
Visualizations are also straightforward:
More visualization: http://jonschull.blogspot.com/2008/08/graph-visualization.html
A: A dictionary may also contain overhead, depending on the actual implementation. A hashtable usually contains some prime number of available nodes to begin with, even though you might only use a couple of the nodes.
Judging by your example, "Property", would you be better off with a class approach for the final level and real properties? Or are the names of the properties changing a lot from node to node?
I'd say that what "efficient" means depends on a lot of things, like:
*
*speed of updates (insert, update, delete)
*speed of random access retrieval
*speed of sequential retrieval
*memory used
I think that you'll find that a data structure that is speedy will generally consume more memory than one that is slow. This isn't always the case, but most data structures seem to follow this.
A dictionary might be easy to use and give you relatively uniformly fast access, but it will most likely use more memory than, as you suggest, lists. Lists, however, generally tend to incur more overhead when you insert data into them, unless they preallocate X nodes, in which case they will again use more memory.
My suggestion, in general, would be to just use the method that seems the most natural to you, and then do a "stress test" of the system, adding a substantial amount of data to it and see if it becomes a problem.
You might also consider adding a layer of abstraction to your system, so that you don't have to change the programming interface if you later on need to change the internal data structure.
A: As I understand it, random access is in constant time for both Python's dicts and lists; the difference is that you can only do random access by integer index with lists. I'm assuming that you need to look up a node by its label, so you want a dict of dicts.
However, on the performance front, loading it into memory may not be a problem, but if you use too much you'll end up swapping to disk, which will kill the performance of even Python's highly efficient dicts. Try to keep memory usage down as much as possible. Also, RAM is amazingly cheap right now; if you do this kind of thing a lot, there's no reason not to have at least 4GB.
If you'd like advice on keeping memory usage down, give some more information about the kind of information you're tracking for each node.
A: Making a class-based structure would probably have more overhead than the dict-based structure, since in Python classes are actually implemented using dicts.
A: Even though this question is now quite old, I think it is worthwhile to mention my own python module for graph manipulation called graph-tool. It is very efficient, since the data structures and algorithms are implemented in C++, with template metaprogramming, using the Boost Graph Library. Therefore its performance (both in memory usage and runtime) is comparable to a pure C++ library, and can be orders of magnitude better than typical python code, without sacrificing ease of use. I use it myself constantly to work with very large graphs.
A: No doubt NetworkX is the best data structure till now for graphs. It comes with utilities like helper functions, data structures and algorithms, random sequence generators, decorators, Cuthill-Mckee ordering, and context managers.
NetworkX is great because it works for graphs, digraphs, and multigraphs. It can write graphs in multiple formats: Adjacency List, Multiline Adjacency List,
Edge List, GEXF, GML. It works with Pickle, GraphML, JSON, SparseGraph6 etc.
It has implementations of various ready-made algorithms including:
Approximation, Bipartite, Boundary, Centrality, Clique, Clustering, Coloring, Components, Connectivity, Cycles, Directed Acyclic Graphs,
Distance Measures, Dominating Sets, Eulerian, Isomorphism, Link Analysis, Link Prediction, Matching, Minimum Spanning Tree, Rich Club, Shortest Paths, Traversal, Tree.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "73"
} |
Q: How to make subdomain user accounts in a webapp I am looking to allow users to take control of a subdomain of an app I am toying with, much like Basecamp where it is customusername.seework.com.
What is required on the DNS end to allow these to be created dynamically and be available instantly?
And how do you recommend dealing with this in the logic of the site? Htaccess rule to lookup the subdomain in the DB?
A: Don't worry about DNS and URL rewriting
Your DNS record will be static, something like:
*.YOURDOMAIN.COM A 123.123.123.123
Ask your DNS provider to do it for you (if it's not done already) or do it by yourself if you have control over your DNS records. This will automatically point all your subdomains (current and future ones) into the same HTTP server.
Once it's done, you will only need to parse HOST header on every single http request to detect what hostname was used to access your server-side scripts on your http server.
Assuming you're using ASP.NET, this is kind of silly example I came up with but works and demonstrates simplicity of this approach:
<%@ Language="C#" %>
<%
string subDomain = Request.Url.Host.Split('.')[0].ToUpper();
if (subDomain == "CLIENTXXX") Response.Write("Hello CLIENTXXX, your secret number is 33");
else if (subDomain == "CLIENTYYY") Response.Write("Hello CLIENTYYY, your secret number is 44");
else Response.Write(subDomain+" doesn't exist");
%>
A: The trick to that is to use URL rewriting so that name.domain.com transparently maps to something like domain.com/users/name on your server. Once you start down that path, it's fairly trivial to implement.
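For an ASP.NET stack, one hedged sketch of that rewrite lives in Global.asax; the /users/{name} internal path is made up for illustration, and the wildcard DNS/host header setup still has to be in place:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // e.g. "customusername.seework.com" -> "customusername"
    string[] hostParts = Request.Url.Host.Split('.');
    if (hostParts.Length > 2 && hostParts[0].ToLower() != "www")
    {
        // Internally rewrite to /users/{name}; the browser still sees the subdomain URL.
        Context.RewritePath("/users/" + hostParts[0] + Request.Path,
                            null, Request.QueryString.ToString());
    }
}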
A: The way we do this is to have a 'catch all' for our domain name registered in DNS so that anything.ourdomain.com will point to our server.
With Apache you can set up a similar catch-all for your vhosts. The ServerName must be a single static name but the ServerAlias directive can contain a pattern.
Servername www.ourdomain.com
ServerAlias *.ourdomain.com
Now all of the domains will trigger the vhost for our project. The final part is to decode the domain name actually used so that you can work out the username in your code, something like (PHP):
list( $username ) = explode( ".", $_SERVER[ "HTTP_HOST" ] );
or a RewriteRule as already suggested that silently maps user.ourdomain.com/foo/bar to www.ourdomain.com/foo/bar?user=user or whatever you prefer.
A: I was looking to do something similar (www.mysite.com/SomeUser).
What I did was I edited 404.shtml to include this server side include (SSI) code:
<!--#include virtual="404.php" -->
Then I created the file 404.php, where I parsed the URL to check for a user's name and showed their info from the database.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: ViewState invalid only in Safari One of the sites I maintain relies heavily on the use of ViewState (it isn't my code). However, on certain pages where the ViewState is extra-bloated, Safari throws a "Validation of viewstate MAC failed" error.
This appears to only happen in Safari. Firefox, IE and Opera all load successfully in the same scenario.
A: While I second the Channel 9 solution, also be aware that in some hosted environments Safari is not considered an up-level browser. You may need to add it to your application's browscap in order to make use of some ASP.Net features.
That was the root cause of some headaches we had for a client's site that used the ASP Menu control.
A: My first port of call would be to go through the elements on the page and see which controls:
*
*Will still work when I switch ViewState off
*Can be moved out of the page and into an AJAX call to be loaded when required
Failing that, and here's the disclaimer - I've never used this solution on a web-facing site - but in the past where I've wanted to eliminate massive ViewStates in limited-audience applications I have stored the ViewState in the Session.
It has worked for me because the hit to memory isn't significant for the number of users, but if you're running a fairly popular site I wouldn't recommend this approach. However, if the Session solution works for Safari you could always detect the user agent and fudge appropriately.
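In ASP.NET 2.0 and later, one hedged way to do exactly that is to override the page's PageStatePersister in a common base page; the class name below is just an illustration:
using System.Web.UI;

public class SafariFriendlyPage : Page
{
    protected override PageStatePersister PageStatePersister
    {
        get
        {
            // Only Safari gets its view state stored in Session; other browsers keep the default hidden field.
            bool isSafari = Request.UserAgent != null && Request.UserAgent.Contains("Safari");
            return isSafari
                ? (PageStatePersister)new SessionPageStatePersister(this)
                : new HiddenFieldPageStatePersister(this);
        }
    }
}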
A: I've been doing a little research into this and, whilst I'm not entirely sure it's the cause, I believe it is because Safari is not returning the full result set (hence cropping it).
I have been in discussion with another developer and found the following post on Channel 9 as well, which recommends making use of the SQL State service to store the viewstate, avoiding the postback issue and also the page size.
http://channel9.msdn.com/forums/TechOff/250549-ASPNET-ViewState-flawed-architecture/?CommentID=270477#263702
Does this seem like the best solution?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: How do I hyperlink to a specific slide of a .ppt file? I want to link to a specific slide in an online PowerPoint file, (e.g. http://www.example.com/hello.ppt) but what I want is that when people click on my link, it goes straight to the nth slide.
Is this possible?
A: Also, if you want a link to the nth slide within the same file (to automate the 'Edit Hyperlink' > 'Place in This Document' action) then omit the filename and only set it to "#n".
A: Link to
http://www.whatever.com/hello.ppt#4
for slide 4 to be opened initially.
Take a look at this site:
http://www.tinyempire.com/shortnotes/files/specific_slide_in_ppt.htm
A: I found the answer. All you have to do is add #n after .ppt.
For example, http://www.whatever.com/hello.ppt#4 will take you straight to the 4th slide.
A: The following method applies to any online version of PowerPoint (e.g. Office 365).
*
*Open the powerpoint file in your browser (the link could look like this: http://YourCompany.sharepoint.com/:p:/r/sites/SiteName/...)
*In the thumbnails list (on the left), right-click on the slide you want to refer to and choose "link to this slide" (free translation, my version is in Dutch "Koppeling naar deze dia")
*In the pop-up screen, you can choose to share a read-only or an editable version
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Regex: To pull out a sub-string between two tags in a string I have a file in the following format:
Data Data
Data
[Start]
Data I want
[End]
Data
I'd like to grab the Data I want from between the [Start] and [End] tags using a Regex. Can anyone show me how this might be done?
A: \[start\](.*?)\[end\]
Which'll put the text in the middle within a capture group.
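For example, a hedged C# illustration of using that pattern (Singleline is needed so "." can cross the line breaks between [Start] and [End], and IgnoreCase covers the capitalised tags in the question):
using System;
using System.Text.RegularExpressions;

class Demo
{
    static void Main()
    {
        string text = "Data Data\nData\n[Start]\nData I want\n[End]\nData";
        Match m = Regex.Match(text, @"\[start\](.*?)\[end\]",
                              RegexOptions.IgnoreCase | RegexOptions.Singleline);
        if (m.Success)
            Console.WriteLine(m.Groups[1].Value.Trim()); // prints "Data I want"
    }
}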
A: $text ="Data Data Data start Data i want end Data";
($content) = $text =~ m/ start (.*) end /;
print $content;
I had a similar problem for a while & I can tell you this method works...
A: A more complete discussion of the pitfalls of using a regex to find matching tags can be found at: http://faq.perl.org/perlfaq4.html#How_do_I_find_matchi. In particular, be aware that nesting tags really need a full-fledged parser in order to be interpreted correctly.
Note that case sensitivity will need to be turned off in order to answer the question as stated. In perl, that's the i modifier:
$ echo "Data Data Data [Start] Data i want [End] Data" \
| perl -ne '/\[start\](.*?)\[end\]/i; print "$1\n"'
Data i want
The other trick is to use the *? quantifier which turns off the greediness of the captured match. For instance, if you have a non-matching [end] tag:
Data Data [Start] Data i want [End] Data [end]
you probably don't want to capture:
Data i want [End] Data
A: While you can use a regular expression to parse the data between opening and closing tags, you need to think long and hard as to whether this is a path you want to go down. The reason for it is the potential of tags to nest: if nesting tags could ever happen or may ever happen, the language is said to no longer be regular, and regular expressions cease to be the proper tool for parsing it.
Many regular expression implementations, such as PCRE or perl's regular expressions, support backtracking which can be used to achieve this rough effect. But PCRE (unlike perl) doesn't support unlimited backtracking, and this can actually cause things to break in weird ways as soon as you have too many tags.
There's a very commonly cited blog post that discusses this more, http://kore-nordmann.de/blog/do_NOT_parse_using_regexp.html (google for it and check the cache currently, they seem to be having some downtime)
A: Well, if you guarantee that each start tag is followed by an end tag then the following would work.
\[start\](.*?)\[end\]
However, if you have complex text such as the following:
[start] sometext [start] sometext2 [end] sometext [end]
then you would run into problems with regex.
Now the following example will pull out all the hot links in a page:
'/<a(.*?)a>/i'
In the above case we can guarantee that there would not be any nested cases of:
'<a></a>'
So, this is a complex question and can't just be solved with a simple answer.
A: \[start\]\s*(((?!\[start\]|\[end\]).)+)\s*\[end\]
This should hopefully drop the [start] and [end] markers as well.
A: With Perl you can surround the data you want with ()'s and pull it out later, perhaps other languages have a similar feature.
if ($s_output =~ /(data data data data START(data data data)END (data data))/)
{
$dataAllOfIt = $1; # 1 full string
$dataInMiddle = $2; # 2 Middle Data
$dataAtEnd = $3; # 3 End Data
}
A: Refer to this question to pull out text between tags with space characters and dots (.)
[\S\s] is the one I used
Regex to match any character including new lines
A: Reading the text within the square brackets [], i.e. [Start] and [End], and validating the array against a list of values. jsfiddle: http://jsfiddle.net/muralinarisetty/r4s4wxj4/1/
var mergeFields = ["[sitename]",
"[daystoholdquote]",
"[expires]",
"[firstname]",
"[lastname]",
"[sitephonenumber]",
"[hoh_firstname]",
"[hoh_lastname]"];
var str = "fee [sitename] [firstname] \
sdfasd [lastname] ";
var res = validateMeargeFileds(str);
console.log(res);
function validateMeargeFileds(input) {
var re = /\[\w+]/ig;
var isValid;
var myArray = input.match(re);
try{
if (myArray.length > 0) {
myArray.forEach(function (field) {
isValid = isMergeField(field);
if (!isValid){
throw e;
}
});
}
}
catch(e) {
}
return isValid;
}
function isMergeField(mergefield) {
return mergeFields.indexOf(mergefield.toLowerCase()) > -1;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48"
} |
Q: Asynchronous multi-direction server-client communication over the same open socket? I have a client-server app where the client is on a Windows Mobile 6 device, written in C++ and the server is on full Windows and written in C#.
Originally, I only needed it to send messages from the client to the server, with the server only ever sending back an acknowledgement that it received the message. Now, I would like to update it so that the server can actually send a message to the client to request data. As I currently have it set up, the client is only in receive mode after it sends data to the server, which doesn't allow the server to send a request at any time; I would have to wait for client data. My first thought would be to create another thread on the client with a separate open socket, listening for server requests... just like the server already has with respect to the client. Is there a way, within the same thread and using the same socket, to allow the server to send requests at any time?
Can you use something to the effect of WaitForMultipleObjects() and pass it a receive buffer and an event that tells it there is data to be sent?
A: When I needed to write an application with a client-server model where the clients could leave and enter whenever they want, (I assume that's also the case for your application as you use mobile devices) I made sure that the clients send an online message to the server, indicating they were connected and ready to do whatever they needed doing.
At that time, the server could send messages back to the client through the same open connection.
Also, but I don't know if that is applicable for you, I had some sort of heartbeat the clients sent to the server, letting it know it was still online. That way the server knows when a client was forcibly disconnected from the network and it could mark that client back as offline.
A: Using asynchronous communication is totally possible in single thread!
There is a common design pattern in network software development called the reactor pattern (look at this book). Some well known network library provides an implementation of this pattern (look at ACE).
Briefly, the reactor is an object; you register all your sockets with it, and you wait for something. If something happens (new data arrived, connection closed...) the reactor will notify you. And of course, you can use only one socket to send and receive data asynchronously.
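On the C# side, a rough single-threaded sketch of that idea can be built on Socket.Select, which blocks until one of the watched sockets is readable; the port number and the echo-style handling below are just placeholders:
using System.Collections;
using System.Net;
using System.Net.Sockets;

class SingleThreadedServer
{
    static void Main()
    {
        Socket listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, 9000));
        listener.Listen(10);

        ArrayList clients = new ArrayList();
        byte[] buffer = new byte[4096];

        while (true)
        {
            // Watch the listener plus every connected client in one blocking call.
            ArrayList readSet = new ArrayList(clients);
            readSet.Add(listener);
            Socket.Select(readSet, null, null, 1000000); // 1 second timeout, in microseconds

            foreach (Socket s in readSet)
            {
                if (s == listener)
                {
                    clients.Add(listener.Accept());           // a new client connected
                }
                else
                {
                    int read = s.Receive(buffer);
                    if (read == 0) { clients.Remove(s); s.Close(); continue; }
                    // ...handle the client's message; the same socket can also be written to at any time...
                    s.Send(buffer, read, SocketFlags.None);
                }
            }
        }
    }
}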
A: I'm not clear on whether or not you're wanting to add the asynchronous bits to the server in C# or the client in C++.
If you're talking about doing this in C++, desktop Windows platforms can do socket I/O asynchronously through the API's that use overlapped I/O. For sockets, WSASend, WSARecv both allow async I/O (read the documentation on their LPOVERLAPPED parameters, which you can populate with events that get set when the I/O completes).
I don't know if Windows Mobile platforms support these functions, so you might have to do some additional digging.
A: Check out asio. It is a cross-platform C++ library for asynchronous IO. I am not sure if this would be useful for the server (I have never tried to link a standard C++ DLL to a C# project) but for the client it would be useful.
We use it with our application, and it solved most of our IO concurrency problems.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: What are the advantages of using SVN over CVS? My company is using CVS as our de-facto standard for source control. However, I've heard a lot of people say that SVN is better.
I know SVN is newer, but other than that, I'm unfamiliar with its benefits.
What I'm looking for is a good, succinct comparison of the two systems, noting any advantages or disadvantages of each in a Java/Eclipse development environment.
A: The Subversion book has an appendix that details important differences from CVS, which may help you make your decision. The two approaches are more or less the same idea but SVN was specifically designed to fix long standing flaws in CVS so, in theory at least, SVN will always be the better choice.
A: CVS only tracks modification on a file-by-file basis, while SVN tracks a whole commit as a new revision, which means that it is easier to follow the history of your project. Add the fact that all modern source control software use the concept of revision so it is far easier to migrate from SVN than it is from CVS.
There is also the atomic commit problem. While I only encountered it once, it is possible that 2 people committing together in CVS can conflict each other, losing some data and putting your client in an inconsistent state. When detected early, these problems are not major because your data is still out there somewhere, but it can be a pain in a stressful environment.
And finally, not many tools are developed around CVS anymore. While the new and shiny-new tools like Git or Mercurial definitely lack tools yet, SVN has a pretty large application base on any system.
EDIT 2020: Seriously, this answer is 12 years old now. Forget SVN, go use Git like everyone else!
A: I'll second Eridius' suggestion of Git, but I'd expand it to the other DRCS (Distributed Revision Control System) such as Mercurial and bazaar.
These products are fairly recent and the level of tooling and integration with them seems low at the moment (based on my initial research). I'd say they were best suited to the power-developers out there (and on here ;-)).
On the other hand, what doesn't CVS currently do for you? From your initial question, you don't really have any, "CVS sucks at this, what could I use instead?"
You've gotta weigh up the costs of any potential migration against the benefits. For an existing project, I think that it would be hard to justify.
A: One thing not to overlook is ecosystem. I was working at a CVSNT shop, and I was finding more and more open source tools supported SubVersion by default.
A: btw: CVSNT supports atomic commits
A: As someone who is in the middle of switching between CVS and SVN (initially we switched all of our projects with cvs2svn and then decided that we would transition by only using svn on new projects), here are some of the problems we have had.
*
*Merging and branching are very different, and if you branch and merge frequently, then unless you have SVN 1.5 running on your server you have to know when you branched (this isn't very clear in the TortoiseSVN dialogs). Michael says the branching and merging is intuitive; I would argue that after using CVS for 10 years, it is not.
*If you are running the SVN server on Linux, it may be hard to get your SA to move to svn 1.5, as the default install is 1.4.x.
*Merging conflicts is not nearly as easy or as clear (at least to me and my co-workers) in TortoiseSVN as it is in TortoiseCVS. The three-pane approach takes some getting used to and WinMerge (my preferred merge tool) doesn't do a three-pane merge.
*Beware: many of the online tutorials and magazine articles I have read obviously don't branch and merge; you should set up your main repository as https://svn.yoursvnserver.com/repos/YourProject/Trunk and branches on https://svn.yoursvnserver.com/repos/YourProject/Branches/BranchX . You can clean up if you start your repos in the wrong place, but it leads to confusion.
A: One of the many comparisons:
http://wiki.scummvm.org/index.php/CVS_vs_SVN
Now this is very specific to that project, but a lot of stuff applies in general.
Pro Subversion:
*
*Support for versioned renames/moves (impossible with CVS): Fingolfin, Ender
*Supports directories natively: It's possible to remove them, and they are versioned: Fingolfin, Ender
*File properties are versioned; no more "executable bit" hell: Fingolfin
*Overall revision number makes build versioning and regression testing much easier: Ender, Fingolfin
*Atomic commits: Fingolfin
*Intuitive (directory-based) branching and tagging: Fingolfin
*Easier hook scripts (pre/post commit, etc): SumthinWicked (I use it for Doxygen after commits)
*Prevents accidental committing of conflicted files: Salty-horse, Fingolfin
*Support for custom 'diff' command: Fingolfin
*Offline diffs, and they're instant: sev
A: SVN has 3 main advantages over CVS
*
*it's faster
*supports versioning of binary files
*and adds transactional commit (all or nothing)
A: You should take a look at Git instead of SVN. It's a DVCS that's blazing-fast and very powerful. It's not as user-friendly as SVN, but it's improving in that regard, and it's not that hard to learn.
A: CVS (Concurrent Versions System) and SVN (SubVersioN) are two version control file systems that are popularly used by teams who are collaborating on a single project. These systems allow the collaborators to keep track of the changes that are made and know who is developing which and whether a branch should be applied to the main trunk or not. CVS is the much older of the two and it has been the standard collaboration tool for a lot of people. SVN is much newer and it introduces a lot of improvements to address the demands of most people.
A: You might also choose to migrate only the latest code from CVS into SVN and freeze your current CVS repo. This will make migration easier, and you might also build your legacy releases in the old CVS repo.
A: Well, a few things which I feel make SVN awesome.
*
*The SVN-Atlassian Crucible combination is a far superior method of reviews and quality checks
*Better management of conflicts and merges
*It's obviously faster for taking checkouts, performing commits, etc.
*The atomic commit problem - It is possible that 2 people committing together in CVS can conflict each other, losing some data and putting your code base in an inconsistent state
Migration can easily be done in a few hours using cvs2svn.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "66"
} |
Q: How big can a MySQL database get before performance starts to degrade At what point does a MySQL database start to lose performance?
*
*Does physical database size matter?
*Do number of records matter?
*Is any performance degradation linear or exponential?
I have what I believe to be a large database, with roughly 15M records which take up almost 2GB. Based on these numbers, is there any incentive for me to clean the data out, or am I safe to allow it to continue scaling for a few more years?
A: Also watch out for complex joins. Transaction complexity can be a big factor in addition to transaction volume.
Refactoring heavy queries sometimes offers a big performance boost.
A: Another point to consider is the purpose of the system and how the data is used day to day.
For example, in a system with GPS monitoring of cars, it is not relevant to query positions of the car from previous months.
Therefore that data can be moved to historical tables for occasional consultation, reducing the execution times of the day-to-day queries.
A: Performance can degrade within a few thousand rows if the database is not designed properly.
If you have proper indexes, use proper engines (don't use MyISAM where multiple DMLs are expected), use partitioning, allocate the right amount of memory depending on the use and, of course, have a good server configuration, MySQL can handle data even in terabytes!
There are always ways to improve the database performance.
A: It depends on your query and validation.
For example, I worked with a table of 100,000 drugs which has a generic-name column with more than 15 characters for each drug. A query to compare the generic names of drugs between two tables takes minutes to run. By contrast, if you compare the drugs using the drug index, an id column (as said above), it takes only a few seconds.
A: The database size does matter. If you have more than one table with more than a million records, then performance starts indeed to degrade. The number of records does of course affect the performance: MySQL can be slow with large tables. If you hit one million records you will get performance problems if the indices are not set right (for example no indices for fields in "WHERE statements" or "ON conditions" in joins). If you hit 10 million records, you will start to get performance problems even if you have all your indices right. Hardware upgrades - adding more memory and more processor power, especially memory - often help to reduce the most severe problems by increasing the performance again, at least to a certain degree. For example 37 signals went from 32 GB RAM to 128GB of RAM for the Basecamp database server.
A: I'm currently managing a MySQL database on Amazon's cloud infrastructure that has grown to 160 GB. Query performance is fine. What has become a nightmare is backups, restores, adding slaves, or anything else that deals with the whole dataset, or even DDL on large tables. Getting a clean import of a dump file has become problematic. In order to make the process stable enough to automate, various choices needed to be made to prioritize stability over performance. If we ever had to recover from a disaster using a SQL backup, we'd be down for days.
Horizontally scaling SQL is also pretty painful, and in most cases leads to using it in ways you probably did not intend when you chose to put your data in SQL in the first place. Shards, read slaves, multi-master, et al, they are all really shitty solutions that add complexity to everything you ever do with the DB, and not one of them solves the problem; only mitigates it in some ways. I would strongly suggest looking at moving some of your data out of MySQL (or really any SQL) when you start approaching a dataset of a size where these types of things become an issue.
Update: a few years later, and our dataset has grown to about 800 GiB. In addition, we have a single table which is 200+ GiB and a few others in the 50-100 GiB range. Everything I said before holds. It still performs just fine, but the problems of running full dataset operations have become worse.
A:
I would focus first on your indexes, then have a server admin look at your OS, and if all that doesn't help it might be time for a master/slave configuration.
That's true. Another thing that usually works is to just reduce the quantity of data that's repeatedly worked with. If you have "old data" and "new data" and 99% of your queries work with new data, just move all the old data to another table - and don't look at it ;)
-> Have a look at partitioning.
A: 2GB and about 15M records is a very small database - I've run much bigger ones on a Pentium III(!) and everything still ran pretty fast. If yours is slow, it is a database/application design problem, not a MySQL one.
A: The physical database size doesn't matter. The number of records don't matter.
In my experience the biggest problem that you are going to run in to is not size, but the number of queries you can handle at a time. Most likely you are going to have to move to a master/slave configuration so that the read queries can run against the slaves and the write queries run against the master. However if you are not ready for this yet, you can always tweak your indexes for the queries you are running to speed up the response times. Also there is a lot of tweaking you can do to the network stack and kernel in Linux that will help.
I have had mine get up to 10GB, with only a moderate number of connections and it handled the requests just fine.
I would focus first on your indexes, then have a server admin look at your OS, and if all that doesn't help it might be time to implement a master/slave configuration.
A: It's kind of pointless to talk about "database performance", "query performance" is a better term here. And the answer is: it depends on the query, data that it operates on, indexes, hardware, etc. You can get an idea of how many rows are going to be scanned and what indexes are going to be used with EXPLAIN syntax.
2GB does not really count as a "large" database - it's more of a medium size.
A: I was once called upon to look at a MySQL database that had "stopped working". I discovered that the DB files were residing on a Network Appliance filer mounted with NFS2 and with a maximum file size of 2GB. And sure enough, the table that had stopped accepting transactions was exactly 2GB on disk. But with regards to the performance curve, I'm told that it was working like a champ right up until it didn't work at all! This experience always serves for me as a nice reminder that there are always dimensions above and below the one you naturally suspect.
A: In general this is a very subtle issue and not trivial whatsoever. I encourage you to read mysqlperformanceblog.com and High Performance MySQL. I really think there is no general answer for this.
I'm working on a project which has a MySQL database with almost 1TB of data. The most important scalability factor is RAM. If the indexes of your tables fit into memory and your queries are highly optimized, you can serve a reasonable amount of requests with a average machine.
The number of records does matter, depending on how your tables look. It makes a difference whether you have a lot of varchar fields or only a couple of ints or longs.
The physical size of the database matters as well: think of backups, for instance. Depending on your engine, your physical db files only grow and don't shrink, for instance with InnoDB. So deleting a lot of rows doesn't help to shrink your physical files.
There's a lot to these issues and, as in a lot of cases, the devil is in the details.
A: Database size DOES matter, in terms of both bytes and each table's row count. You will notice a huge performance difference between a light database and a blob-filled one. My application once got stuck because I put binary images inside fields instead of keeping the images in files on disk and putting only the file names in the database. Iterating over a large number of rows, on the other hand, is not free.
A: No, it doesn't really matter. MySQL's speed is about 7 million rows per second, so you can scale it quite a bit.
A: Query performance mainly depends on the number of records it needs to scan; indexes play a big role in that, and index data size is proportional to the number of rows and the number of indexes.
Queries with indexed field conditions on the full value are generally returned in about 1ms, but starts_with, IN, BETWEEN and, obviously, contains conditions might take more time with more records to scan.
Also, you will face a lot of maintenance issues with DDL: ALTER and DROP will be slow and difficult with more live traffic, even for adding an index or new columns.
Generally it's advisable to cluster the database into as many clusters as required (500GB would be a general benchmark; as said by others, it depends on many factors and can vary based on use cases). That way you get better isolation and the independence to scale specific clusters (more suited in the case of B2B).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "334"
} |
Q: Catching SQL Injection and other Malicious Web Requests I am looking for a tool that can detect malicious requests (such as obvious SQL injection gets or posts) and will immediately ban the IP address of the requester/add to a blacklist. I am aware that in an ideal world our code should be able to handle such requests and treat them accordingly, but there is a lot of value in such a tool even when the site is safe from these kinds of attacks, as it can lead to saving bandwidth, preventing bloat of analytics, etc.
Ideally, I'm looking for a cross-platform (LAMP/.NET) solution that sits at a higher level than the technology stack; perhaps at the web-server or hardware level. I'm not sure if this exists, though.
Either way, I'd like to hear the community's feedback so that I can see what my options might be with regard to implementation and approach.
A: The problem with a generic tool is that it is very difficult to come up with a set of rules that will only match against a genuine attack.
SQL keywords are all English words, and don't forget that the string
DROP TABLE users;
is perfectly valid in a form field that, for example, contains an answer to a programming question.
The only sensible option is to sanitise the input before ever passing it to your database but pass it on nonetheless. Otherwise lots of perfectly normal, non-malicious users are going to get banned from your site.
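For example, a hedged C# sketch of passing the text straight through as data rather than as SQL, using a parameterized command (the table and column names are made up):
using System.Data.SqlClient;

class AnswerRepository
{
    public static void SaveAnswer(string connectionString, int questionId, string answerText)
    {
        // Even if answerText is literally "DROP TABLE users;", it is stored as text,
        // because it is bound as a parameter value and never concatenated into the SQL.
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Answers (QuestionId, Body) VALUES (@questionId, @body)", conn))
        {
            cmd.Parameters.AddWithValue("@questionId", questionId);
            cmd.Parameters.AddWithValue("@body", answerText);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}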
A: One method that might work for some cases would be to take the sql string that would run if you naively used the form data and pass it to some code that counts the number of statements that would actually be executed. If it is greater than the number expected, then there is a decent chance that an injection was attempted, especially for fields that are unlikely to include control characters such as username.
Something like a normal text box would be a bit harder since this method would be a lot more likely to return false positives, but this would be a start, at least.
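A naive C# illustration of that counting idea, as a helper-method sketch: count the statements the composed SQL would contain by looking at semicolons outside string literals. It is purely a heuristic, not a replacement for proper escaping or parameters:
static int CountSqlStatements(string sql)
{
    // Counts semicolons that sit outside single-quoted literals; a doubled '' inside
    // a literal is treated as an escaped quote. Compare the result with the number of
    // statements you expected to build from the form data.
    int statements = 1;
    bool inLiteral = false;
    for (int i = 0; i < sql.Length; i++)
    {
        if (sql[i] == '\'')
        {
            if (inLiteral && i + 1 < sql.Length && sql[i + 1] == '\'') { i++; continue; }
            inLiteral = !inLiteral;
        }
        else if (sql[i] == ';' && !inLiteral && i + 1 < sql.Length)
        {
            statements++;
        }
    }
    return statements;
}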
A: One little thing to keep in mind: In some countries (i.e. most of Europe), people do not have static IP Addresses, so blacklisting should not be forever.
A: Oracle has got an online tutorial about SQL Injection. Even though you want a ready-made solution, this might give you some hints on how to use it better to defend yourself.
A: You're almost looking at it the wrong way; no third-party tool that is not aware of your application's methods/naming/data/domain is going to be able to perfectly protect you.
Something like SQL injection prevention has to be in the code, and is best written by the people that wrote the SQL, because they are the ones that will know what should/shouldn't be in those fields (unless your project has very good docs).
You're right, this all has been done before. You don't quite have to reinvent the wheel, but you do have to carve a new one because of the differences in everyone's axle diameters.
This is not a drop-in and run problem, you really do have to be familiar with what exactly SQL injection is before you can prevent it. It is a sneaky problem, so it takes equally sneaky protections.
These 2 links taught me far more then the basics on the subject to get started, and helped me better phrase my future lookups on specific questions that weren't answered.
*
*SQL injection
*SQL Injection Attacks by Example
And while this one isn't quite a 100% finder, it will "show you the light" on existing problems in your code, but as with web standards, don't stop coding once you pass this test.
*
*Exploit-Me
A: Now that I think about it, a Bayesian filter similar to the ones used to block spam might work decently too. If you got together a set of normal text for each field and a set of sql injections, you might be able to train it to flag injection attacks.
A: One of my sites was recently hacked through SQL injection. It added a link to a virus for every text field in the db! The fix was to add some code looking for SQL keywords. Fortunately, I've developed in ColdFusion, so the code sits in my Application.cfm file which is run at the beginning of every webpage & it looks at all the URL variables. Wikipedia has some good links to help too.
A: Interesting how this is being implemented years later by Google, removing the URL altogether in order to prevent XSS attacks and other malicious activities.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: Limit size of Queue in .NET? I have a Queue<T> object that I have initialised to a capacity of 2, but obviously that is just the capacity and it keeps expanding as I add items. Is there already an object that automatically dequeues an item when the limit is reached, or is the best solution to create my own inherited class?
A: Well, I hope this class helps you:
Internally, the circular FIFO buffer uses a Queue<T> with the specified size.
Once the size of the buffer is reached, it will replace older items with new ones.
NOTE: You can't remove arbitrary items. I set the method Remove(T item) to return false.
If you want, you can modify it to remove specific items.
public class CircularFIFO<T> : ICollection<T> , IDisposable
{
public Queue<T> CircularBuffer;
/// <summary>
/// The default initial capacity.
/// </summary>
private int capacity = 32;
/// <summary>
/// Gets the actual capacity of the FIFO.
/// </summary>
public int Capacity
{
get { return capacity; }
}
/// <summary>
/// Initialize a new instance of FIFO class that is empty and has the default initial capacity.
/// </summary>
public CircularFIFO()
{
CircularBuffer = new Queue<T>();
}
/// <summary>
/// Initialize a new instance of FIFO class that is empty and has the specified initial capacity.
/// </summary>
/// <param name="size"> Initial capacity of the FIFO. </param>
public CircularFIFO(int size)
{
capacity = size;
CircularBuffer = new Queue<T>(capacity);
}
/// <summary>
/// Adds an item to the end of the FIFO.
/// </summary>
/// <param name="item"> The item to add to the end of the FIFO. </param>
public void Add(T item)
{
if (this.Count >= this.Capacity)
Remove();
CircularBuffer.Enqueue(item);
}
/// <summary>
/// Adds array of items to the end of the FIFO.
/// </summary>
/// <param name="item"> The array of items to add to the end of the FIFO. </param>
public void Add(T[] item)
{
int enqueuedSize = 0;
int remainEnqueueSize = this.Capacity - this.Count;
for (; (enqueuedSize < item.Length && enqueuedSize < remainEnqueueSize); enqueuedSize++)
CircularBuffer.Enqueue(item[enqueuedSize]);
if ((item.Length - enqueuedSize) != 0)
{
Remove((item.Length - enqueuedSize));//remaining item size
for (; enqueuedSize < item.Length; enqueuedSize++)
CircularBuffer.Enqueue(item[enqueuedSize]);
}
}
/// <summary>
/// Removes and Returns an item from the FIFO.
/// </summary>
/// <returns> Item removed. </returns>
public T Remove()
{
T removedItem = CircularBuffer.Peek();
CircularBuffer.Dequeue();
return removedItem;
}
/// <summary>
/// Removes and Returns the array of items from the FIFO.
/// </summary>
/// <param name="size"> The size of item to be removed from the FIFO. </param>
/// <returns> Removed array of items </returns>
public T[] Remove(int size)
{
if (size > CircularBuffer.Count)
size = CircularBuffer.Count;
T[] removedItems = new T[size];
for (int i = 0; i < size; i++)
{
removedItems[i] = CircularBuffer.Peek();
CircularBuffer.Dequeue();
}
return removedItems;
}
/// <summary>
/// Returns the item at the beginning of the FIFO without removing it.
/// </summary>
/// <returns> Item Peeked. </returns>
public T Peek()
{
return CircularBuffer.Peek();
}
/// <summary>
/// Returns the array of items at the beginning of the FIFO without removing them.
/// </summary>
/// <param name="size"> The size of the array items. </param>
/// <returns> Array of peeked items. </returns>
public T[] Peek(int size)
{
T[] arrayItems = new T[CircularBuffer.Count];
CircularBuffer.CopyTo(arrayItems, 0);
if (size > CircularBuffer.Count)
size = CircularBuffer.Count;
T[] peekedItems = new T[size];
Array.Copy(arrayItems, 0, peekedItems, 0, size);
return peekedItems;
}
/// <summary>
/// Gets the actual number of items presented in the FIFO.
/// </summary>
public int Count
{
get
{
return CircularBuffer.Count;
}
}
/// <summary>
/// Removes all the contents of the FIFO.
/// </summary>
public void Clear()
{
CircularBuffer.Clear();
}
/// <summary>
/// Resets and Initialize the instance of FIFO class that is empty and has the default initial capacity.
/// </summary>
public void Reset()
{
Dispose();
CircularBuffer = new Queue<T>(capacity);
}
#region ICollection<T> Members
/// <summary>
/// Determines whether an element is in the FIFO.
/// </summary>
/// <param name="item"> The item to locate in the FIFO. </param>
/// <returns></returns>
public bool Contains(T item)
{
return CircularBuffer.Contains(item);
}
/// <summary>
/// Copies the FIFO elements to an existing one-dimensional array.
/// </summary>
/// <param name="array"> The one-dimensional array that have at list a size of the FIFO </param>
/// <param name="arrayIndex"></param>
public void CopyTo(T[] array, int arrayIndex)
{
if (array.Length >= CircularBuffer.Count)
CircularBuffer.CopyTo(array, 0);
}
public bool IsReadOnly
{
get { return false; }
}
public bool Remove(T item)
{
return false;
}
#endregion
#region IEnumerable<T> Members
public IEnumerator<T> GetEnumerator()
{
return CircularBuffer.GetEnumerator();
}
#endregion
#region IEnumerable Members
IEnumerator IEnumerable.GetEnumerator()
{
return CircularBuffer.GetEnumerator();
}
#endregion
#region IDisposable Members
/// <summary>
/// Releases all the resource used by the FIFO.
/// </summary>
public void Dispose()
{
CircularBuffer.Clear();
CircularBuffer = null;
GC.Collect();
}
#endregion
}
A: Concurrent Solution
public class LimitedConcurrentQueue<ELEMENT> : ConcurrentQueue<ELEMENT>
{
public readonly int Limit;
public LimitedConcurrentQueue(int limit)
{
Limit = limit;
}
public new void Enqueue(ELEMENT element)
{
base.Enqueue(element);
if (Count > Limit)
{
TryDequeue(out ELEMENT discard);
}
}
}
Note: Since Enqueue controls the addition of elements, and does so one at a time, there is no need to execute a while for TryDequeue.
A: You should create your own class; a ring buffer would probably fit your needs.
The data structures in .NET that allow you to specify capacity, except for array, use this to build the internal data structure used to hold the internal data.
For instance, for a list, capacity is used to size an internal array. When you start adding elements to the list, it'll start filling this array from index 0 and up, and when it reaches your capacity, it increases the capacity to a new higher capacity, and continues filling it up.
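A minimal sketch of such a ring buffer: a fixed array plus a head index, where enqueuing past the capacity simply overwrites the oldest item:
public class RingBuffer<T>
{
    private readonly T[] _items;
    private int _head;   // index of the oldest item
    private int _count;

    public RingBuffer(int capacity)
    {
        _items = new T[capacity];
    }

    public int Count { get { return _count; } }

    public void Enqueue(T item)
    {
        int tail = (_head + _count) % _items.Length;
        _items[tail] = item;
        if (_count == _items.Length)
            _head = (_head + 1) % _items.Length;   // the oldest item was just overwritten
        else
            _count++;
    }

    public T Dequeue()
    {
        if (_count == 0)
            throw new System.InvalidOperationException("The buffer is empty.");
        T item = _items[_head];
        _items[_head] = default(T);
        _head = (_head + 1) % _items.Length;
        _count--;
        return item;
    }
}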
A: I've knocked up a basic version of what I'm looking for, it's not perfect but it'll do the job until something better comes along.
public class LimitedQueue<T> : Queue<T>
{
public int Limit { get; set; }
public LimitedQueue(int limit) : base(limit)
{
Limit = limit;
}
public new void Enqueue(T item)
{
while (Count >= Limit)
{
Dequeue();
}
base.Enqueue(item);
}
}
A: Why wouldn't you just use an array with a size of 2? A Queue is supposed to be able to dynamically grow and shrink.
Or create a wrapper class around a Queue<T> instance and, each time one enqueues a <T> object, check the size of the queue; if larger than 2, dequeue the first item (as sketched below).
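A hedged sketch of that wrapper approach, using composition instead of inheritance so the limit check can't be bypassed through a base-class reference:
using System.Collections.Generic;

public class BoundedQueue<T>
{
    private readonly Queue<T> _queue = new Queue<T>();
    private readonly int _limit;

    public BoundedQueue(int limit)
    {
        _limit = limit;
    }

    public int Count { get { return _queue.Count; } }

    public void Enqueue(T item)
    {
        _queue.Enqueue(item);
        if (_queue.Count > _limit)
            _queue.Dequeue();   // silently drop the oldest item
    }

    public T Dequeue()
    {
        return _queue.Dequeue();
    }
}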
A: If it's of any use to anyone, I made a LimitedStack<T>.
public class LimitedStack<T>
{
public readonly int Limit;
private readonly List<T> _stack;
public LimitedStack(int limit = 32)
{
Limit = limit;
_stack = new List<T>(limit);
}
public void Push(T item)
{
if (_stack.Count == Limit) _stack.RemoveAt(0);
_stack.Add(item);
}
public T Peek()
{
return _stack[_stack.Count - 1];
}
public void Pop()
{
_stack.RemoveAt(_stack.Count - 1);
}
public int Count
{
get { return _stack.Count; }
}
}
It removes the oldest item (bottom of stack) when it gets too big.
(This question was the top Google result for "C# limit stack size")
A: I would recommend that you pull up the C5 Library. Unlike SCG (System.Collections.Generic), C5 is programmed to interface and designed to be subclassed. Most public methods are virtual and none of the classes are sealed. This way, you won't have to use that icky "new" keyword which wouldn't trigger if your LimitedQueue<T> were cast to a SCG.Queue<T>. With C5 and using close to the same code as you had before, you would derive from the CircularQueue<T>. The CircularQueue<T> actually implements both a stack and a queue, so you can get both options with a limit nearly for free. I've rewritten it below with some 3.5 constructs:
using C5;
public class LimitedQueue<T> : CircularQueue<T>
{
public int Limit { get; set; }
public LimitedQueue(int limit) : base(limit)
{
this.Limit = limit;
}
public override void Push(T item)
{
CheckLimit(false);
base.Push(item);
}
public override void Enqueue(T item)
{
CheckLimit(true);
base.Enqueue(item);
}
protected virtual void CheckLimit(bool enqueue)
{
while (this.Count >= this.Limit)
{
if (enqueue)
{
this.Dequeue();
}
else
{
this.Pop();
}
}
}
}
I think that this code should do exactly what you were looking for.
A: You can use a LinkedList<T> and add thread safety:
public class Buffer<T> : LinkedList<T>
{
private int capacity;
public Buffer(int capacity)
{
this.capacity = capacity;
}
public void Enqueue(T item)
{
// todo: add synchronization mechanism
if (Count == capacity) RemoveLast();
AddFirst(item);
}
public T Dequeue()
{
// todo: add synchronization mechanism
var last = Last.Value;
RemoveLast();
return last;
}
}
One thing to note is the default enumeration order will be LIFO in this example. But that can be overridden if necessary.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62"
} |
Q: Is nAnt still supported and suitable for .net 3.5/VS2008? I am using MSBuild to build my stuff. I want to use CruiseControl.net as by Build Server.
Now, CCNET refers to nAnt a lot, but it looks as if ccnet can do most of the stuff nant could do through the project configuration and msbuild. Also, nAnt seems a bit unsupported, with a beta release that is almost a year old now.
In short: I am actually quite happy with MSBuild (especially since it's the "official" compiler front end) and a bit uncomfortable with nAnt, but I do not want to judge prematurely.
What would be reasons to use nAnt over MSBuild? Especially with ccnet, which seems to overlap a bit with nant in terms of features (and adding the automated build related stuff)
A: In my opinion it is more a question of personal preference. nAnt is a great framework and MSBuild is almost as capable. With the ability to easily develop custom tasks (in both frameworks) you can accomplish almost anything that you need to do.
I cannot answer the "still supported" portion of your questions, but I would say if you are already comfortable with nAnt then it's probably viable. If you (or someone in your group) is familiar with MSBuild then that is a fine way to go as well.
A: If you've already got a bunch of custom tasks you use with nAnt, stick with it - you don't gain much with MSBuild. That said, there doesn't seem to be anything that nAnt can do that MSBuild can't at its core. Both can call external tools, both can run .Net-based custom tasks, and both have a bunch of community tasks out there.
We're using MSBuild here for the same reason you are - it's the default build system for VS now, and we didn't have any nAnt-specific stuff to worry about.
The MSBuildCommunityTasks are a good third-party task base to start with, and covers most of the custom stuff I ever did in nAnt, including VSS and Subversion support.
A: CC.NET is simply the build server technology, not the build script technology. We use CC.NET at work to very successfully call MSBuild build scripts with no problems.
NAnt is an older and more mature build scripting language, but they are both similar in how they work. There are very few things I could do in NAnt that I can't also do in MSBuild, so it really comes down to which one you are more comfortable with. As far as how active NAnt is, don't go by when the last release was...instead go by when the last nightly build was. NAnt tends to go a long time between releases, but the nightly builds are usually pretty stable.
A: If you are quite happy with MSBuild, then I would stick with MSBuild. This may be one of those cases where the tool you learn first is the one you will prefer. I started with NAnt and can't quite get used to MSBuild. I'm sure they will both be around for quite some time.
There are some fundamental differences between the two, probably best highlighted by this conversation between some NAnt fans and a Microsoftie.
Interestingly, Jeremy Miller asked the exact opposite question on his blog last year.
A: Honestly, it depends on what fits into your environment better. If you are using a lot of non-Microsoft tools (NUnit, CC.NET, NCover), you will probably find better support with NAnt. Alternatively, if you are using MSTest and TFSBuild, you will probably find MSBuild a better environment. I would learn both and use whichever fits more smoothly with your environment.
A: Like what so many people have already indicated, the answer here is "it depends". There are some things like repeating operations that are much simpler and cleaner in NAnt. See the MSDN forums for a discussion about this.
A: I find that you can also use a hybrid approach too, especially in larger projects. A lot of our nant scripts are being converted to msbuild when new components are developed. Both support the same major features and can call each other if you find a task that is natively supported in one but not the other.
For new .NET development starting with MSBuild can save you a lot of time since it can run the solution files directly. Extending from the main compilation to perform other tasks (source control, deployment, etc) works quite well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: How to check for file lock? Is there any way to check whether a file is locked without using a try/catch block?
Right now, the only way I know of is to just open the file and catch any System.IO.IOException.
A: A variation of DixonD's excellent answer (above).
public static bool TryOpen(string path,
FileMode fileMode,
FileAccess fileAccess,
FileShare fileShare,
TimeSpan timeout,
out Stream stream)
{
var endTime = DateTime.Now + timeout;
while (DateTime.Now < endTime)
{
if (TryOpen(path, fileMode, fileAccess, fileShare, out stream))
return true;
}
stream = null;
return false;
}
public static bool TryOpen(string path,
FileMode fileMode,
FileAccess fileAccess,
FileShare fileShare,
out Stream stream)
{
try
{
stream = File.Open(path, fileMode, fileAccess, fileShare);
return true;
}
catch (IOException e)
{
if (!FileIsLocked(e))
throw;
stream = null;
return false;
}
}
private const uint HRFileLocked = 0x80070020;
private const uint HRPortionOfFileLocked = 0x80070021;
private static bool FileIsLocked(IOException ioException)
{
var errorCode = (uint)Marshal.GetHRForException(ioException);
return errorCode == HRFileLocked || errorCode == HRPortionOfFileLocked;
}
Usage:
private void Sample(string filePath)
{
Stream stream = null;
try
{
var timeOut = TimeSpan.FromSeconds(1);
if (!TryOpen(filePath,
FileMode.Open,
FileAccess.ReadWrite,
FileShare.ReadWrite,
timeOut,
out stream))
return;
// Use stream...
}
finally
{
if (stream != null)
stream.Close();
}
}
A: Here's a variation of DixonD's code that adds number of seconds to wait for file to unlock, and try again:
public bool IsFileLocked(string filePath, int secondsToWait)
{
bool isLocked = true;
int i = 0;
while (isLocked && ((i < secondsToWait) || (secondsToWait == 0)))
{
try
{
using (File.Open(filePath, FileMode.Open)) { }
return false;
}
catch (IOException e)
{
var errorCode = Marshal.GetHRForException(e) & ((1 << 16) - 1);
isLocked = errorCode == 32 || errorCode == 33;
i++;
if (secondsToWait !=0)
new System.Threading.ManualResetEvent(false).WaitOne(1000);
}
}
return isLocked;
}
if (!IsFileLocked(file, 10))
{
...
}
else
{
throw new Exception(...);
}
A: You could call LockFile via interop on the region of file you are interested in. This will not throw an exception, if it succeeds you will have a lock on that portion of the file (which is held by your process), that lock will be held until you call UnlockFile or your process dies.
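A rough sketch of what that interop approach could look like (the P/Invoke signatures are kernel32's LockFile/UnlockFile; the helper name and path handling are made up, and opening the stream can still throw if the other process allowed no sharing at all):
using System;
using System.IO;
using System.Runtime.InteropServices;
static class RegionLock
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool LockFile(IntPtr hFile, uint dwFileOffsetLow, uint dwFileOffsetHigh,
                                uint nNumberOfBytesToLockLow, uint nNumberOfBytesToLockHigh);
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool UnlockFile(IntPtr hFile, uint dwFileOffsetLow, uint dwFileOffsetHigh,
                                  uint nNumberOfBytesToUnlockLow, uint nNumberOfBytesToUnlockHigh);
    // Returns true if the first `length` bytes could be locked (the lock is released again immediately).
    public static bool TryLockRegion(string path, uint length)
    {
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite))
        {
            IntPtr handle = stream.SafeFileHandle.DangerousGetHandle();
            if (!LockFile(handle, 0, 0, length, 0))
                return false;    // another process holds a conflicting lock on that region
            UnlockFile(handle, 0, 0, length, 0);
            return true;
        }
    }
}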
A:
Then between the two lines, another process could easily lock the file, giving you the same problem you were trying to avoid to begin with: exceptions.
However, this way, you would know that the problem is temporary, and to retry later. (E.g., you could write a thread that, if encountering a lock while trying to write, keeps retrying until the lock is gone.)
The IOException, on the other hand, is not by itself specific enough that locking is the cause of the IO failure. There could be reasons that aren't temporary.
A: You can see if the file is locked by trying to read or lock it yourself first.
Please see my answer here for more information.
A: When I was faced with a similar problem, I ended up with the following code:
public class FileManager
{
private string _fileName;
private int _numberOfTries;
private int _timeIntervalBetweenTries;
private FileStream GetStream(FileAccess fileAccess)
{
var tries = 0;
while (true)
{
try
{
return File.Open(_fileName, FileMode.Open, fileAccess, FileShare.None);
}
catch (IOException e)
{
if (!IsFileLocked(e))
throw;
if (++tries > _numberOfTries)
throw new MyCustomException("The file is locked too long: " + e.Message, e);
Thread.Sleep(_timeIntervalBetweenTries);
}
}
}
private static bool IsFileLocked(IOException exception)
{
int errorCode = Marshal.GetHRForException(exception) & ((1 << 16) - 1);
return errorCode == 32 || errorCode == 33;
}
// other code
}
A: The other answers rely on old information. This one provides a better solution.
Long ago it was impossible to reliably get the list of processes locking a file because Windows simply did not track that information. To support the Restart Manager API, that information is now tracked. The Restart Manager API is available beginning with Windows Vista and Windows Server 2008 (Restart Manager: Run-time Requirements).
I put together code that takes the path of a file and returns a List<Process> of all processes that are locking that file.
static public class FileUtil
{
[StructLayout(LayoutKind.Sequential)]
struct RM_UNIQUE_PROCESS
{
public int dwProcessId;
public System.Runtime.InteropServices.ComTypes.FILETIME ProcessStartTime;
}
const int RmRebootReasonNone = 0;
const int CCH_RM_MAX_APP_NAME = 255;
const int CCH_RM_MAX_SVC_NAME = 63;
enum RM_APP_TYPE
{
RmUnknownApp = 0,
RmMainWindow = 1,
RmOtherWindow = 2,
RmService = 3,
RmExplorer = 4,
RmConsole = 5,
RmCritical = 1000
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
struct RM_PROCESS_INFO
{
public RM_UNIQUE_PROCESS Process;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = CCH_RM_MAX_APP_NAME + 1)]
public string strAppName;
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = CCH_RM_MAX_SVC_NAME + 1)]
public string strServiceShortName;
public RM_APP_TYPE ApplicationType;
public uint AppStatus;
public uint TSSessionId;
[MarshalAs(UnmanagedType.Bool)]
public bool bRestartable;
}
[DllImport("rstrtmgr.dll", CharSet = CharSet.Unicode)]
static extern int RmRegisterResources(uint pSessionHandle,
UInt32 nFiles,
string[] rgsFilenames,
UInt32 nApplications,
[In] RM_UNIQUE_PROCESS[] rgApplications,
UInt32 nServices,
string[] rgsServiceNames);
[DllImport("rstrtmgr.dll", CharSet = CharSet.Auto)]
static extern int RmStartSession(out uint pSessionHandle, int dwSessionFlags, string strSessionKey);
[DllImport("rstrtmgr.dll")]
static extern int RmEndSession(uint pSessionHandle);
[DllImport("rstrtmgr.dll")]
static extern int RmGetList(uint dwSessionHandle,
out uint pnProcInfoNeeded,
ref uint pnProcInfo,
[In, Out] RM_PROCESS_INFO[] rgAffectedApps,
ref uint lpdwRebootReasons);
/// <summary>
/// Find out what process(es) have a lock on the specified file.
/// </summary>
/// <param name="path">Path of the file.</param>
/// <returns>Processes locking the file</returns>
/// <remarks>See also:
/// http://msdn.microsoft.com/en-us/library/windows/desktop/aa373661(v=vs.85).aspx
/// http://wyupdate.googlecode.com/svn-history/r401/trunk/frmFilesInUse.cs (no copyright in code at time of viewing)
///
/// </remarks>
static public List<Process> WhoIsLocking(string path)
{
uint handle;
string key = Guid.NewGuid().ToString();
List<Process> processes = new List<Process>();
int res = RmStartSession(out handle, 0, key);
if (res != 0)
throw new Exception("Could not begin restart session. Unable to determine file locker.");
try
{
const int ERROR_MORE_DATA = 234;
uint pnProcInfoNeeded = 0,
pnProcInfo = 0,
lpdwRebootReasons = RmRebootReasonNone;
string[] resources = new string[] { path }; // Just checking on one resource.
res = RmRegisterResources(handle, (uint)resources.Length, resources, 0, null, 0, null);
if (res != 0)
throw new Exception("Could not register resource.");
//Note: there's a race condition here -- the first call to RmGetList() returns
// the total number of process. However, when we call RmGetList() again to get
// the actual processes this number may have increased.
res = RmGetList(handle, out pnProcInfoNeeded, ref pnProcInfo, null, ref lpdwRebootReasons);
if (res == ERROR_MORE_DATA)
{
// Create an array to store the process results
RM_PROCESS_INFO[] processInfo = new RM_PROCESS_INFO[pnProcInfoNeeded];
pnProcInfo = pnProcInfoNeeded;
// Get the list
res = RmGetList(handle, out pnProcInfoNeeded, ref pnProcInfo, processInfo, ref lpdwRebootReasons);
if (res == 0)
{
processes = new List<Process>((int)pnProcInfo);
// Enumerate all of the results and add them to the
// list to be returned
for (int i = 0; i < pnProcInfo; i++)
{
try
{
processes.Add(Process.GetProcessById(processInfo[i].Process.dwProcessId));
}
// catch the error -- in case the process is no longer running
catch (ArgumentException) { }
}
}
else
throw new Exception("Could not list processes locking resource.");
}
else if (res != 0)
throw new Exception("Could not list processes locking resource. Failed to get size of result.");
}
finally
{
RmEndSession(handle);
}
return processes;
}
}
UPDATE
Here is another discussion with sample code on how to use the Restart Manager API.
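For example, calling the helper above might look like this (the file path is hypothetical):
using System;
using System.Collections.Generic;
using System.Diagnostics;
// ...
List<Process> lockers = FileUtil.WhoIsLocking(@"C:\temp\somefile.txt");
if (lockers.Count == 0)
    Console.WriteLine("Nobody is locking the file.");
else
    foreach (Process p in lockers)
        Console.WriteLine("Locked by: {0} (PID {1})", p.ProcessName, p.Id);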
A: You can also check whether any process is using the file, and show a list of programs that must be closed to continue, like an installer does.
public static string GetFileProcessName(string filePath)
{
Process[] procs = Process.GetProcesses();
string fileName = Path.GetFileName(filePath);
foreach (Process proc in procs)
{
if (proc.MainWindowHandle != new IntPtr(0) && !proc.HasExited)
{
ProcessModule[] arr = new ProcessModule[proc.Modules.Count];
foreach (ProcessModule pm in proc.Modules)
{
if (pm.ModuleName == fileName)
return proc.ProcessName;
}
}
}
return null;
}
A: Instead of using interop you can use the .NET FileStream class methods Lock and Unlock:
FileStream.Lock
http://msdn.microsoft.com/en-us/library/system.io.filestream.lock.aspx
FileStream.Unlock
http://msdn.microsoft.com/en-us/library/system.io.filestream.unlock.aspx
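A short sketch of how those methods are used; Lock throws an IOException if another process already holds a conflicting lock on the requested range (the path here is hypothetical):
using System;
using System.IO;
using (var stream = new FileStream(@"C:\temp\somefile.txt", FileMode.Open,
                                   FileAccess.ReadWrite, FileShare.ReadWrite))
{
    try
    {
        stream.Lock(0, stream.Length);      // lock the whole file
        // ... read or write while holding the lock ...
        stream.Unlock(0, stream.Length);
    }
    catch (IOException)
    {
        Console.WriteLine("Another process has locked (part of) the file.");
    }
}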
A: No, unfortunately, and if you think about it, that information would be worthless anyway since the file could become locked the very next second (read: short timespan).
Why specifically do you need to know if the file is locked anyway? Knowing that might give us some other way of giving you good advice.
If your code would look like this:
if not locked then
open and update file
Then between the two lines, another process could easily lock the file, giving you the same problem you were trying to avoid to begin with: exceptions.
A: Same thing but in Powershell
function Test-FileOpen
{
Param
([string]$FileToOpen)
try
{
$openFile =([system.io.file]::Open($FileToOpen,[system.io.filemode]::Open))
$open =$true
$openFile.close()
}
catch
{
$open = $false
}
$open
}
A: What I ended up doing is:
internal void LoadExternalData() {
FileStream file;
if (TryOpenRead("filepath/filename", 5, out file)) {
using (file)
using (StreamReader reader = new StreamReader(file)) {
// do something
}
}
}
internal bool TryOpenRead(string path, int timeout, out FileStream file) {
bool isLocked = true;
bool condition = true;
do {
try {
file = File.OpenRead(path);
return true;
}
catch (IOException e) {
var errorCode = Marshal.GetHRForException(e) & ((1 << 16) - 1);
isLocked = errorCode == 32 || errorCode == 33;
condition = (isLocked && timeout > 0);
if (condition) {
// we only wait if the file is locked. If the exception is of any other type, there's no point on keep trying. just return false and null;
timeout--;
new System.Threading.ManualResetEvent(false).WaitOne(1000);
}
}
}
while (condition);
file = null;
return false;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "262"
} |
Q: How do I use (n)curses in Ruby? I'd like to create a progress bar to indicate the status of an a batch job in Ruby.
I've read some tutorials / libraries on using (n)curses, none of which were particularly helpful in explaining how to create an "animated" progress bar in the terminal or using curses with Ruby.
I'm already aware of using a separate thread to monitor the progress of a given job, I'm just not sure how to proceed with drawing a progress bar.
Update
ProgressBar class was incredibly straight-forward, perfectly solved my problem.
A: Personally I think curses is overkill in this case. While the curses lib is nice (and I frequently use it myself) it's a PITA to relearn every time I haven't needed it for 12 months which has to be the sign of a bad interface design.
If for some reason you can't get on with the progress bar lib Joey suggested roll your own and release it under a pretty free licence for instant kudos :)
A: Very late answer, and sorry for the self-promotion, but I created a library to show progress in the terminal.
A: You might be able to get some implementation ideas from the Ruby/ProgressBar library, which generates text progress bars. I stumbled across it a couple of months back but haven't made any use of it.
A: On windows, curses works out of the box, ncurses doesn't, and for a progress bar curses should be sufficient. So, use curses instead of ncurses.
Also, both curses and ncurses are wafer-thin wrappers around the c library - that means you don't really need Ruby-specific tutorials.
However, on the site for the PickAxe you can download all the code examples for the book. The file "ex1423.rb" contains a curses demo which plays Pong - that should give you plenty of material to get you going.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46"
} |
Q: Rockbox audio format How do you specify a callback for rb->pcm_play_data()?
A: The prototype for the callback function is as follows:
static void my_audio_callback(const void **start, size_t *size);
*start should be set to point to the region of memory where your PCM data is stored (16-bit signed integers), and *size should be the size of this region.
Once you've written your callback, call rb->pcm_play_data(), and enjoy the music!
rb->pcm_play_data(my_audio_callback, NULL, NULL, 0);
A very late edit: The format of the audio is 16-bit signed integer PCM, with stereo interleave (even indexes: left channel, odd: right).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Followup: Finding an accurate "distance" between colors Original Question
I am looking for a function that attempts to quantify how "distant" (or distinct) two colors are. This question is really in two parts:
*
*What color space best represents human vision?
*What distance metric in that space best represents human vision (euclidean?)
A: As the cmetric.htm link above failed for me, as well as many other implementations for color distance I found, after a very long journey I worked out how to calculate the best and most scientifically accurate color distance, deltaE, from 2 RGB (!) values using OpenCV:
This required 3 color space conversions + some code conversion from javascript (http://svn.int64.org/viewvc/int64/colors/colors.js) to C++
And finally the code (seems to work right out of the box, hope no one finds a serious bug there ... but it seems fine after a number of tests)
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/photo/photo.hpp>
#include <math.h>
using namespace cv;
using namespace std;
#define REF_X 95.047 // Observer= 2°, Illuminant= D65
#define REF_Y 100.000
#define REF_Z 108.883
void bgr2xyz( const Vec3b& BGR, Vec3d& XYZ );
void xyz2lab( const Vec3d& XYZ, Vec3d& Lab );
void lab2lch( const Vec3d& Lab, Vec3d& LCH );
double deltaE2000( const Vec3b& bgr1, const Vec3b& bgr2 );
double deltaE2000( const Vec3d& lch1, const Vec3d& lch2 );
void bgr2xyz( const Vec3b& BGR, Vec3d& XYZ )
{
double r = (double)BGR[2] / 255.0;
double g = (double)BGR[1] / 255.0;
double b = (double)BGR[0] / 255.0;
if( r > 0.04045 )
r = pow( ( r + 0.055 ) / 1.055, 2.4 );
else
r = r / 12.92;
if( g > 0.04045 )
g = pow( ( g + 0.055 ) / 1.055, 2.4 );
else
g = g / 12.92;
if( b > 0.04045 )
b = pow( ( b + 0.055 ) / 1.055, 2.4 );
else
b = b / 12.92;
r *= 100.0;
g *= 100.0;
b *= 100.0;
XYZ[0] = r * 0.4124 + g * 0.3576 + b * 0.1805;
XYZ[1] = r * 0.2126 + g * 0.7152 + b * 0.0722;
XYZ[2] = r * 0.0193 + g * 0.1192 + b * 0.9505;
}
void xyz2lab( const Vec3d& XYZ, Vec3d& Lab )
{
double x = XYZ[0] / REF_X;
double y = XYZ[1] / REF_Y;
double z = XYZ[2] / REF_Z;
if( x > 0.008856 )
x = pow( x , .3333333333 );
else
x = ( 7.787 * x ) + ( 16.0 / 116.0 );
if( y > 0.008856 )
y = pow( y , .3333333333 );
else
y = ( 7.787 * y ) + ( 16.0 / 116.0 );
if( z > 0.008856 )
z = pow( z , .3333333333 );
else
z = ( 7.787 * z ) + ( 16.0 / 116.0 );
Lab[0] = ( 116.0 * y ) - 16.0;
Lab[1] = 500.0 * ( x - y );
Lab[2] = 200.0 * ( y - z );
}
void lab2lch( const Vec3d& Lab, Vec3d& LCH )
{
LCH[0] = Lab[0];
LCH[1] = sqrt( ( Lab[1] * Lab[1] ) + ( Lab[2] * Lab[2] ) );
LCH[2] = atan2( Lab[2], Lab[1] );
}
double deltaE2000( const Vec3b& bgr1, const Vec3b& bgr2 )
{
Vec3d xyz1, xyz2, lab1, lab2, lch1, lch2;
bgr2xyz( bgr1, xyz1 );
bgr2xyz( bgr2, xyz2 );
xyz2lab( xyz1, lab1 );
xyz2lab( xyz2, lab2 );
lab2lch( lab1, lch1 );
lab2lch( lab2, lch2 );
return deltaE2000( lch1, lch2 );
}
double deltaE2000( const Vec3d& lch1, const Vec3d& lch2 )
{
double avg_L = ( lch1[0] + lch2[0] ) * 0.5;
double delta_L = lch2[0] - lch1[0];
double avg_C = ( lch1[1] + lch2[1] ) * 0.5;
double delta_C = lch1[1] - lch2[1];
double avg_H = ( lch1[2] + lch2[2] ) * 0.5;
if( fabs( lch1[2] - lch2[2] ) > CV_PI )
avg_H += CV_PI;
double delta_H = lch2[2] - lch1[2];
if( fabs( delta_H ) > CV_PI )
{
if( lch2[2] <= lch1[2] )
delta_H += CV_PI * 2.0;
else
delta_H -= CV_PI * 2.0;
}
delta_H = sqrt( lch1[1] * lch2[1] ) * sin( delta_H ) * 2.0;
double T = 1.0 -
0.17 * cos( avg_H - CV_PI / 6.0 ) +
0.24 * cos( avg_H * 2.0 ) +
0.32 * cos( avg_H * 3.0 + CV_PI / 30.0 ) -
0.20 * cos( avg_H * 4.0 - CV_PI * 7.0 / 20.0 );
double SL = avg_L - 50.0;
SL *= SL;
SL = SL * 0.015 / sqrt( SL + 20.0 ) + 1.0;
double SC = avg_C * 0.045 + 1.0;
double SH = avg_C * T * 0.015 + 1.0;
double delta_Theta = avg_H / 25.0 - CV_PI * 11.0 / 180.0;
delta_Theta = exp( delta_Theta * -delta_Theta ) * ( CV_PI / 6.0 );
double RT = pow( avg_C, 7.0 );
RT = sqrt( RT / ( RT + 6103515625.0 ) ) * sin( delta_Theta ) * -2.0; // 6103515625 = 25^7
delta_L /= SL;
delta_C /= SC;
delta_H /= SH;
return sqrt( delta_L * delta_L + delta_C * delta_C + delta_H * delta_H + RT * delta_C * delta_H );
}
Hope it helps someone :)
A: HSL and HSV are better for human color perception. According to Wikipedia:
It is sometimes preferable in working with art materials, digitized images, or other media, to use the HSV or HSL color model over alternative models such as RGB or CMYK, because of differences in the ways the models emulate how humans perceive color. RGB and CMYK are additive and subtractive models, respectively, modelling the way that primary color lights or pigments (respectively) combine to form new colors when mixed.
A: Convert to La*b* (aka just plain "Lab", and you'll also see reference to "CIELAB"). A good quick measure of color difference is
(L1-L2)^2 + (a1-a2)^2 + (b1-b2)^2
Color scientists have other more refined measures, which may not be worth the bother, depending on accuracy needed for what you're doing.
The a and b values represent opposing colors in a way similar to how cones work, and may be negative or positive. Neutral colors - white, grays are a=0,b=0. The L is brightness defined in a particular way, from zero (pure darkness) up to whatever.
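As a trivial sketch, that quick measure is just a squared Euclidean distance on the Lab components (the Lab values themselves have to come from a conversion such as the OpenCV-based one earlier in this thread):
// Squared Lab distance as described above; smaller means more similar.
// Assumes the (L, a, b) triples were already converted from RGB elsewhere.
static double LabDistanceSquared(double l1, double a1, double b1,
                                 double l2, double a2, double b2)
{
    double dl = l1 - l2, da = a1 - a2, db = b1 - b2;
    return dl * dl + da * da + db * db;
}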
Crude explanation :>> Given a color, our eyes distinguish between two broad ranges of wavelength - blue vs longer wavelengths. and then, thanks to a more recent genetic mutation, the longer wavelength cones bifurcated into two, distinguishing for us red vs. green.
By the way, it'll be great for your career to rise above your color caveman colleagues who know of only "RGB" or "CMYK" which are great for devices but suck for serious perception work. I've worked for imaging scientists who didn't know a thing about this stuff!
For more fun reading on color difference theory, try:
*
*http://white.stanford.edu/~brian/scielab/introduction.html and info
*and links on color theory in general, websurf starting with http://www.efg2.com/Lab/Library/Color/ and
*http://www.poynton.com/Poynton-color.html
More detail on Lab at http://en.kioskea.net/video/cie-lab.php3 I can't at this time find a non-ugly page that actually had the conversion formulas but I'm sure someone will edit this answer to include one.
A: The easiest distance would of course be to just consider the colors as 3d vectors originating from the same origin, and taking the distance between their end points.
If you need to consider such factors that green is more prominent in judging intensity, you can weigh the values.
ImageMagic provides the following scales:
*
*red: 0.3
*green: 0.6
*blue: 0.1
Of course, values like this would only be meaningful in relation to other values for other colors, not as something that would be meaningful to humans, so all you could use the values for would be similarity ordering.
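A sketch of that weighted variant in C#, using the ImageMagick-style weights listed above (again, only useful for ordering by similarity):
using System;
// Weighted Euclidean distance in RGB, with green weighted heaviest.
static double WeightedRgbDistance(byte r1, byte g1, byte b1, byte r2, byte g2, byte b2)
{
    double dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
    return Math.Sqrt(0.3 * dr * dr + 0.6 * dg * dg + 0.1 * db * db);
}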
A: Well, as a first point of call, I'd say of the common metrics HSV (Hue, Saturation and Value) or HSL are better representative of how humans perceive colour than say RGB or CYMK. See HSL, HSV on Wikipedia.
I suppose naively I would plot the points in the HSL space for the two colours and calculate the magnitude of the difference vector. However this would mean that bright yellow and bright green would be considered just as different as green to dark green. But then many consider red and pink two different colours.
Moreover, difference vectors in the same direction in this parameter space are not equal. For instance, the human eye picks up green much better than other colours. A shift in hue from green by the same amount as a shift from red may seem greater. Also a shift in saturation from a small amount to zero is the difference between grey and pink, elsewhere the shift would be the difference between two shades of red.
From a programmers point of view, you would need to plot the difference vectors but modified by a proportionality matrix that would adjust the lengths accordingly in various regions of the HSL space - this would be fairly arbitrary and would be based on various colour theory ideas but be tweaked fairly arbitrarily depending on what you wanted to apply this to.
Even better, you could see if anyone has already done such a thing online...
A: The Wikipedia article on color differences lists a number of color spaces and distance metrics designed to agree with human perception of color distances.
A: As someone who is color blind I believe it is good to try to add more separation then normal vision. The most common form of color blindness is red/green deficiency. It doesn't mean that you can't see red or green, it means that it is more difficult to see and more difficult to see the differences. So it takes a larger separation before a color blind person can tell the difference.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "53"
} |
Q: Using MSTest with CruiseControl.NET We have been using CruiseControl for quite a while with NUnit and NAnt. For a recent project we decided to use the testing framework that comes with Visual Studio, which so far has been adequate.
I'm attempting to get the solution running in CruiseControl. I've finally got the build itself to work; however, I have been unable to get any tests to show up in the CruiseControl interface despite adding custom build tasks and components designed to do just that. Does anyone have a definitive link out there to instructions on getting this set up?
A: Not sure if that helps (I found the ccnet documentation somewhat unhelpful at times):
Using CruiseControl.NET with MSTest
A: The CC.Net interface is generated via an XSL transform on your XML files put together as specified in the ccnet.config file for your projects. The XSL is already written for things like FxCop - check your server's CC xsl directory for examples - shouldn't be too hard to write your own to add in the info - just remember to add the XML output from your tests into the main log.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Followup: "Sorting" colors by distinctiveness Original Question
If you are given N maximally distant colors (and some associated distance metric), can you come up with a way to sort those colors into some order such that the first M are also reasonably close to being a maximally distinct set?
In other words, given a bunch of distinct colors, come up with an ordering so I can use as many colors as I need starting at the beginning and be reasonably assured that they are all distinct and that nearby colors are also very distinct (e.g., bluish red isn't next to reddish blue).
Randomizing is OK but certainly not optimal.
Clarification: Given some large and visually distinct set of colors (say 256, or 1024), I want to sort them such that when I use the first, say, 16 of them that I get a relatively visually distinct subset of colors. This is equivalent, roughly, to saying I want to sort this list of 1024 so that the closer individual colors are visually, the farther apart they are on the list.
A: This also sounds to me like some kind of resistance graph where you try to map out the path of least resistance. If you inverse the requirements, path of maximum resistance, it could perhaps be used to produce a set that from the start produces maximum difference as you go, and towards the end starts to go back to values closer to the others.
For instance, here's one way to perhaps do what you want.
*
*Calculate the distance (ref your other post) from each color to all other colors
*Sum the distances for each color, this gives you an indication for how far away this color is from all other colors in total
*Order the list by distance, going down
This would, it seems, produce a list that starts with the color that is farthest away from all other colors, and then go down, colors towards the end of the list would be closer to other colors in general.
Edit: Reading your reply to my first post: the spatial subdivision approach would not exactly fit the above description, since colors close to other colors would fall to the bottom of the list. But say you have a cluster of colors somewhere; at least one of the colors from that cluster would be located near the start of the list, and it would be the one that is generally farthest away from all other colors in total. If that makes sense.
A: This problem is called color quantization, and has many well known algorithms: http://en.wikipedia.org/wiki/Color_quantization I know people who implemented the octree approach to good effect.
A: It seems perception is important to you; in that case you might want to consider working with a perceptual color space such as YUV, YCbCr or Lab. Every time I've used those, they have given me much better results than sRGB alone.
Converting to and from sRGB can be a pain but in your case it could actually make the algorithm simpler and as a bonus it will mostly work for color blinds too!
A: N maximally distant colors can be considered a set of well-distributed points in a 3-dimensional (color) space. If you can generate them from a Halton sequence, then any prefix (the first M colors) also consists of well-distributed points.
A: If I'm understanding the question correctly, you wish to obtain the subset of M colours with the highest mean distance between colours, given some distance function d.
Put another way, considering the initial set of N colours as a large, undirected graph in which all colours are connected, you want to find the longest path that visits any M nodes.
Solving NP-complete graph problems is way beyond me I'm afraid, but you could try running a simple physical simulation:
*
*Generate M random points in colour space
*Calculate the distance between each point
*Calculate repulsion vectors for each point that will move it away from all other points (using 1 / (distance ^ 2) as the magnitude of the vector)
*Sum the repulsion vectors for each point
*Update the position of each point according to the summed repulsion vectors
*Constrain any out of bound coordinates (such as luminosity going negative or above one)
*Repeat from step 2 until the points stabilise
*For each point, select the nearest colour from the original set of N
It's far from efficient, but for small M it may be efficient enough, and it will give near optimal results.
If your colour distance function is simple, there may be a more deterministic way of generating the optimal subset.
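A rough C# sketch of that simulation, working on RGB components normalized to [0, 1] (a perceptual space such as Lab would work the same way); the "repeat until the points stabilise" step is approximated with a fixed number of iterations, and all names and constants here are made up:
using System;
using System.Collections.Generic;
using System.Linq;
static class RepulsionPicker
{
    public static List<double[]> PickDistinct(List<double[]> originalColors, int m, int iterations)
    {
        var rng = new Random();
        // 1. Generate M random points in colour space.
        var points = Enumerable.Range(0, m)
            .Select(_ => new[] { rng.NextDouble(), rng.NextDouble(), rng.NextDouble() })
            .ToList();
        for (int step = 0; step < iterations; step++)
        {
            var forces = points.Select(_ => new double[3]).ToList();
            // 2-4. Accumulate repulsion vectors with magnitude 1 / distance^2.
            for (int i = 0; i < m; i++)
                for (int j = 0; j < m; j++)
                {
                    if (i == j) continue;
                    double dx = points[i][0] - points[j][0];
                    double dy = points[i][1] - points[j][1];
                    double dz = points[i][2] - points[j][2];
                    double distSq = dx * dx + dy * dy + dz * dz + 1e-9;
                    double dist = Math.Sqrt(distSq);
                    double magnitude = 1.0 / distSq;
                    forces[i][0] += magnitude * dx / dist;
                    forces[i][1] += magnitude * dy / dist;
                    forces[i][2] += magnitude * dz / dist;
                }
            // 5-6. Move each point a small step along its summed force and clamp to bounds.
            for (int i = 0; i < m; i++)
                for (int c = 0; c < 3; c++)
                    points[i][c] = Math.Max(0.0, Math.Min(1.0, points[i][c] + 0.001 * forces[i][c]));
        }
        // 8. Snap each simulated point to the nearest colour from the original set of N.
        return points.Select(p => originalColors
            .OrderBy(o => Math.Pow(o[0] - p[0], 2) + Math.Pow(o[1] - p[1], 2) + Math.Pow(o[2] - p[2], 2))
            .First()).ToList();
    }
}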
A: *
*Start with two lists. CandidateColors, which initially contains your distinct colors and SortedColors, which is initially empty.
*Pick any color and remove it from CandidateColors and put it into SortedColors. This is the first color and will be the most common, so it's a good place to pick a color that jives well with your application.
*For each color in CandidateColors calculate its total distance. The total distance is the sum of the distance from the CandidateColor to each of the colors in SortedColors.
*Remove the color with the largest total distance from CandidateColors and add it to the end of SortedColors.
*If CandidateColors is not empty, go back to step 3.
This greedy algorithm should give you good results.
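A compact sketch of that greedy loop (plain RGB Euclidean distance is used here just to keep it self-contained; substitute whatever colour-distance function you prefer):
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
static class GreedyColorOrder
{
    static double Distance(Color a, Color b)
    {
        double dr = a.R - b.R, dg = a.G - b.G, db = a.B - b.B;
        return Math.Sqrt(dr * dr + dg * dg + db * db);
    }
    public static List<Color> Order(IEnumerable<Color> colors)
    {
        var candidates = new List<Color>(colors);
        var sorted = new List<Color>();
        // Step 2: seed with any colour (here simply the first candidate).
        sorted.Add(candidates[0]);
        candidates.RemoveAt(0);
        // Steps 3-5: repeatedly move over the candidate with the largest total distance
        // to everything already in the sorted list.
        while (candidates.Count > 0)
        {
            Color best = candidates
                .OrderByDescending(c => sorted.Sum(s => Distance(c, s)))
                .First();
            candidates.Remove(best);
            sorted.Add(best);
        }
        return sorted;
    }
}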
A: You could just sort the candidate colors based on the maximum of the minimum distances to any of the index colors.
Using Euclidean color distance:
public double colordistance(Color color0, Color color1) {
int c0 = color0.getRGB();
int c1 = color1.getRGB();
return distance(((c0>>16)&0xFF), ((c0>>8)&0xFF), (c0&0xFF), ((c1>>16)&0xFF), ((c1>>8)&0xFF), (c1&0xFF));
}
public double distance(int r1, int g1, int b1, int r2, int g2, int b2) {
int dr = (r1 - r2);
int dg = (g1 - g2);
int db = (b1 - b2);
return Math.sqrt(dr * dr + dg * dg + db * db);
}
Though you can replace it with anything you want. It just needs a color distance routine.
public void colordistancesort(Color[] candidateColors, Color[] indexColors) {
double current;
double distance[] = new double[candidateColors.length];
for (int j = 0; j < candidateColors.length; j++) {
distance[j] = -1;
for (int k = 0; k < indexColors.length; k++) {
current = colordistance(indexColors[k], candidateColors[j]);
if ((distance[j] == -1) || (current < distance[j])) {
distance[j] = current;
}
}
}
//just sorts.
for (int j = 0; j < candidateColors.length; j++) {
for (int k = j + 1; k < candidateColors.length; k++) {
if (distance[j] > distance[k]) {
double d = distance[k];
distance[k] = distance[j];
distance[j] = d;
Color m = candidateColors[k];
candidateColors[k] = candidateColors[j];
candidateColors[j] = m;
}
}
}
}
A: Do you mean that from a set of N colors, you need to pick M colors, where M < N, such that M is the best representation of the N colors in the M space?
As a better example, reduce a true-color (24-bit color space) image to an 8-bit mapped color space (GIF?).
There are quantization algorithms for this, like the Adaptive Spatial Subdivision algorithm used by ImageMagic.
These algorithms usually don't just pick existing colors from the source space but creates new colors in the target space that most closely resemble the source colors. As a simplified example, if you have 3 colors in the original image where two are red (with different intensity or bluish tints etc.) and the third is blue, and need to reduce to two colors, the target image could have a red color that is some kind of average of the original two red + the blue color from the original image.
If you need something else then I didn't understand your question :)
A: You can split them into RGB HEX format so that you can compare the R with the R of a different color, and the same with the G and B.
Same format as HTML
XX XX XX
RR GG BB
00 00 00 = black
ff ff ff = white
ff 00 00 = red
00 ff 00 = green
00 00 ff = blue
So the only thing you would need to decide is how close you want the colors and what is an acceptable difference for the segments to be considered different.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: What is a better file copy alternative than the Windows default? I need to copy hundreds of gigs of random files around on my computer and am pretty leery of using the vanilla file copy built into Windows.
I don't want it to hang on a "Are you sure?", "Are you really sure?", "Even zip files?", "Surely not read-only files too!" loop as soon as I step away.
I don't want it to work for hours and then stop unexpectedly: "Someone once opened this file and so I won't copy it!" and then cancel the whole copy or just quit with no indication of what was done and what work remains.
What file management programs do you have experience with? Which do you recommend?
This question is related to my other question: How can I use an old PATA hard disk drive on my newer SATA-only computer?
A: You really need to use a file Sync tool, like SyncBackSE, MS SyncToy, or even something like WinMerge will do the trick.
I prefer SyncBack as it allows you to set up very explicit rules for just about every possible case and conflict, at least more so than the other two.
With any of these you won't have to keep clicking all the pop-ups and you can verify, without a doubt, that the destination is exactly the same as the source.
A: You can try SuperCopier, it replaces the standard Windows copy mechanism while loaded.
It can retry failed files at the end, resume a canceled copy (even a copy canceled by Windows), and accepts "All" for every answer. You can even answer the annoying questions (file already exists, error copying file) before they occur.
A: Use Robocopy (Robust File Copy).
NOTE:
In Windows Vista and Server 2008 when you type:
xcopy /?
you get:
NOTE: Xcopy is now deprecated, please use Robocopy.
So start getting used to robocopy :)
A: Big thumbs up for robocopy. I use it for doing the sort of things you mention.
For example I'm currently running 5 robocopy sessions on my server where I'm copying about 60GB of files between 3 remote servers, I'm connected to two via a CheckPoint VPN and the other is an Amazon S3 space mapped via JungleDisk.
I'm working with a colleague at the other end of the country. He'll log in to the same servers later tonight and run a similar set of robocopy batch files to download all the changes I'm currently uploading.
The 'killer app' feature is that robocopy will retain file date/time stamps and, by default will ONLY copy files that are different. So you can point it at a huge dir tree and only changed files will be copied.
Here's some useful tips for doing this sort of thing...
/MIR mirrors a dir tree so will delete as well as add
/R:10 tells robocopy to try 10 times to copy the file before giving up. The default is 1,000,000 times
/LOG+:somefilename.log will append the screen output to somefilename.log, creating it if necessary.
/XD dir1 dir2 will ignore any dirs named dir1 or dir2 in the copy. Wildcards can be used.
/FFT will use FAT time stamps which are less accurate than NTFS (uses a 2 sec granularity in timestamps). I also find this one useful when copying between Linux file systems and NTFS.
I typically use something like
robocopy d:\workdir y:\workdir /TEE /LOG+:d:\update.log /MIR /R:5
which will mirror (/MIR) d:\workdir with y:\workdir, append a log of what it does to d:\update.log (/LOG+:d:\update.log) writing output to both the console and the log file (/TEE), and try each file 5 times before moving on to the next one.
It also works with UNC paths.
If you've got a large collection of files that need syncing over a number of PCs then robocopy is your friend.
A: It sounds like a backup-style tool may be what you're looking for.
I've been using SyncBack (one of the versions is free). You could also try out MS SyncToy which tries to make moving, copying, syncing, etc. easy.
If you really do copy just random files at random times, you could try Total Copy which has the added benefit of working well over a network (pause, resume, etc.).
A: Use Robocopy, it has the ability to copy files in "restartable mode", plus it should respect the file attributes. And it comes with Vista and Server 2008, and you can download it for older OS's. Plus you can set it to retry on failed copies, to pick up files that are temporarily in use by another process.
A: Besides XCOPY, RoboCopy and TeraCopy that have already been suggested, you may also try out Total Commander.
A: How about good old Command-Line Xcopy? With S: being the source and T: the target:
xcopy /K /R /E /I /S /C /H /G /X /Y s:\*.* t:\
/K Copies attributes. Normal Xcopy will reset read-only attributes.
/R Overwrites read-only files.
/E Copies directories and subdirectories, including empty ones.
/I If destination does not exist and copying more than one file, assumes that destination must be a directory.
/S Copies directories and subdirectories except empty ones.
/C Continues copying even if errors occur.
/H Copies hidden and system files also.
/Y Suppresses prompting to confirm you want to overwrite an existing destination file.
/G Allows the copying of encrypted files to destination that does not support encryption.
/X Copies file audit settings (implies /O).
(Edit: Added /G and /X which are new since a few years)
A: Powershell scripts might be useful too and surely more flexible than xcopy and other DOS commands. You can easily recurse through sub-directories, filter your files by name or extensions, treat especially some particular files based on the criteria of your choice, etc. The Powershell community web site is a good starting point.
A: I've tried out Copy Handler and it works very well. It has some cool features where you can control buffering depending on the type of media, and with file queuing support you can set up your copy and move operations, forget about them, and minimize disk fragmentation at the same time. It won't copy multiple files simultaneously from a single CD or DVD, as that would make the drive seek too much.
Best of all its Open Source.
A: You can try TeraCopy or RoboCopy.
A: I would definitely prefer:
1) Teracopy - GUI based, replaces the default Windows copy/move UI and adds itself to context menu. Basic version is free (for home use I guess).
2) Robocopy - CLI based, useful when scripting. Free tool from MS and is included in Vista/Windows 2008. MS Technet has a GUI for robocopy as well - useful to create statements that you can later embed in scripts or on the command prompt.
PS: I know these have been already suggested here and I would have voted on them, if I could.
A: Xcopy keeps the Date Modified, only the Date Created and Date Accessed will change.
(tested on XP Pro, try it on a small folder to check if you're using Vista as I did not test it under Vista)
Edit: You MAY want to redirect the Output though:
xcopy /K /R ....... s:\*.* t:\ >c:\xcopy.log 2>&1
That way, if files fail to copy you can check the log (i.e. System Volume Information will generate an error, but that folder does not matter anyway for what you're trying to do)
A: I've been using Copy Handler. The nicest thing about it is that it queues up its jobs like a download manager. It has a shell extension so you can either rightclick drag, or just set copy with copyhandler as the default action.
A: I built myself a PC with 4GB RAM, dual core 1.8GHz 40GB PATA drive primary, and 250GB SATA drive secondary, and installed Windows Vista Business Edition. When I had to copy 120GB of data from my old PATA disk, Vista failed miserably and kept crashing. I definitely recommend Teracopy Free Edition.
A: Besides the already mentioned Robocopy, XXCOPY has a free version. Its syntax is backwards compatible with XCOPY, but has tons of additional options (XXCOPY /HELP > x create a 42kb file with all the options available). For instance, you can delete files with it, include or exclude a list of directories for the copy, use it as a "touch" utility, etc.
I've been using it for years, it's 2 thumbs up.
A: ZTreeWin It's a 32 bit text-mode, tree-structured file/directory manager for Windows. Very easy to use, there is a menu but this also shows the keys for various commands. Easy to navigate around the file system and it has a has split pane mode so you can work with both source and target easily, with only ever a few keystrokes. It is far more effective for getting things done than Windows Explorer or Xcopy.
A: I've tried KillCopy 2.85 and I can say only one thing: it is powerful copy software that can replace the Windows file copy completely. It may be the best of the alternatives I've tested so far. File transfer is very fast; KillCopy is the fastest of these tools and can copy files at 40 MB/s.
The reason for my choice is simple: KillCopy works fine on all Windows platforms, regardless of whether the architecture is 32- or 64-bit.
A: A GUI front-end for xcopy is available at: http://lorenstuff.weebly.com/ (free)
Controls are: input, output, set switches & run. Not a replacement or an improvement on xcopy, just a GUI to simplify operation.
A: Copywhiz program (commercial) seems to solve the exact problems you listed.
A: Xcopy [source] [destination] /e /c /h /o /d
Copies everything that has not previously been copied. Essentially works as restartable, since you can just press up and enter and it will commence where it was up to when you stopped it or it lost the connection. Does not copy files that have already been copied, and preserves ownership and attributes.
It also ignores errors, so if it can't copy something it just keeps going.
I remember it because it's xcopy echo(e)d.
A: Reboot into Linux, mount the drive, and use GNU cp.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38"
} |
Q: Get the current logged in OS user in Adobe Air I need the name of the current logged in user in my Air/Flex application. The application will only be deployed on Windows machines. I think I could attain this by regexing the User directory, but am open to other ways.
A: Also I would try:
File.userDirectory.name
But I don't have Air installed so I can't really test this...
A: This isn't the prettiest approach, but if you know your AIR app will only be run in a Windows environment it works well enough:
public var username:String;
public var process:NativeProcess;
public function getCurrentOSUser():void
{
var nativeProcessStartupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
var file:File = new File("C:/WINDOWS/system32/whoami.exe");
nativeProcessStartupInfo.executable = file;
process = new NativeProcess();
process.addEventListener(ProgressEvent.STANDARD_OUTPUT_DATA, onOutputData);
process.start(nativeProcessStartupInfo);
}
public function onOutputData(event:ProgressEvent):void
{
var output:String = process.standardOutput.readUTFBytes(process.standardOutput.bytesAvailable);
this.username = output.split('\\')[1];
trace("Got username: ", this.username);
}
A: There's a couple of small cleanups you can make...
package
{
import flash.filesystem.File;
public class UserUtil
{
public static function get currentOSUser():String
{
var userDir:String = File.userDirectory.nativePath;
var userName:String = userDir.substr(userDir.lastIndexOf(File.separator) + 1);
return userName;
}
}
}
As Kevin suggested, use File.separator to make the directory splitting cross-platform (just tested on Windows and Mac OS X).
You don't need to use resolvePath("") unless you're looking for a child.
Also, making the function a proper getter allows binding without any further work.
In the above example I put it into a UserUtil class, now I can bind to UserUtil.currentOSUser, e.g:
<?xml version="1.0" encoding="utf-8"?>
<mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
<mx:Label text="{UserUtil.currentOSUser}"/>
</mx:WindowedApplication>
A: Here is a solution that works in XP / Vista, but is definitely expandable to OSX, linux, I'd still be interested in another way.
public static function GetCurrentOSUser():String{
// XP & Vista only.
var userDirectory:String = File.userDirectory.resolvePath("").nativePath;
var startIndex:Number = userDirectory.lastIndexOf("\\") + 1
var stopIndex:Number = userDirectory.length;
var user:String = userDirectory.substring(startIndex, stopIndex);
return user;
}
A: Update way later: there's actually a built in function to get the current user. I think it's in nativeApplication.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: What is unit testing? I saw many questions asking 'how' to unit test in a specific language, but no question asking 'what', 'why', and 'when'.
*
*What is it?
*What does it do for me?
*Why should I use it?
*When should I use it (also when not)?
*What are some common pitfalls and misconceptions?
A: I don't disagree with Dan (although a better choice may just be not to answer)...but...
Unit testing is the process of writing code to test the behavior and functionality of your system.
Obviously tests improve the quality of your code, but that's just a superficial benefit of unit testing. The real benefits are to:
*
*Make it easier to change the technical implementation while making sure you don't change the behavior (refactoring). Properly unit tested code can be aggressively refactored/cleaned up with little chance of breaking anything without noticing it.
*Give developers confidence when adding behavior or making fixes.
*Document your code
*Indicate areas of your code that are tightly coupled. It's hard to unit test code that's tightly coupled
*Provide a means to use your API and look for difficulties early on
*Indicates methods and classes that aren't very cohesive
You should unit test because its in your interest to deliver a maintainable and quality product to your client.
I'd suggest you use it for any system, or part of a system, which models real-world behavior. In other words, it's particularly well suited for enterprise development. I would not use it for throw-away/utility programs. I would not use it for parts of a system that are problematic to test (UI is a common example, but that isn't always the case)
The greatest pitfall is that developers test too large a unit, or they consider a method a unit. This is particularly true if you don't understand Inversion of Control - in which case your unit tests will always turn into end-to-end integration testing. Unit test should test individual behaviors - and most methods have many behaviors.
The greatest misconception is that programmers shouldn't test. Only bad or lazy programmers believe that. Should the guy building your roof not test it? Should the doctor replacing a heart valve not test the new valve? Only a programmer can test that his code does what he intended it to do (QA can test edge cases - how code behaves when it's told to do things the programmer didn't intend, and the client can do acceptance testing - does the code do what the client paid for it to do).
A: I would like to recommend the xUnit Testing Patterns book by Gerard Meszaros. It's large but is a great resource on unit testing. Here is a link to his web site where he discusses the basics of unit testing. http://xunitpatterns.com/XUnitBasics.html
A: I use unit tests to save time.
When building business logic (or data access) testing functionality can often involve typing stuff into a lot of screens that may or may not be finished yet. Automating these tests saves time.
For me unit tests are a kind of modularised test harness. There is usually at least one test per public function. I write additional tests to cover various behaviours.
All the special cases that you thought of when developing the code can be recorded in the code in the unit tests. The unit tests also become a source of examples on how to use the code.
It is a lot faster for me to discover that my new code breaks something in my unit tests then to check in the code and have some front-end developer find a problem.
For data access testing I try to write tests that either have no change or clean up after themselves.
Unit tests aren’t going to be able to solve all the testing requirements. They will be able to save development time and test core parts of the application.
A: The main difference of unit testing, as opposed to "just opening a new project and test this specific code" is that it's automated, thus repeatable.
If you test your code manually, it may convince you that the code is working perfectly - in its current state. But what about a week later, when you made a slight modification in it? Are you willing to retest it again by hand whenever anything changes in your code? Most probably not :-(
But if you can run your tests anytime, with a single click, exactly the same way, within a few seconds, then they will show you immediately whenever something is broken. And if you also integrate the unit tests into your automated build process, they will alert you to bugs even in cases where a seemingly completely unrelated change broke something in a distant part of the codebase - when it would not even occur to you that there is a need to retest that particular functionality.
This is the main advantage of unit tests over hand testing. But wait, there is more:
*
*unit tests shorten the development feedback loop dramatically: with a separate testing department it may take weeks for you to know that there is a bug in your code, by which time you have already forgotten much of the context, thus it may take you hours to find and fix the bug; OTOH with unit tests, the feedback cycle is measured in seconds, and the bug fix process is typically along the lines of an "oh sh*t, I forgot to check for that condition here" :-)
*unit tests effectively document (your understanding of) the behaviour of your code
*unit testing forces you to reevaluate your design choices, which results in simpler, cleaner design
Unit testing frameworks, in turn, make it easy for you to write and run your tests.
A: This is my take on it. I would say unit testing is the practice of writing software tests to verify that your real software does what it is meant to. This started with jUnit in the Java world and has become a best practice in PHP as well with SimpleTest and phpUnit. It's a core practice of Extreme Programming and helps you to be sure that your software still works as intended after editing. If you have sufficient test coverage, you can do major refactoring, bug fixing or add features rapidly with much less fear of introducing other problems.
It's most effective when all unit tests can be run automatically.
Unit testing is generally associated with OO development. The basic idea is to create a script which sets up the environment for your code and then exercises it; you write assertions, specify the intended output that you should receive and then execute your test script using a framework such as those mentioned above.
The framework will run all the tests against your code and then report back success or failure of each test. phpUnit is run from the Linux command line by default, though there are HTTP interfaces available for it. SimpleTest is web-based by nature and is much easier to get up and running, IMO. In combination with xDebug, phpUnit can give you automated statistics for code coverage which some people find very useful.
Some teams write hooks from their subversion repository so that unit tests are run automatically whenever you commit changes.
It's good practice to keep your unit tests in the same repository as your application.
A: LibrarIES like NUnit, xUnit or JUnit are just mandatory if you want to develop your projects using the TDD approach popularized by Kent Beck:
You can read Introduction to Test Driven Development (TDD) or Kent Beck's book Test Driven Development: By Example.
Then, if you want to be sure your tests cover a "good" part of your code, you can use software like NCover, JCover, PartCover or whatever. They'll tell you the coverage percentage of your code. Depending on how much you're adept at TDD, you'll know if you've practiced it well enough :)
A: I was never taught unit testing at university, and it took me a while to "get" it. I read about it, went "ah, right, automated testing, that could be cool I guess", and then I forgot about it.
It took quite a bit longer before I really figured out the point: Let's say you're working on a large system and you write a small module. It compiles, you put it through its paces, it works great, you move on to the next task. Nine months down the line and two versions later someone else makes a change to some seemingly unrelated part of the program, and it breaks the module. Worse, they test their changes, and their code works, but they don't test your module; hell, they may not even know your module exists.
And now you've got a problem: broken code is in the trunk and nobody even knows. The best case is an internal tester finds it before you ship, but fixing code that late in the game is expensive. And if no internal tester finds it...well, that can get very expensive indeed.
The solution is unit tests. They'll catch problems when you write code - which is fine - but you could have done that by hand. The real payoff is that they'll catch problems nine months down the line when you're now working on a completely different project, but a summer intern thinks it'll look tidier if those parameters were in alphabetical order - and then the unit test you wrote way back fails, and someone throws things at the intern until he changes the parameter order back. That's the "why" of unit tests. :-)
A: Unit-testing is the testing of a unit of code (e.g. a single function) without the need for the infrastructure that that unit of code relies on. i.e. test it in isolation.
If, for example, the function that you're testing connects to a database and does an update, in a unit test you might not want to do that update. You would if it were an integration test but in this case it's not.
So a unit test would exercise the functionality enclosed in the "function" you're testing without side effects of the database update.
Say your function retrieved some numbers from a database and then performed a standard deviation calculation. What are you trying to test here? That the standard deviation is calculated correctly or that the data is returned from the database?
In a unit test you just want to test that the standard deviation is calculated correctly. In an integration test you want to test the standard deviation calculation and the database retrieval.
A: Use Testivus. All you need to know is right there :)
A: Unit testing is about writing code that tests your application code.
The Unit part of the name is about the intention to test small units of code (one method for example) at a time.
xUnit is there to help with this testing - they are frameworks that assist with this. Part of that is automated test runners that tell you what test fail and which ones pass.
They also have facilities to setup common code that you need in each test before hand and tear it down when all tests have finished.
You can have a test to check that an expected exception has been thrown, without having to write the whole try catch block yourself.
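For example, a rough NUnit-style sketch (Account and Withdraw are hypothetical; newer NUnit versions provide Assert.Throws, older ones use the [ExpectedException] attribute instead):
[Test]
public void Withdraw_MoreThanBalance_Throws()
{
    var account = new Account(100m);

    // The framework catches and verifies the exception; no try/catch needed.
    Assert.Throws<InvalidOperationException>(() => account.Withdraw(500m));
}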
A: Unit testing is a practice to make sure that the function or module which you are going to implement is going to behave as expected (requirements) and also to make sure how it behaves in scenarios like boundary conditions, and invalid input.
xUnit, NUnit, mbUnit, etc. are tools which help you in writing the tests.
A: I think the point that you don't understand is that unit testing frameworks like NUnit (and the like) will help you in automating small to medium-sized tests. Usually you can run the tests in a GUI (that's the case with NUnit, for instance) by simply clicking a button and then - hopefully - see the progress bar stay green. If it turns red, the framework shows you which test failed and what exactly went wrong. In a normal unit test, you often use assertions, e.g. Assert.AreEqual(expectedValue, actualValue, "some description") - so if the two values are unequal you will see an error saying "some description: expected <expectedValue> but was <actualValue>".
So as a conclusion unit testing will make testing faster and a lot more comfortable for developers. You can run all the unit tests before committing new code so that you don't break the build process of other developers on the same project.
A: Unit testing is, roughly speaking, testing bits of your code in isolation with test code. The immediate advantages that come to mind are:
*
*Running the tests becomes automate-able and repeatable
*You can test at a much more granular level than point-and-click testing via a GUI
Note that if your test code writes to a file, opens a database connection or does something over the network, it's more appropriately categorized as an integration test. Integration tests are a good thing, but should not be confused with unit tests. Unit test code should be short, sweet and quick to execute.
Another way to look at unit testing is that you write the tests first. This is known as Test-Driven Development (TDD for short). TDD brings additional advantages:
*
*You don't write speculative "I might need this in the future" code -- just enough to make the tests pass
*The code you've written is always covered by tests
*By writing the test first, you're forced into thinking about how you want to call the code, which usually improves the design of the code in the long run.
If you're not doing unit testing now, I recommend you get started on it. Get a good book, practically any xUnit-book will do because the concepts are very much transferable between them.
Sometimes writing unit tests can be painful. When it gets that way, try to find someone to help you, and resist the temptation to "just write the damn code". Unit testing is a lot like washing the dishes. It's not always pleasant, but it keeps your metaphorical kitchen clean, and you really want it to be clean. :)
Edit: One misconception comes to mind, although I'm not sure if it's so common. I've heard a project manager say that unit tests made the team write all the code twice. If it looks and feels that way, well, you're doing it wrong. Not only does writing the tests usually speed up development, but it also gives you a convenient "now I'm done" indicator that you wouldn't have otherwise.
A: Test Driven Development has sort of taken over the term Unit Test. As an old timer I will mention the more generic definition of it.
Unit Test also means testing a single component in a larger system. This single component could be a dll, exe, class library, etc. It could even be a single system in a multi-system application. So ultimately Unit Test ends up being the testing of whatever you want to call a single piece of a larger system.
You would then move up to integrated or system testing by testing how all the components work together.
A: First of all, whether speaking about Unit testing or any other kinds of automated testing (Integration, Load, UI testing etc.), the key difference from what you suggest is that it is automated, repeatable and it doesn't require any human resources to be consumed (= nobody has to perform the tests, they usually run at a press of a button).
A: Chipping in on the philosophical pros of unit testing and TDD, here are a few of the key "lightbulb" observations which struck me on my tentative first steps on the road to TDD enlightenment (none original or necessarily news)...
*
*TDD does NOT mean writing twice the amount of code. Test code is typically fairly quick and painless to write, and it is a key, critical part of your design process.
*TDD helps you to realize when to stop coding! Your tests give you confidence that you've done enough for now and can stop tweaking and move on to the next thing.
*The tests and the code work together to achieve better code. Your code could be bad / buggy. Your TEST could be bad / buggy. In TDD you are banking on the chances of BOTH being bad / buggy being fairly low. Often it's the test that needs fixing, but that's still a good outcome.
*TDD helps with coding constipation. You know that feeling that you have so much to do you barely know where to start? It's Friday afternoon, if you just procrastinate for a couple more hours... TDD allows you to flesh out very quickly what you think you need to do, and gets your coding moving quickly. Also, like lab rats, I think we all respond to that big green light and work harder to see it again!
*In a similar vein, these designer types can SEE what they're working on. They can wander off for a juice / cigarette / iphone break and return to a monitor that immediately gives them a visual cue as to where they got to. TDD gives us something similar. It's easier to see where we got to when life intervenes...
*I think it was Fowler who said: "Imperfect tests, run frequently, are much better than perfect tests that are never written at all". I interpret this as giving me permission to write tests where I think they'll be most useful even if the rest of my code coverage is woefully incomplete.
*TDD helps in all kinds of surprising ways down the line. Good unit tests can help document what something is supposed to do, they can help you migrate code from one project to another and give you an unwarranted feeling of superiority over your non-testing colleagues :)
This presentation is an excellent introduction to all the yummy goodness testing entails.
A: I went to a presentation on unit testing at FoxForward 2007 and was told never to unit test anything that works with data. After all, if you test on live data, the results are unpredictable, and if you don't test on live data, you're not actually testing the code you wrote. Unfortunately, that's most of the coding I do these days. :-)
I did take a shot at TDD recently when I was writing a routine to save and restore settings. First, I verified that I could create the storage object. Then, that it had the method I needed to call. Then, that I could call it. Then, that I could pass it parameters. Then, that I could pass it specific parameters. And so on, until I was finally verifying that it would save the specified setting, allow me to change it, and then restore it, for several different syntaxes.
I didn't get to the end, because I needed-the-routine-now-dammit, but it was a good exercise.
A:
What do you do if you are given a pile of crap and seem like you are stuck in a perpetual state of cleanup that you know with the addition of any new feature or code can break the current set because the current software is like a house of cards?
How can we do unit testing then?
You start small. The project I just got into had no unit testing until a few months ago. When coverage was that low, we would simply pick a file that had no coverage and click "add tests".
Right now we're up to over 40%, and we've managed to pick off most of the low-hanging fruit.
(The best part is that even at this low level of coverage, we've already run into many instances of the code doing the wrong thing, and the testing caught it. That's a huge motivator to push people to add more testing.)
A: This answers why you should be doing unit testing.
The 3 videos below cover unit testing in javascript but the general principles apply across most languages.
Unit Testing: Minutes Now Will Save Hours Later - Eric Mann - https://www.youtube.com/watch?v=_UmmaPe8Bzc
JS Unit Testing (very good) - https://www.youtube.com/watch?v=-IYqgx8JxlU
Writing Testable JavaScript - https://www.youtube.com/watch?v=OzjogCFO4Zo
Now, I'm just learning about the subject, so I may not be 100% correct and there's more to it than what I'm describing here, but my basic understanding of unit testing is that you write some test code (which is kept separate from your main code) that calls a function in your main code with the input (arguments) that the function requires, and the test then checks whether it gets back a valid return value. If it does, the unit testing framework that you're using to run the tests shows a green light (all good); if the value is invalid, you get a red light, and you can fix the problem straight away, before you release the new code to production. Without testing you might not have caught the error at all.
So you write tests for your current code and create the code so that it passes the tests. Months later, you or someone else needs to modify the function in your main code; because you had already written test code for that function, you run the tests again, and they may fail because the coder introduced a logic error in the function or made it return something completely different from what that function is supposed to return. Again, without the test in place that error might be hard to track down, as it can affect other code as well and go unnoticed.
Also the fact that you have a computer program that runs through your code and tests it instead of you manually doing it in the browser page by page saves time (unit testing for javascript). Let's say that you modify a function that is used by some script on a web page and it works all well and good for its new intended purpose. But, let's also say for arguments sake that there is another function you have somewhere else in your code that depends on that newly modified function for it to operate properly. This dependent function may now stop working because of the changes that you've made to the first function, however without tests in place that are run automatically by your computer you will not notice that there's a problem with that function until it is actually executed and you'll have to manually navigate to a web page that includes the script which executes the dependent function, only then you notice that there's a bug because of the change that you made to the first function.
To reiterate, having tests that are run while developing your application will catch these kinds of problems as you're coding. Not having the tests in place you'd have to manually go through your whole application and even then it can be hard to spot the bug, naively you send it out into production and after a while a kind user sends you a bug report (which won't be as good as your error messages in a testing framework).
It's quite confusing when you first hear of the subject and you think to yourself: am I not already testing my code? The code that you've written is already working like it is supposed to, so "why do I need another framework?"... Yes, you are already testing your code, but a computer is better at doing it. You just have to write good enough tests for a function/unit of code once, and the rest is taken care of for you by the mighty CPU, instead of you having to manually check that all of your code still works every time you make a change.
Also, you don't have to unit test your code if you don't want to but it pays off as your project/code base starts to grow larger as the chances of introducing bugs increases.
A: Unit-testing and TDD in general enables you to have shorter feedback cycles about the software you are writing. Instead of having a large test phase at the very end of the implementation, you incrementally test everything you write. This increases code quality very much, as you immediately see, where you might have bugs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "228"
} |
Q: What are the current best options for parallelizing a CPU-intensive .NET app? This is an open-ended question. What approaches should I consider?
A: There are some parallel extensions to .NET that are currently in testing and available at Microsoft's Parallel Computing Developer Center. They have a few interesting items that you would expect like Parallel foreach and a parallel version of LINQ called PLINQ. Some of the best information about the extensions is on Channel 9.
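As a rough sketch of the style these extensions encourage (the API later shipped as the Task Parallel Library in .NET 4, so exact names may differ from the CTP; workItems, Process and IsReady are placeholders):
// Data-parallel loop over a collection.
Parallel.ForEach(workItems, item => Process(item));

// PLINQ: the same shape as ordinary LINQ, with .AsParallel() added.
var results = workItems.AsParallel()
                       .Where(item => item.IsReady)
                       .Select(item => Process(item))
                       .ToList();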
A: I think we could also include non-.NET-specific approaches to parallel processing if those are among the best options to consider.
A: @Larsenal
If you want to branch outside of .NET there has been a lot of discussion about Intel's Threading Building Blocks which is a parallel library for C++.
A: Your first step is to find and understand the parallelism in your problem. It is really easy to write multi-threaded code that performs no better than the single-threaded code it replaces. "Patterns for Parallel Programming" (Amazon) is a great introduction to the key concepts.
Once you have a workable design, start reading the articles in the "Concurrency" topic in the MSDN Magazine archives (link), particularly anything written by Jeff Richter. Those will give you the nuts and bolts stuff on the threading constructs specific to Windows and .NET. (The multi-threading section in Richter's "CLR via C#" (Amazon) is short, but very insightful - highly recommended.)
A: There are many options and the best solution will depend on the nature of the problem you are trying to solve. If you are trying to solve an embarrassingly parallel problem then dividing and parallelising the tasks will be trivial. In that case the challenge will come in distributing and managing the data used.
Some suggestions would be:
*
*ICE Grid which has bindings for .Net and other common languages
*Velocity which is Microsoft's version of Oracle (Tangosol) Coherence
*The forthcoming HPC offering from Microsoft Compute Cluster Server
*Data Synapse Grid Server
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Is Windows Server 2008 "Server Core" appropriate for a SQL Server instance? I'm setting up a dedicated SQL Server 2005 box on Windows Server 2008 this week, and would like to pare it down to be as barebones as possible while still being fully functional.
To that end, the "Server Core" option sounds appealing, but I'm not clear about whether or not I can run SQL Server on that SKU. Several services are addressed on the Microsoft website, but I don't see any indication about SQL Server.
Does anyone know definitively?
A: ASP.Net will be enabled on server core in R2.
A: No. For some things, you will need the .net Framework (like reporting services), and you can't install it (in a supported way) in a server core.
A: Server Core 2008 R2 can run Sql Server, but this is unsupported (for now). Check http://www.nullsession.com/2009/06/02/sql-server-2008-on-server-core-2008-r2/ for an article + video on how it's done.
A: Not sure how credible this source is, but:
The Windows Server 2008 Core edition can:
*
*Run the file server role.
*Run the Hyper-V virtualization server role.
*Run the Directory Services role.
*Run the DHCP server role.
*Run the IIS Web server role.
*Run the DNS server role.
*Run Active Directory Lightweight Directory Services.
*Run the print server role.
The Windows Server 2008 Core edition cannot:
*
*Run a SQL Server.
*Run an Exchange Server.
*Run Internet Explorer.
*Run Windows Explorer.
*Host a remote desktop session.
*Run MMC snap-in consoles locally.
A: Server Core won't be very useful (to me at least, and I think many others as well) until they get a version of .Net framework on it. Maybe a specialized subset like they have in the Compact Framework on smart phones.
A: Following are new features for Server 2008 R2 Server Core:
*
*.NET Framework – 2.0, 3.0, 3.5.1, 4.0 are now supported on Server Core installation
*ASP.NET – as .NET is now supported on Server Core R2 ASP.NET can be enabled
*PowerShell
*AD CS – AD Certificate Services role can be installed on Server Core R2 system
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: ASP.Net Custom Client-Side Validation I have a custom validation function in JavaScript in a user control on a .Net 2.0 web site which checks to see that the fee paid is not in excess of the fee amount due.
I've placed the validator code in the ascx file, and I have also tried using Page.ClientScript.RegisterClientScriptBlock() and in both cases the validation fires, but cannot find the JavaScript function.
The output in Firefox's error console is "feeAmountCheck is not defined". Here is the function (this was taken directly from firefox->view source)
<script type="text/javascript">
function feeAmountCheck(source, arguments)
{
var amountDue = document.getElementById('ctl00_footerContentHolder_Fees1_FeeDue');
var amountPaid = document.getElementById('ctl00_footerContentHolder_Fees1_FeePaid');
if (amountDue.value > 0 && amountDue >= amountPaid)
{
arguments.IsValid = true;
}
else
{
arguments.IsValid = false;
}
return arguments;
}
</script>
Any ideas as to why the function isn't being found? How can I remedy this without having to add the function to my master page or consuming page?
A: When you're using .Net 2.0 and Ajax - you should use:
ScriptManager.RegisterClientScriptBlock
It will work better in Ajax environments then the old Page.ClientScript version
A: Try changing the argument names to sender and args. And, after you have it working, switch the call over to ScriptManager.RegisterClientScriptBlock, regardless of AJAX use.
A: While I would still like an answer to why my javascript wasn't being recognized, the solution I found in the meantime (and should have done in the first place) is to use an Asp:CompareValidator instead of an Asp:CustomValidator.
A: Also you could use:
var amountDue = document.getElementById('<%=YourControlName.ClientID%>');
That will automatically resolve the client id for the element without you having to figure out that it's called 'ctl00_footerContentHolder_Fees1_FeeDue'.
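Putting those pieces together, a sketch of the validator that uses ClientID, and compares the numeric values rather than the DOM elements themselves, might look like this (FeeDue and FeePaid are placeholder control names):
<script type="text/javascript">
function feeAmountCheck(sender, args)
{
    var amountDue = parseFloat(document.getElementById('<%=FeeDue.ClientID%>').value);
    var amountPaid = parseFloat(document.getElementById('<%=FeePaid.ClientID%>').value);

    // Valid when something is due and the payment does not exceed it.
    args.IsValid = (amountDue > 0 && amountDue >= amountPaid);
}
</script>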
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38"
} |
Q: Make XAMPP / Apache serve file outside of htdocs folder Is it possible to configure xampp to serve up a file outside of the htdocs directory?
For instance, say I have a file located as follows:
C:\projects\transitCalculator\trunk\TransitCalculator.php
and my xampp files are normally served out from:
C:\xampp\htdocs\
(because that's the default configuration) Is there some way to make Apache recognize and serve up my TransitCalculator.php file without moving it under htdocs? Preferably I'd like Apache to serve up/have access to the entire contents of the projects directory, and I don't want to move the projects directory under htdocs.
edit: edited to add Apache to the question title to make Q/A more "searchable"
A: Solution to allow Apache 2 to host websites outside of htdocs:
Underneath the "DocumentRoot" directive in httpd.conf, you should see a directory block. Replace this directory block with:
<Directory />
Options FollowSymLinks
AllowOverride All
Allow from all
</Directory>
REMEMBER NOT TO USE THIS CONFIGURATION IN A REAL ENVIRONMENT
A: A VirtualHost would also work for this and may work better for you as you can host several projects without the need for subdirectories. Here's how you do it:
httpd.conf (or extra\httpd-vhosts.conf relative to httpd.conf. Trailing slashes "\" might cause it not to work):
NameVirtualHost *:80
# ...
<VirtualHost *:80>
DocumentRoot C:\projects\transitCalculator\trunk\
ServerName transitcalculator.localhost
<Directory C:\projects\transitCalculator\trunk\>
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
HOSTS file (c:\windows\system32\drivers\etc\hosts usually):
# localhost entries
127.0.0.1 localhost transitcalculator.localhost
Now restart XAMPP and you should be able to access http://transitcalculator.localhost/ and it will map straight to that directory.
This can be helpful if you're trying to replicate a production environment where you're developing a site that will sit on the root of a domain name. You can, for example, point to files with absolute paths that will carry over to the server:
<img src="/images/logo.png" alt="My Logo" />
whereas in an environment using aliases or subdirectories, you'd need keep track of exactly where the "images" directory was relative to the current file.
A: Ok, per pix0r's, Sparks' and Dave's answers it looks like there are three ways to do this:
Virtual Hosts
*
*Open C:\xampp\apache\conf\extra\httpd-vhosts.conf.
*Un-comment ~line 19 (NameVirtualHost *:80).
*Add your virtual host (~line 36):
<VirtualHost *:80>
DocumentRoot C:\Projects\transitCalculator\trunk
ServerName transitcalculator.localhost
<Directory C:\Projects\transitCalculator\trunk>
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
*Open your hosts file (C:\Windows\System32\drivers\etc\hosts).
*Add
127.0.0.1 transitcalculator.localhost #transitCalculator
to the end of the file (before the Spybot - Search & Destroy stuff if you have that installed).
*Save (You might have to save it to the desktop, change the permissions on the old hosts file (right click > properties), and copy the new one into the directory over the old one (or rename the old one) if you are using Vista and have trouble).
*Restart Apache.
Now you can access that directory by browsing to http://transitcalculator.localhost/.
Make an Alias
*
*Starting ~line 200 of your httpd.conf file, copy everything between <Directory "C:/xampp/htdocs"> and </Directory> (~line 232) and paste it immediately below with C:/xampp/htdocs replaced with your desired directory (in this case C:/Projects) to give your server the correct permissions for the new directory.
*Find the <IfModule alias_module></IfModule> section (~line 300) and add
Alias /transitCalculator "C:/Projects/transitCalculator/trunk"
(or whatever is relevant to your desires) below the Alias comment block, inside the module tags.
Change your document root
*
*Edit ~line 176 in C:\xampp\apache\conf\httpd.conf; change DocumentRoot "C:/xampp/htdocs" to #DocumentRoot "C:/Projects" (or whatever you want).
*Edit ~line 203 to match your new location (in this case C:/Projects).
Notes:
*
*You have to use forward slashes "/" instead of back slashes "\".
*Don't include the trailing "/" at the end.
*restart your server.
A: You can set Apache to serve pages from anywhere, with whatever restrictions you like, but it's normally distributed in a more secure form.
Editing your Apache files (httpd.conf is the most common name) will allow you to set any folder so it appears in your webroot.
EDIT:
Alias /myapp "C:/myapp/"
I've edited my answer to include the format for creating an alias in the httpd.conf file, which is sort of like a shortcut in Windows or a symlink under un*x where Apache 'pretends' a folder is in the webroot. This is probably going to be more useful to you in the long term.
A: If you're trying to get XAMPP to use a network drive as your document root you have to use UNC paths in httpd.conf. XAMPP will not recognize your mapped network drives.
For example the following won't work,
DocumentRoot "X:/webroot"
But this will,
DocumentRoot "//192.168.10.100/webroot" (note the forward slashes, not back slashes)
A: You can relocate it by editing the DocumentRoot setting in XAMPP\apache\conf\httpd.conf.
It should currently be:
C:/xampp/htdocs
Change it to:
C:/projects/transitCalculator/trunk
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "329"
} |
Q: Bandwidth throttling in IIS 6 by IP Address I am writing an application that downloads large files in the background. All clients are logged in locally, or through a VPN. When they are logged in locally, I do not want to throttle downloads. However, I would like to limit downloads to 10 KBps when the user is connected via VPN. I can differentiate between these users by IP Address range.
Since this is an AIR Application, I figure I will throttle via server-side since I can do it from either the server itself (IIS 6) or the web service (asp.net / C#).
Throttling through IIS 6 seems to work fine, but it seems like it has to be done across the entire web site. Is there anyway to do this via IP? Or will I have to rig this up in .NET?
A: My first thought is this. I don't know if it would work but it would only take a few minutes to try.
Create two IIS web sites on the same server. The first site is bound to the public IP, but the second site is bound to the private IP. Both point to the same folder on the file system.
Your VPN users will be accessing via the private IP, so you can setup a "site-wide" rule on that site that will only affect VPN users. This should work for almost any IIS6 setting, including bandwidth throttling.
Worth a try, at least.
--
Edit: Tried this and it worked flawlessly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: How can I get the authenticated user name under Apache using plain HTTP authentication and PHP? First, let's get the security considerations out of the way. I'm using simple authentication under Apache for a one-off, internal use only, non-internet connected LAN, PHP web app.
How can I get the HTTP authenticated user name in PHP?
A: I think that you are after this
$username = $_SERVER['PHP_AUTH_USER'];
$password = $_SERVER['PHP_AUTH_PW'];
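Those variables are only populated when Apache has actually performed the authentication, so it can be worth guarding against the unauthenticated case, for example:
if (isset($_SERVER['PHP_AUTH_USER'])) {
    $username = $_SERVER['PHP_AUTH_USER'];
} else {
    // Not authenticated (or PHP isn't receiving the auth variables in this configuration).
    $username = null;
}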
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54"
} |
Q: What is the best way to handle multiple permission types? I often encounter the following scenario where I need to offer many different types of permissions. I primarily use ASP.NET / VB.NET with SQL Server 2000.
Scenario
I want to offer a dynamic permission system that can work on different parameters. Let's say that I want to give either a department or just a specific person access to an application. And pretend that we have a number of applications that keeps growing.
In the past, I have chosen one of the following two ways that I know to do this.
*
*Use a single permission table with special columns that are used for
determining how to apply the parameters. The special columns in
this example are TypeID and TypeAuxID. The SQL would look something
like this.
SELECT COUNT(PermissionID)
FROM application_permissions
WHERE
(TypeID = 1 AND TypeAuxID = @UserID) OR
(TypeID = 2 AND TypeAuxID = @DepartmentID)
AND ApplicationID = 1
*Use a mapping table for each type of permission, then joining them
all together.
SELECT COUNT(perm.PermissionID)
FROM application_permissions perm
LEFT JOIN application_UserPermissions emp
ON perm.ApplicationID = emp.ApplicationID
LEFT JOIN application_DepartmentPermissions dept
ON perm.ApplicationID = dept.ApplicationID
WHERE q.SectionID=@SectionID
AND (emp.UserID=@UserID OR dept.DeptID=@DeptID OR
(emp.UserID IS NULL AND dept.DeptID IS NULL)) AND ApplicationID = 1
ORDER BY q.QID ASC
My Thoughts
I hope that the examples make sense. I cobbled them together.
The first example requires less work, but neither of them feel like the best answer. Is there a better way to handle this?
A: In addition to John Downey and jdecuyper's solutions, I've also added an "Explicit Deny" bit at the end/beginning of the bitfield, so that you can perform additive permissions by group, role membership, and then subtract permissions based upon explicit deny entries, much like NTFS works, permission-wise.
A: Honestly the ASP.NET Membership / Roles features would work perfectly for the scenario you described. Writing your own tables / procs / classes is a great exercise and you can get very nice control over minute details, but after doing this myself I've concluded it's better to just use the built in .NET stuff. A lot of existing code is designed to work around it which is nice at well. Writing from scratch took me about 2 weeks and it was no where near as robust as .NETs. You have to code so much crap (password recovery, auto lockout, encryption, roles, a permission interface, tons of procs, etc) and the time could be better spent elsewhere.
Sorry if I didn't answer your question, I'm like the guy who says to learn c# when someone asks a vb question.
A: I agree with John Downey.
Personally, I sometimes use a flagged enumeration of permissions. This way you can use AND, OR, NOT and XOR bitwise operations on the enumeration's items.
[Flags]
public enum Permission
{
    VIEWUSERS = 1,      // 2^0 // 0000 0001
    EDITUSERS = 2,      // 2^1 // 0000 0010
    VIEWPRODUCTS = 4,   // 2^2 // 0000 0100
    EDITPRODUCTS = 8,   // 2^3 // 0000 1000
    VIEWCLIENTS = 16,   // 2^4 // 0001 0000
    EDITCLIENTS = 32,   // 2^5 // 0010 0000
    DELETECLIENTS = 64, // 2^6 // 0100 0000
}
Then, you can combine several permissions using the OR bitwise operator.
For example, if a user can view & edit users, the binary result of the operation is 0000 0011 which converted to decimal is 3.
You can then store the permission of one user into a single column of your Database (in our case it would be 3).
Inside your application, you just need another bitwise operation (AND) to verify whether a user has a particular permission or not.
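A small sketch of that check, assuming the stored integer has been loaded into a variable:
Permission userPermissions = (Permission)storedValue; // e.g. 3 read from the database

// Grant permissions by combining with OR; test them by masking with AND.
bool canEditUsers = (userPermissions & Permission.EDITUSERS) == Permission.EDITUSERS;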
A: The way I typically go about coding permission systems is having 6 tables.
*
*Users - this is pretty straight forward it is your typical users table
*Groups - this would be synonymous to your departments
*Roles - this is a table with all permissions generally also including a human readable name and a description
*Users_have_Groups - this is a many-to-many table of what groups a user belongs to
*Users_have_Roles - another many-to-many table of what roles are assigned to an individual user
*Groups_have_Roles - the final many-to-many table of what roles each group has
At the beginning of a user's session you would run some logic that pulls out every role they have assigned, either directly or through a group. Then you code against those roles as your security permissions.
Like I said this is what I typically do but your mileage may vary.
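For what it's worth, the query that pulls a user's effective roles out of a schema like that is roughly the following (table and column names are illustrative):
SELECT r.RoleName
FROM Roles r
INNER JOIN Users_have_Roles ur ON ur.RoleID = r.RoleID
WHERE ur.UserID = @UserID

UNION

SELECT r.RoleName
FROM Roles r
INNER JOIN Groups_have_Roles gr ON gr.RoleID = r.RoleID
INNER JOIN Users_have_Groups ug ON ug.GroupID = gr.GroupID
WHERE ug.UserID = @UserID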
A: An approach I've used in various applications is to have a generic PermissionToken class which has a changeable Value property. Then you query the requested application, it tells you which PermissionTokens are needed in order to use it.
For example, the Shipping application might tell you it needs:
new PermissionToken()
{
Target = PermissionTokenTarget.Application,
Action = PermissionTokenAction.View,
Value = "ShippingApp"
};
This can obviously be extended to Create, Edit, Delete etc and, because of the custom Value property, any application, module or widget can define its own required permissions. YMMV, but this has always been an efficient method for me which I have found to scale well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Using object property as default for method property I'm trying to do this (which produces an unexpected T_VARIABLE error):
public function createShipment($startZip, $endZip, $weight =
$this->getDefaultWeight()){}
I don't want to put a magic number in there for weight since the object I am using has a "defaultWeight" parameter that all new shipments get if you don't specify a weight. I can't put the defaultWeight in the shipment itself, because it changes from shipment group to shipment group. Is there a better way to do it than the following?
public function createShipment($startZip, $endZip, $weight = 0){
if($weight <= 0){
$weight = $this->getDefaultWeight();
}
}
A: Neat trick with Boolean OR operator:
public function createShipment($startZip, $endZip, $weight = 0){
$weight or $weight = $this->getDefaultWeight();
...
}
A: This isn't much better:
public function createShipment($startZip, $endZip, $weight=null){
$weight = !$weight ? $this->getDefaultWeight() : $weight;
}
// or...
public function createShipment($startZip, $endZip, $weight=null){
if ( !$weight )
$weight = $this->getDefaultWeight();
}
A: This will allow you to pass a weight of 0 and still work properly. Notice the === operator: it checks that weight matches null in both value and type (as opposed to ==, which only compares value, so in PHP 0 == null evaluates to true and a weight of 0 would incorrectly trigger the default).
PHP:
public function createShipment($startZip, $endZip, $weight=null){
if ($weight === null)
$weight = $this->getDefaultWeight();
}
A: You can use a static class member to hold the default:
class Shipment
{
public static $DefaultWeight = '0';
public function createShipment($startZip,$endZip,$weight=Shipment::DefaultWeight) {
// your function
}
}
A: Improving upon Kevin's answer if you are using PHP 7 you may do:
public function createShipment($startZip, $endZip, $weight=null){
$weight = $weight ?: $this->getDefaultWeight();
}
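Note that ?: falls back for any falsy value, so a legitimate weight of 0 would also be replaced by the default. If you only want the fallback when no weight is passed at all, PHP 7's null coalescing operator is a closer fit, something like:
public function createShipment($startZip, $endZip, $weight = null){
    $weight = $weight ?? $this->getDefaultWeight();
}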
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: Modify Address Bar URL in AJAX App to Match Current State I'm writing an AJAX app, but as the user moves through the app, I'd like the URL in the address bar to update despite the lack of page reloads. Basically, I'd like for them to be able to bookmark at any point and thereby return to the current state.
How are people handling maintaining RESTfulness in AJAX apps?
A: SWFAddress works in Flash & Javascript projects and lets you create bookmarkable URLs (using the hash method mentioned above) as well as giving you back-button support.
http://www.asual.com/swfaddress/
A: The window.location.hash method is the preferred way of doing things. For an explanation of how to do it,
Ajax Patterns - Unique URLs.
YUI has an implementation of this pattern as a module, which includes IE specific work arounds for getting the back button working along with re-writing the address using the hash. YUI Browser History Manager.
Other frameworks have similar implementations as well. The important point is if you want the history to work along with the re-writing the address, the different browsers need different ways of handling it. (This is detailed in the first link article.)
IE needs an iframe based hack, where Firefox will produce double history using the same method.
A: If OP or others are still looking for a way to modify browser history to enable state, using pushState and replaceState, as suggested by IESUS, is the 'right' way to do it now. Its main advantage over location.hash seems to be that it creates actual URLs, not just hashes. If browser history using hashes is saved, and then revisited with JavaScript disabled, the app won't work, since the hashes aren't sent to the server. However, if pushState has been used, the entire route will be sent to the server, which you can then build to respond appropriately to the routes. I saw an example where the same mustache templates were used on both the server and the client side. If the client had JavaScript enabled, he would get snappy responses by avoiding the roundtrip to the server, but the app would work perfectly fine without the JavaScript. Thus, the app can gracefully degrade in the absence of JavaScript.
Also, I believe there is some framework out there, with a name like history.js. For browsers that support HTML5, it uses pushState, but if the browser doesn't support that, it automatically falls back to using hashes.
A: Check if user is 'in' the page, when you click on the URL bar, JavaScript says you are out of page.
If you change the URL bar and press 'ENTER' with the symbol '#' within it then you go into the page again, without click on the page manually with mouse cursor, then a keyboard event command (document.onkeypress) from JavaScript will be able to check if it's enter and active the JavaScript for redirection.
You can check if user is IN the page with window.onfocus and check if he's out with window.onblur.
Yeah, it's possible.
;)
A: Look at sites like book.cakephp.org. This site changes the URL without using the hash and use AJAX. I'm not sure how it does it exactly but I've been trying to figure it out. If anyone knows, let me know.
Also github.com when navigating within a certain project.
A: It is unlikely the writer wants to reload or redirect his visitor when using Ajax.
But why not use HTML5's pushState/replaceState?
You'll be able to modify the addressbar as much as you like. Get natural looking urls, with AJAX.
Check out the code on my latest project:
http://iesus.se/
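A bare-bones sketch of the pushState approach (the state object, URL and showView function are placeholders):
// When an AJAX update changes the application state:
history.pushState({ view: 'foo' }, '', '/foo');

// When the user navigates back/forward, restore the matching state:
window.onpopstate = function (event) {
    if (event.state) {
        showView(event.state.view);
    }
};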
A: The way to do this is to manipulate location.hash when AJAX updates result in a state change that you'd like to have a discrete URL. For example, if your page's url is:
http://example.com/
If a client side function executed this code:
// AJAX code to display the "foo" state goes here.
location.hash = 'foo';
Then, the URL displayed in the browser would be updated to:
http://example.com/#foo
This allows users to bookmark the "foo" state of the page, and use the browser history to navigate between states.
With this mechanism in place, you'll then need to parse out the hash portion of the URL on the client side using JavaScript to create and display the appropriate initial state, as fragment identifiers (the part after the #) are not sent to the server.
Ben Alman's hashchange plugin makes the latter a breeze if you're using jQuery.
A: This is similar to what Kevin said. You can have your client state as some javascript object, and when you want to save the state, you serialize the object (using JSON and base64 encoding). You can then set the fragment of the href to this string.
var encodedState = base64(json(state));
var newLocation = oldLocationWithoutFragment + "#" + encodedState;
document.location = newLocation; // adds new entry in browser history
document.location.replace(newLocation); // replaces current entry in browser history
The first way will treat the new state as a new location (so the back button will take them to the previous location). The latter does not.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "167"
} |
Q: How do you express binary literals in Python? How do you express an integer as a binary number with Python literals?
I was easily able to find the answer for hex:
>>> 0x12AF
4783
>>> 0x100
256
and octal:
>>> 01267
695
>>> 0100
64
How do you use literals to express binary in Python?
Summary of Answers
*
*Python 2.5 and earlier: can express binary using int('01010101111',2) but not with a literal.
*Python 2.5 and earlier: there is no way to express binary literals.
*Python 2.6 beta: You can do like so: 0b1100111 or 0B1100111.
*Python 2.6 beta: will also allow 0o27 or 0O27 (second character is the letter O) to represent an octal.
*Python 3.0 beta: Same as 2.6, but will no longer allow the older 027 syntax for octals.
A: >>> print int('01010101111',2)
687
>>> print int('11111111',2)
255
Another way.
A:
How do you express binary literals in Python?
They're not "binary" literals, but rather, "integer literals". You can express integer literals with a binary format with a 0 followed by a B or b followed by a series of zeros and ones, for example:
>>> 0b0010101010
170
>>> 0B010101
21
From the Python 3 docs, these are the ways of providing integer literals in Python:
Integer literals are described by the following lexical definitions:
integer ::= decinteger | bininteger | octinteger | hexinteger
decinteger ::= nonzerodigit (["_"] digit)* | "0"+ (["_"] "0")*
bininteger ::= "0" ("b" | "B") (["_"] bindigit)+
octinteger ::= "0" ("o" | "O") (["_"] octdigit)+
hexinteger ::= "0" ("x" | "X") (["_"] hexdigit)+
nonzerodigit ::= "1"..."9"
digit ::= "0"..."9"
bindigit ::= "0" | "1"
octdigit ::= "0"..."7"
hexdigit ::= digit | "a"..."f" | "A"..."F"
There is no limit for the length of integer literals apart from what
can be stored in available memory.
Note that leading zeros in a non-zero decimal number are not allowed.
This is for disambiguation with C-style octal literals, which Python
used before version 3.0.
Some examples of integer literals:
7 2147483647 0o177 0b100110111
3 79228162514264337593543950336 0o377 0xdeadbeef
100_000_000_000 0b_1110_0101
Changed in version 3.6: Underscores are now allowed for grouping purposes in literals.
Other ways of expressing binary:
You can have the zeros and ones in a string object which can be manipulated (although you should probably just do bitwise operations on the integer in most cases) - just pass int the string of zeros and ones and the base you are converting from (2):
>>> int('010101', 2)
21
You can optionally have the 0b or 0B prefix:
>>> int('0b0010101010', 2)
170
If you pass it 0 as the base, it will assume base 10 if the string doesn't specify with a prefix:
>>> int('10101', 0)
10101
>>> int('0b10101', 0)
21
Converting from int back to human readable binary:
You can pass an integer to bin to see the string representation of a binary literal:
>>> bin(21)
'0b10101'
And you can combine bin and int to go back and forth:
>>> bin(int('010101', 2))
'0b10101'
You can use a format specification as well, if you want to have minimum width with preceding zeros:
>>> format(int('010101', 2), '{fill}{width}b'.format(width=10, fill=0))
'0000010101'
>>> format(int('010101', 2), '010b')
'0000010101'
A: For reference—future Python possibilities:
Starting with Python 2.6 you can express binary literals using the prefix 0b or 0B:
>>> 0b101111
47
You can also use the new bin function to get the binary representation of a number:
>>> bin(173)
'0b10101101'
Development version of the documentation: What's New in Python 2.6
A: I've tried this in Python 3.6.9
Convert Binary to Decimal
>>> 0b101111
47
>>> int('101111',2)
47
Convert Decimal to binary
>>> bin(47)
'0b101111'
If you place 0 as the second parameter, Python infers the base from the string's prefix; with no prefix the string is treated as decimal.
>>> int('101111',0)
101111
A: The leading 0 here makes Python 2 treat the string as octal (base 8, not 10) when the base argument is 0, which is pretty easy to see:
>>> int('010101', 0)
4161
If you don't start with a 0, then python assumes the number is base 10.
>>> int('10101', 0)
10101
A: Another good method to get an integer representation from binary is to use eval()
Like so:
def getInt(binNum = 0):
    return eval('0b' + str(binNum))
I guess this is a way to do it too.
I hope this is a satisfactory answer :D
A: I am pretty sure this is one of the things due to change in Python 3.0 with perhaps bin() to go with hex() and oct().
EDIT:
lbrandy's answer is correct in all cases.
A: As far as I can tell Python, up through 2.5, only supports hexadecimal & octal literals. I did find some discussions about adding binary to future versions but nothing definite.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "394"
} |
Q: What are the preferred versions of Vim and Emacs on Mac OS X? For those of us that like to use the graphical version of Vim or Emacs, instead of the console version, which version do you recommend?
For Vim, there's Mac OS X Vim, MacVim, Vim-Cocoa.
For Emacs, CarbonEmacs, XEmacs, and Aquamacs.
Are there more? Which of these are ready for prime-time? If it's a tough call, what are the trade-offs? Are all of these still being maintained?
No discussion of Vim versus Emacs, if you don't mind, or comparisons with other editors.
A: I've tried Aquamacs and it's very usable and looks pretty good. Supports both traditional Mac OS style keyboard shortcuts (command-O, command-S) and the Control/Meta shortcuts for those raised on traditional Emacs. It is definitely more Mac-like than Carbon Emacs. It seems stable and fast, but I am not an Emacs guru so I don't stress it all that much when I use it. I can't speak to the extensiveness of the included elisp packages, either.
Someone syncs Carbon Emacs with the upstream tree quarterly I think. Aquamacs has a more irregular schedule, but it's seen some pretty major updates over the last year.
A: I just download the Emacs source from the GNU site and build it myself. I don't like too many Mac-specific features, because I want Emacs behavior to be consistent on all the platforms I use.
A: MacVim works well and certainly looks more mature than Vim-Cocoa, moreover there is a Cocoa plugin architecture in the pipeline for MacVim (and someone is already working on a TextMate style file browser tray plugin which is a huge ++ IMHO).
There was also a Carbon version of Vim, but this didn't offer a great deal over the Terminal version. i.e. only allowed one window open, not very OSX in appearance...
GNU Emacs for OSX can be found at emacsformacosx.com. In addition to the latest stable release, there are also pre-release test builds and nightly builds, and Atom feeds are provided for tracking all three release types.
A: I like the Nextstep-derived Emacs.app formerly at http://emacs-app.sourceforge.net/ now integrated in Emacs-23 CVS (as of August 2008).
Emacs.app feels more zippy than Aquamacs to me but its just bare CVS-Emacs and doesn't come with the same amount of stuff (you have to install your own AucTeX etc.).
A: Personally, I've been using fink to install xemacs. It requires X but I've been
using xemacs for so long that I need what it has.
Additionally, I have installed gnu emacs. It's nice because it is a completely
integrated mac os x application with a dock icon and everything. I find it useful
when dragging a file on top of the gnu emacs icon to open it.
Last, I should mention that mac os x uses the emacs keystrokes all over the place.
stuff like ^A for beginning of text, ^E for end of text, ^N next line, ^P previous
line, etc... These work in most text boxes throughout the OS.
A: I'm using MacVim on Mac OS X. It's very, very nice.
A: I get all my unixish/GNU support using Fink (which provides Debian-like package control) with the emacs22-carbon package which means I also get a clickable application. It does everything I expect it to do, and automagically starts using emacs extensions loaded with fink.
Good times.
A: I use the CarbonEmacs version on the Macports program. It installs all the dependencies with just one line:
sudo port install emacs
For anyone interested in Macports (www.macports.org)
A: Some time ago, I was searching for a text editor for my new Mac. Since this was some months ago, some points might have been corrected in the meantime.
I feel that Aquamacs is by far the best OSX Emacs. However, it feels a bit too Mac-like in some areas. For example, it prefers several windows over several buffers, and the coloring schemes are not "normal" Emacs-style.
If you look for a more basic set, Carbon Emacs might do it as well, though you might want to add some additional packages to add PHP support or AucTeX.
Emacs.app feels broken in my opinion. It not even opens files using drag and drop.
A: Of the emacsen for Mac OS X, I have to say that after the console version of 22, CarbonEmacs is the most usable. Aquamacs just does too many non-standard (read: unexpected) things with configuration. Aquamacs questions generally aren't answered in any sort of timely manner in #emacs on freenode, for whatever that is worth. It seems to be held in disdain simply because it does such a terrible job of handling standard configuration options in .emacs.
A: I love CarbonEmacs because it sticks very close to the standard GNU Emacs distribution, while still fitting in nicely with the Mac desktop. To me, it "felt" like Emacs on my Ubuntu desktop even if it looked like a Mac application.
A: Emacs 22 has worked pretty well for me.
A: I prefer Vim built from the Subversion repository. I run it in the console where I don't need to use the mouse while editing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
} |
Q: Integrating Visual Studio Test Project with Cruise Control I'm looking into using Visual Studio 2008's built in unit test projects instead of NUnit and I was wondering if anyone has any experience in trying to integrate this type of unit test project with Cruise Control.Net.
A: From some of the initial research it doesn't appear to be a super simple solution.
It appears that doing this involves having Visual Studio 2008 actually installed on the continuous integration server, which could be a deal breaker.
Then configure the MSTest.exe to run in the tasks list, but first you'll have to make a batch file to delete the results files from previous passes as this file's existence causes an error.
Then create a xslt to format the results and put it into the dashboard.config file.
The code project article I found has a lot more detail.
Integrating Visual Studio Team System 2008 Unit Tests with CruiseControl.NET
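For reference, the task-list portion of ccnet.config ends up looking roughly like this; the paths, test container and results file name are placeholders for whatever your build uses:
<tasks>
  <!-- MSTest refuses to run if the results file already exists, so delete it first. -->
  <exec>
    <executable>cmd.exe</executable>
    <buildArgs>/c del /q "C:\Builds\MyProject\TestResults.trx"</buildArgs>
  </exec>
  <exec>
    <executable>C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe</executable>
    <buildArgs>/testcontainer:MyProject.Tests.dll /resultsfile:C:\Builds\MyProject\TestResults.trx</buildArgs>
  </exec>
</tasks>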
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: How do I give my websites an icon for iPhone? How do I set the icon that appears properly on the iPhone for the websites I have created?
A: Add this in your head section: <link rel="apple-touch-icon" href="/your-icon.png"/>. Safari on the iPhone looks for the apple-touch-icon link (or a file named apple-touch-icon.png at the site root) when a user adds your page to the home screen; a plain <link rel="icon" href="/your-icon-url"/> only sets the regular favicon.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: Different Distributed Version Control Systems working together My office has a central Source Safe 2005 install that we use for source control. I can't change what the office uses on the server.
I develop on a laptop and would like to have a different local source control repository that can sync with the central server (when available) regardless of the what that central provider is. The reason for the request is so I can maintain a local stable branch/build for client presentations while continuing to develop without having to jump through flaming hoops. Also, as a consultant, my clients may request that I use their source control provider and flexibility here would make life easier.
Can any of the existing distributed source control clients handle that?
A: You should be able to check out the current version of the code and then create a git repository around it. Updating that and committing it to your local git repository should be painless. As should cloning it.
The only catch is that you need to have them both ignore each other (I've done something similar with SVN) by messing with the appropriate ignore files. I'm presuming SourceSafe lets you ignore things. And you'll need to do certain operations twice (like telling both that you are deleting a file).
A: Well... KernelTrap has something on this. Looks like you can use vss2svn to pipe the Source Safe repo into a Subversion repository, then use the very nice git-svn to pull into a local git repo.
I would assume the commits back to VSS would not be a smooth, automatic process using this method.
A: These days I work at a company that uses VSS (and I have worked at other companies that use other, lesser-known SCMs), but I prefer to use SVN (someday I'll try Git) for active development, for me and my group.
First of all, this setup is only a good idea if commits to VSS are few per month, because working with another SCM (other than VSS) gives you more flexibility, but committing to VSS from SVN is expensive in time.
My solution was:
VSS -> SVN: I have a Linux script (or Ant script, or whatever script) that copies from the current VSS working directory to the current SVN working copy, then refreshes the SVN client and updates/merges/commits to SVN. With this, you stay up to date with changes from the rest of the company that uses VSS.
SVN -> VSS: Going this way, you need to check out all of your modified files in VSS, then you can simply use the reverse script to copy from the current SVN working directory (ignoring the .svn directories) to the current VSS working directory, then update and commit.
But remember, it is only worth your time to do this in a few cases.
A: This episode of HanselMinutes covers exactly what I was hoping to hear. Apparently Git can be used locally then attached to external subversion/vss repositories as need. They talk about it 14 ~ 15 minutes in.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Hiding inherited members I'm looking for some way to effectively hide inherited members. I have a library of classes which inherit from common base classes. Some of the more recent descendant classes inherit dependency properties which have become vestigial and can be a little confusing when using IntelliSense or using the classes in a visual designer.
These classes are all controls that are written to be compiled for either WPF or Silverlight 2.0. I know about ICustomTypeDescriptor and ICustomPropertyProvider, but I'm pretty certain those can't be used in Silverlight.
It's not as much a functional issue as a usability issue. What should I do?
Update
Some of the properties that I would really like to hide come from ancestors that are not my own and because of a specific tool I'm designing for, I can't do member hiding with the new operator. (I know, it's ridiculous)
A: I think you're best least hackish way is to consider composition as opposed to inheritance.
Or, you could create an interface that has the members you want, have your derived class implement that interface, and program against the interface.
A: I know there's been several answers to this, and it's quite old now, but the simplest method to do this is just declare them as new private.
Consider an example I am currently working on, where I have an API that makes available every method in a 3rd party DLL. I have to take their methods, but I want to use a .Net property, instead of a "getThisValue" and "setThisValue" method. So, I build a second class, inherit the first, make a property that uses the get and set methods, and then override the original get and set methods as private. They're still available to anyone wanting to build something different on them, but if they just want to use the engine I'm building, then they'll be able to use properties instead of methods.
Using the two-class method gets rid of any restrictions on using the new declaration to hide the members. You simply can't use override if the members aren't marked as virtual.
public class APIClass
{
private const string DllName = "external.dll";
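// Note: P/Invoke entry points like the ones below would normally be declared static extern.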
[DllImport(DllName)]
public extern unsafe uint external_setSomething(int x, uint y);
[DllImport(DllName)]
public extern unsafe uint external_getSomething(int x, uint* y);
public enum valueEnum
{
On = 0x01000000,
Off = 0x00000000,
OnWithOptions = 0x01010000,
OffWithOptions = 0x00010000
}
}
public class APIUsageClass : APIClass
{
public int Identifier;
private APIClass m_internalInstance = new APIClass();
public valueEnum Something
{
get
{
unsafe
{
valueEnum y;
fixed (valueEnum* yPtr = &y)
{
m_internalInstance.external_getSomething(Identifier, (uint*)yPtr);
}
return y;
}
}
set
{
m_internalInstance.external_setSomething(Identifier, (uint)value);
}
}
new private uint external_setSomething(int x, uint y) { return 0; }
new private unsafe uint external_getSomething(int x, uint* y) { return 0; }
}
Now valueEnum is available to both classes, but only the property is visible in the APIUsageClass class. The APIClass class is still available for people who want to extend the original API or use it in a different way, and the APIUsageClass is available for those who want something more simple.
Ultimately, what I'll be doing is making the APIClass internal, and only expose my inherited class.
A: To fully hide a member and mark it as not to be used, including hiding it from IntelliSense (which I believe is what most readers expect), apply all of the following:
[Obsolete("Not applicable in this class.")]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
[Browsable(false), EditorBrowsable(EditorBrowsableState.Never)]
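For instance, a hedged sketch of how these might be applied to an inherited property (BaseControl and LegacyText are made-up names for illustration):
using System;
using System.ComponentModel;

public class BaseControl            // stand-in for the ancestor you don't own
{
    public virtual string LegacyText { get; set; }
}

public class MyControl : BaseControl
{
    // Hidden from IntelliSense, the designer and the property grid,
    // and flagged so callers get a warning if they use it anyway.
    [Obsolete("Not applicable in this class.")]
    [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
    [Browsable(false), EditorBrowsable(EditorBrowsableState.Never)]
    public override string LegacyText
    {
        get { return base.LegacyText; }
        set { base.LegacyText = value; }
    }
}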
A: Override them as Michael suggests above, and to prevent folks from using the overridden methods, mark them as obsolete:
[Obsolete("These are not supported in this class.", true)]
public override void dontcallmeanymore()
{
}
If the second parameter is set to true, a compiler error will be generated if anyone tries to call that method, and the string in the first parameter is the message. If the second parameter is false, only a compiler warning will be generated.
A: While you cannot prevent usage of those inherited members to my knowledge, you should be able to hide them from IntelliSense using the EditorBrowsableAttribute:
using System.ComponentModel;
[EditorBrowsable(EditorBrowsableState.Never)]
private string MyHiddenString = "Muahahahahahahahaha";
Edit: Just saw this in the documentation comments, which makes it kinda useless for this purpose:
There is a prominent note that states that this attribute "does not suppress members from a class in the same assembly". That is true but not complete. Actually, the attribute does not suppress members from a class in the same solution.
A: One potential thing you can do is contain the object rather than extend from the other class. This will give you the most flexibility in terms of exposing what you want to expose, but if you absolutely need the object to be of that type it is not the ideal solution (however you could expose the object from a getter).
Thus:
public class MyClass : BaseClass
{
// Your stuff here
}
Becomes:
public class MyClass
{
private BaseClass baseClass;
public void ExposeThisMethod()
{
baseClass.ExposeThisMethod();
}
}
Or:
public class MyClass
{
private BaseClass baseClass;
public BaseClass BaseClass
{
get
{
return baseClass;
}
}
}
A: I tested all of the proposed solutions and they do not really hide new members.
But this one DOES:
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public new string MyHiddenProperty
{
get { return _myHiddenProperty; }
}
But in code-behind it's still accessible, so also add the Obsolete attribute:
[Obsolete("This property is not supported in this class", true)]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public new string MyHiddenProperty
{
get { return _myHiddenProperty; }
}
A: While it is clearly stated above that it is not possible in C# to change the access modifiers on inherited methods and properties, I overcame this issue through a sort of "fake inheritance" using implicit casting.
Example:
public class A
{
public int var1;
public int var2;
public A(int var1, int var2)
{
this.var1 = var1;
this.var2 = var2;
}
public void Method1(int i)
{
var1 = i;
}
public int Method2()
{
return var1+var2;
}
}
Now let's say you want a class B to inherit from class A, but you want to change some accessibility or even change Method1 entirely:
public class B
{
private A parent;
public B(int var1, int var2)
{
parent = new A(var1, var2);
}
int var1
{
get {return this.parent.var1;}
}
int var2
{
get {return this.parent.var2;}
set {this.parent.var2 = value;}
}
public void Method1(int i)
{
this.parent.Method1(i*i);
}
private int Method2()
{
return this.parent.Method2();
}
public static implicit operator A(B b)
{
return b.parent;
}
}
By including the implicit cast at the end, it allows us to treat B objects as As when we need to. It can also be useful to define an implicit cast from A->B.
The biggest flaw to this approach is that you need to re-write every method/property that you intend to "inherit".
There are probably even more flaws to this approach, but I like to use it as a sort of "fake inheritance".
Note:
While this allows for changing the accessibility of public properties, it doesn't solve the issue of making protected properties public.
A: You can use an interface
public static void Main()
{
NoRemoveList<string> testList = ListFactory<string>.NewList();
testList.Add(" this is ok ");
// not ok
//testList.RemoveAt(0);
}
public interface NoRemoveList<T>
{
T this[int index] { get; }
int Count { get; }
void Add(T item);
}
public class ListFactory<T>
{
private class HiddenList: List<T>, NoRemoveList<T>
{
// no access outside
}
public static NoRemoveList<T> NewList()
{
return new HiddenList();
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48"
} |
Q: Reducing duplicate error handling code in C#? I've never been completely happy with the way exception handling works; there's a lot that exceptions and try/catch bring to the table (stack unwinding, etc.), but they seem to break a lot of the OO model in the process.
Anyway, here's the problem:
Let's say you have some class which wraps or includes networked file IO operations (e.g. reading and writing to some file at some particular UNC path somewhere). For various reasons you don't want those IO operations to fail, so if you detect that they fail you retry them and you keep retrying them until they succeed or you reach a timeout. I already have a convenient RetryTimer class which I can instantiate and use to sleep the current thread between retries and determine when the timeout period has elapsed, etc.
The problem is that you have a bunch of IO operations in several methods of this class, and you need to wrap each of them in try-catch / retry logic.
Here's an example code snippet:
RetryTimer fileIORetryTimer = new RetryTimer(TimeSpan.FromHours(10));
bool success = false;
while (!success)
{
try
{
// do some file IO which may succeed or fail
success = true;
}
catch (IOException e)
{
if (fileIORetryTimer.HasExceededRetryTimeout)
{
throw e;
}
fileIORetryTimer.SleepUntilNextRetry();
}
}
So, how do you avoid duplicating most of this code for every file IO operation throughout the class? My solution was to use anonymous delegate blocks and a single method in the class which executed the delegate block passed to it. This allowed me to do things like this in other methods:
this.RetryFileIO( delegate()
{
// some code block
} );
I like this somewhat, but it leaves a lot to be desired. I'd like to hear how other people would solve this sort of problem.
A: Just wondering, what do you feel your method leaves to be desired? You could replace the anonymous delegate with a named delegate, something like:
public delegate void IoOperation(params string[] parameters);
public void FileDeleteOperation(params string[] fileName)
{
File.Delete(fileName[0]);
}
public void FileCopyOperation(params string[] fileNames)
{
File.Copy(fileNames[0], fileNames[1]);
}
public void RetryFileIO(IoOperation operation, params string[] parameters)
{
RetryTimer fileIORetryTimer = new RetryTimer(TimeSpan.FromHours(10));
bool success = false;
while (!success)
{
try
{
operation(parameters);
success = true;
}
catch (IOException e)
{
if (fileIORetryTimer.HasExceededRetryTimeout)
{
throw;
}
fileIORetryTimer.SleepUntilNextRetry();
}
}
}
public void Foo()
{
this.RetryFileIO(FileDeleteOperation, @"L:\file.to.delete" );
this.RetryFileIO(FileCopyOperation, @"L:\file.to.copy.source", @"L:\file.to.copy.destination" );
}
A: You could also use a more OO approach:
*
*Create a base class that does the error handling and calls an abstract method to perform the concrete work (Template Method pattern); a sketch of this follows below.
*Create concrete classes for each operation.
This has the advantage of naming each type of operation you perform and gives you a Command pattern - operations have been represented as objects.
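A minimal sketch of that Template Method / Command approach (all class names are invented, and a simple attempt count stands in for the RetryTimer from the question):
using System;
using System.IO;
using System.Threading;

// Template Method: the base class owns the retry/error handling,
// subclasses supply only the concrete file IO.
public abstract class RetryingFileOperation
{
    private readonly int maxAttempts;

    protected RetryingFileOperation(int maxAttempts)
    {
        this.maxAttempts = maxAttempts;
    }

    // The template method: the retry loop lives here, exactly once.
    public void Execute()
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                DoWork();
                return;
            }
            catch (IOException)
            {
                if (attempt >= maxAttempts)
                {
                    throw;
                }
                Thread.Sleep(TimeSpan.FromSeconds(5));
            }
        }
    }

    // Each concrete operation implements only this part.
    protected abstract void DoWork();
}

// One small Command-style class per IO operation.
public class CopyFileOperation : RetryingFileOperation
{
    private readonly string source;
    private readonly string destination;

    public CopyFileOperation(string source, string destination)
        : base(5)
    {
        this.source = source;
        this.destination = destination;
    }

    protected override void DoWork()
    {
        File.Copy(source, destination, true);
    }
}
Calling code then just creates the operation it needs, e.g. new CopyFileOperation(@"\\server\share\in.dat", @"\\server\share\out.dat").Execute();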
A: Here's what I did recently. It has probably been done elsewhere better, but it seems pretty clean and reusable.
I have a utility method that looks like this:
public delegate void WorkMethod();
static public void DoAndRetry(WorkMethod wm, int maxRetries)
{
int curRetries = 0;
do
{
try
{
wm.Invoke();
return;
}
catch (Exception e)
{
curRetries++;
if (curRetries > maxRetries)
{
throw new Exception("Maximum retries reached", e);
}
}
} while (true);
}
Then in my application, I use C#'s lambda expression syntax to keep things tidy:
Utility.DoAndRetry( () => ie.GoTo(url), 5);
This calls my method and retries up to 5 times. At the fifth attempt, the original exception is rethrown inside of a retry exception.
A: This looks like an excellent opportunity to have a look at Aspect Oriented Programming. Here is a good article on AOP in .NET. The general idea is that you'd extract the cross-functional concern (i.e. Retry for x hours) into a separate class and then you'd annotate any methods that need to modify their behaviour in that way. Here's how it might look (with a nice extension method on Int32)
[RetryFor( 10.Hours() )]
public void DeleteArchive()
{
//.. code to just delete the archive
}
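The Int32 extension method the answer alludes to might look like the sketch below. Note that C# attribute arguments must be compile-time constants, so a real aspect would more likely take the raw number of hours (e.g. [RetryFor(10)]) and build the TimeSpan internally; the extension is still handy in ordinary code:
using System;

public static class TimeSpanExtensions
{
    // Lets you write 10.Hours() instead of TimeSpan.FromHours(10).
    public static TimeSpan Hours(this int value)
    {
        return TimeSpan.FromHours(value);
    }
}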
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: Linux shell equivalent on IIS As a LAMP developer considering moving to a .Net IIS platform, one of my concerns is the loss of productivity due to lack of shell... Has anyone else had this experience? Is there possibly a Linux shell equivalent for Windows?
A: Depending on what version of IIS you're considering, I would second lbrandy's recommendation to check out PowerShell. Microsoft is working on a PowerShell provider for IIS (specifically version 7). There is a decent post about this at http://blogs.iis.net/thomad/archive/2008/04/14/iis-7-0-powershell-provider-tech-preview-1.aspx. The upcoming version of PowerShell will also add remoting capabilities so that you can remotely manage machines. PowerShell is quite different from *NIX shells, though, so that is something to consider.
Hope this helps.
A: Are you asking about a Linux shell as in an environment to work in? For that, Cygwin has been around the longest, I think, and is pretty robust: http://www.cygwin.com/
A while ago I found a Windows port of all the popular Linux commands I use (ls, grep, diff); I simply unzip those, add the location to my PATH environment variable and then can run them from there: http://unxutils.sourceforge.net/
Or are you talking about executing shell commands from within your code? If you're in the .NET sphere, there is the Process.Start() method that will give you a lot of options.
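For example, a rough sketch of shelling out from .NET and capturing the output (the command and path are just illustrative):
using System;
using System.Diagnostics;

class ShellExample
{
    static void Main()
    {
        // Run a command through cmd.exe and read what it prints.
        ProcessStartInfo psi = new ProcessStartInfo("cmd.exe", "/c dir C:\\inetpub");
        psi.UseShellExecute = false;       // required to redirect output
        psi.RedirectStandardOutput = true;
        psi.CreateNoWindow = true;

        using (Process p = Process.Start(psi))
        {
            string output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
            Console.WriteLine(output);
        }
    }
}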
Hope this helps!
A: I assume you don't mean cygwin, right?
How about powershell, then?
A: If you're referring to simply accessing your IIS server from a remote location, remote desktop generally solves that problem. Assuming your server has a static IP address or a host name you can access from the internet, remote desktop is a simple and relatively secure solution.
Is there a problem with this answer? Now I have negative reputation...
A: The best way I can think of would be to use Cygwin over an OpenSSH connection.
Here's a document that explains how to do just that:
http://www.ucl.ac.uk/cert/openssh_rdp_vnc.pdf
A: Remote shell doesn't solve the productivity issue. (It merely makes things possible.)
From what I've heard, everything that the future Microsoft GUIs do will be possible to do with PowerShell, since the GUIs use the same APIs as those that are available from PowerShell.
Personally, I love Cygwin, but Cygwin cannot help you manage Microsoft applications.
You might be surprised, however, how powerful the Windows Scripting Host is when coupled with Windows Management Instrumentation. I think IIS is fully manageable with WMI or some COM objects that can be easily used from a JScript WSH script.
A: You should make your choice of server platform based on the environment as a whole, and that includes the admin/management interfaces supplied.
I'm afraid that if you don't like the way Windows implements management of IIS, then that's too bad. Having said that, a bit of delving around in the WMI interfaces will generally yield a solution that you should find usable. I used to do quite a bit of WMI scripting (mostly via PowerShell) in order to have a reliable environment rebuild capability.
A: If you want a Linux shell on Windows, install the Windows Subsystem for Linux on Windows 10 :
The Windows Subsystem for Linux lets developers run a GNU/Linux environment -- including most command-line tools, utilities, and applications -- directly on Windows, unmodified, without the overhead of a traditional virtual machine or dualboot setup.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: What are the correct pixel dimensions for an apple-touch-icon? I'm not sure what the correct size should be.
Many sites seem to repeat that the apple-touch-icon should be 57x57 pixels but cite a broken link as their source.
Hanselman's and playgroundblues's comments suggest different sizes including 163x163 and 60x60.
Apple's own apple.com icon is 129x129!
See my related question:
How do I give my web sites an icon for iPhone?
A: It seems that Apple guidelines as of August 3, 2010 now include the "High resolution" images (for iPhone 4) in their "required" icon sizes.
Looks like we need to provide both a 57x57 and a 114x114 image now, as well as a 640x960 title image.
See Custom Icon and Image Creation Guidelines (Javascript required) which is part of a whole document:
*
*iOS Human Interface Guidelines (2013; by Apple Inc; PDF; 26,3 MB)
A: It depends on how much detail you want it to have; it needs to have an aspect ratio of 1:1 (basically, it needs to be square).
I would go with Apple's own 129x129.
A: Apple specs specify new sizes for iOS7:
*
*60x60
*76x76
*120x120
*152x152
Whereas old sizes for iOS6 and prior are:
*
*57x57
*72x72
*114x114
*144x144
By the way, precomposed icons are deprecated.
As a consequence, to support both new devices (running iOS7) and older ones (iOS6 and prior), these 8 pictures must be present, and the generic code is:
<link rel="apple-touch-icon" sizes="57x57" href="/apple-touch-icon-57x57.png">
<link rel="apple-touch-icon" sizes="114x114" href="/apple-touch-icon-114x114.png">
<link rel="apple-touch-icon" sizes="72x72" href="/apple-touch-icon-72x72.png">
<link rel="apple-touch-icon" sizes="144x144" href="/apple-touch-icon-144x144.png">
<link rel="apple-touch-icon" sizes="60x60" href="/apple-touch-icon-60x60.png">
<link rel="apple-touch-icon" sizes="120x120" href="/apple-touch-icon-120x120.png">
<link rel="apple-touch-icon" sizes="76x76" href="/apple-touch-icon-76x76.png">
<link rel="apple-touch-icon" sizes="152x152" href="/apple-touch-icon-152x152.png">
In addition, you should create a 152x152 picture named apple-touch-icon.png.
You might want to know that this favicon generator can generate all these pictures at once. Full disclosure: I'm the author of this site.
A: The official size is 57x57. I would recommend using the exact size simply due to the fact that it takes less memory when loaded (unless Apple caches the scaled representation). With that said, Rex is right that any square size will work
A: I don't think there is a "correct size". Since the iPhone really is running OSX, the icon rendering system is pretty robust. As long as you give it a high-quality image with the right aspect ratio and a resolution at least as high as the actual output will be, the OS will downscale very cleanly. My site uses a 158x158 and the icon looks pixel-perfect on the iPhone screen.
A: Updated list October 2014, iOS8
List for iPhone and iPad with and without retina
<link rel="apple-touch-icon" href="touch-icon-iphone.png">
<link rel="apple-touch-icon" sizes="76x76" href="touch-icon-ipad.png">
<link rel="apple-touch-icon" sizes="120x120" href="touch-icon-iphone-retina.png">
<link rel="apple-touch-icon" sizes="152x152" href="touch-icon-ipad-retina.png">
<link rel="apple-touch-icon" sizes="180x180" href="touch-icon-iphone-6-plus.png">
Update 2014 iOS 8:
For iOS 8 and iPhone 6 plus
<link rel="apple-touch-icon" sizes="180x180" href="touch-icon-iphone-6-plus.png">
iPhone 6 uses the same 120 x 120 px image as iPhone 4 and 5; the rest is the same as for iOS 7.
Update 2013 iOS7:
For iOS 7 the recommended resolutions changed:
*
*for iPhone Retina from 114 x 114 px to 120 x 120 px
*for iPad Retina from 144 x 144 px to 152 x 152 px
The other resolution are still the same
*
*57 x 57 px default
*76 x 76 px for iPads without retina
A: You don't have to bother about the correct size anymore. If you have the iTunes artwork file (i.e. a 1024*1024 file) of your icon, then I have created this application which will provide you all the icons based on the information provided here. Get the application from here, and follow the instructions in the readme file to create all the required icons for your iOS application.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "73"
} |
Q: What is the best way to copy a database? When I want to make a copy of a database, I always create a new empty database, and then restore a backup of the existing database into it. However, I'm wondering if this is really the least error-prone, least complicated, and most efficient way to do this?
A: It is possible to skip the step of creating the empty database. You can create the new database as part of the restore process.
This is actually the easiest and best way I know of to clone a database. You can eliminate errors by scripting the backup and restore process rather than running it through the SQL Server Management Studio
There are two other options you could explore:
*
*Detach the database, copy the .mdf file and re-attach.
*Use SQL Server Integration Services (SSIS) to copy all the objects over
I suggest sticking with backup and restore and automating if necessary.
A: Here's a dynamic sql script I've used in the past. It can be further modified but it will give you the basics. I prefer scripting it to avoid the mistakes you can make using the Management Studio:
Declare @OldDB varchar(100)
Declare @NewDB varchar(100)
Declare @vchBackupPath varchar(255)
Declare @query varchar(8000)
/*Test code to implement
Select @OldDB = 'Pubs'
Select @NewDB = 'Pubs2'
Select @vchBackupPath = '\\dbserver\C$\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\pubs.bak'
*/
SET NOCOUNT ON;
Select @query = 'Create Database ' + @NewDB
exec(@query)
Select @query = '
Declare @vBAKPath varchar(256)
declare @oldMDFName varchar(100)
declare @oldLDFName varchar(100)
declare @newMDFPath varchar(100)
declare @newLDFPath varchar(100)
declare @restQuery varchar(800)
select @vBAKPath = ''' + @vchBackupPath + '''
select @oldLDFName = name from ' + @OldDB +'.dbo.sysfiles where filename like ''%.ldf%''
select @oldMDFName = name from ' + @OldDB +'.dbo.sysfiles where filename like ''%.mdf%''
select @newMDFPath = physical_name from ' + @NewDB +'.sys.database_files where type_desc = ''ROWS''
select @newLDFPath = physical_name from ' + @NewDB +'.sys.database_files where type_desc = ''LOG''
select @restQuery = ''RESTORE DATABASE ' + @NewDB +
' FROM DISK = N'' + '''''''' + @vBAKpath + '''''''' +
'' WITH MOVE N'' + '''''''' + @oldMDFName + '''''''' +
'' TO N'' + '''''''' + @newMDFPath + '''''''' +
'', MOVE N'' + '''''''' + @oldLDFName + '''''''' +
'' TO N'' + '''''''' + @newLDFPath + '''''''' +
'', NOUNLOAD, REPLACE, STATS = 10''
exec(@restQuery)
--print @restQuery'
exec(@query)
A: Backup and Restore is the most straight-forward way I know. You have to be careful between servers as security credentials don't come with the restored database.
A: The Publish to Provider functionality has worked great for me. See Scott Gu's Blog Entry.
If you need something really robust look at redgate software's tools here...if you are doing much SQL at all, these are worth the $$.
A: ::================ BackUpAllMyDatabases.cmd ============= START
::BackUpAllMyDatabases.cmd
:: COMMAND LINE BATCH SCRIPT FOR TAKING BACKUP OF ALL DATABASES
::RUN THE SQL SCRIPT VIA THE COMMAND LINE WITH LOGGING
sqlcmd -S localhost -e -i "BackUpAllMyDatabases.sql" -o Result_Of_BackUpAllMyDatabases.log
::VIEW THE RESULTS
Result_Of_BackUpAllMyDatabases.log
::pause
::================ BackUpAllMyDatabases.cmd ============= END
--=================================================BackUpAllMyDatabases.sql start
DECLARE @DBName varchar(255)
DECLARE @DATABASES_Fetch int
DECLARE DATABASES_CURSOR CURSOR FOR
select
DATABASE_NAME = db_name(s_mf.database_id)
from
sys.master_files s_mf
where
-- ONLINE
s_mf.state = 0
-- Only look at databases to which we have access
and has_dbaccess(db_name(s_mf.database_id)) = 1
-- Not master, tempdb or model
--and db_name(s_mf.database_id) not in ('Master','tempdb','model')
group by s_mf.database_id
order by 1
OPEN DATABASES_CURSOR
FETCH NEXT FROM DATABASES_CURSOR INTO @DBName
WHILE @@FETCH_STATUS = 0
BEGIN
declare @DBFileName varchar(256)
set @DBFileName = @DbName + '_' + replace(convert(varchar, getdate(), 112), '-', '.') + '.bak'
--REMEMBER TO PUT HERE THE TRAILING \ FOR THE DIRECTORY !!!
exec ('BACKUP DATABASE [' + @DBName + '] TO DISK = N''D:\DATA\BACKUPS\' +
@DBFileName + ''' WITH NOFORMAT, INIT, NAME = N''' +
@DBName + '-Full Database Backup'', SKIP, NOREWIND, NOUNLOAD, STATS = 100')
FETCH NEXT FROM DATABASES_CURSOR INTO @DBName
END
CLOSE DATABASES_CURSOR
DEALLOCATE DATABASES_CURSOR
--BackUpAllMyDatabases==========================end
--======================RestoreDbFromFile.sql start
-- Restore database from file
-----------------------------------------------------------------
use master
go
declare @backupFileName varchar(100), @restoreDirectory varchar(100),
@databaseDataFilename varchar(100), @databaseLogFilename varchar(100),
@databaseDataFile varchar(100), @databaseLogFile varchar(100),
@databaseName varchar(100), @execSql nvarchar(1000)
-- Set the name of the database to restore
set @databaseName = 'ReplaceDataBaseNameHere'
-- Set the path to the directory containing the database backup
set @restoreDirectory = 'ReplaceRestoreDirectoryHere' -- such as 'c:\temp\'
-- Create the backup file name based on the restore directory, the database name and today's date
set @backupFileName = @restoreDirectory + @databaseName + '-' + replace(convert(varchar, getdate(), 110), '-', '.') + '.bak'
-- set @backupFileName = 'D:\DATA\BACKUPS\server.poc_test_fbu_20081016.bak'
-- Get the data file and its path
select @databaseDataFile = rtrim([Name]),
@databaseDataFilename = rtrim([Filename])
from master.dbo.sysaltfiles as files
inner join
master.dbo.sysfilegroups as groups
on
files.groupID = groups.groupID
where DBID = (
select dbid
from master.dbo.sysdatabases
where [Name] = @databaseName
)
-- Get the log file and its path
select @databaseLogFile = rtrim([Name]),
@databaseLogFilename = rtrim([Filename])
from master.dbo.sysaltfiles as files
where DBID = (
select dbid
from master.dbo.sysdatabases
where [Name] = @databaseName
)
and
groupID = 0
print 'Killing active connections to the "' + @databaseName + '" database'
-- Create the sql to kill the active database connections
set @execSql = ''
select @execSql = @execSql + 'kill ' + convert(char(10), spid) + ' '
from master.dbo.sysprocesses
where db_name(dbid) = @databaseName
and
DBID <> 0
and
spid <> @@spid
exec (@execSql)
print 'Restoring "' + @databaseName + '" database from "' + @backupFileName + '" with '
print ' data file "' + @databaseDataFile + '" located at "' + @databaseDataFilename + '"'
print ' log file "' + @databaseLogFile + '" located at "' + @databaseLogFilename + '"'
set @execSql = '
restore database [' + @databaseName + ']
from disk = ''' + @backupFileName + '''
with
file = 1,
move ''' + @databaseDataFile + ''' to ' + '''' + @databaseDataFilename + ''',
move ''' + @databaseLogFile + ''' to ' + '''' + @databaseLogFilename + ''',
norewind,
nounload,
replace'
exec sp_executesql @execSql
exec('use ' + @databaseName)
go
-- If needed, restore the database user associated with the database
/*
exec sp_revokedbaccess 'myDBUser'
go
exec sp_grantdbaccess 'myDBUser', 'myDBUser'
go
exec sp_addrolemember 'db_owner', 'myDBUser'
go
use master
go
*/
--======================RestoreDbFromFile.sql
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: Mechanisms for tracking DB schema changes What are the best methods for tracking and/or automating DB schema changes? Our team uses Subversion for version control and we've been able to automate some of our tasks this way (pushing builds up to a staging server, deploying tested code to a production server) but we're still doing database updates manually. I would like to find or create a solution that allows us to work efficiently across servers with different environments while continuing to use Subversion as a backend through which code and DB updates are pushed around to various servers.
Many popular software packages include auto-update scripts which detect DB version and apply the necessary changes. Is this the best way to do this even on a larger scale (across multiple projects and sometimes multiple environments and languages)? If so, is there any existing code out there that simplifies the process or is it best just to roll our own solution? Has anyone implemented something similar before and integrated it into Subversion post-commit hooks, or is this a bad idea?
While a solution that supports multiple platforms would be preferable, we definitely need to support the Linux/Apache/MySQL/PHP stack as the majority of our work is on that platform.
A: Scott Ambler produces a great series of articles (and co-authored a book) on database refactoring, with the idea that you should essentially apply TDD principles and practices to maintaining your schema. You set up a series of structure and seed data unit tests for the database. Then, before you change anything, you modify/write tests to reflect that change.
We have been doing this for a while now and it seems to work. We wrote code to generate basic column name and datatype checks in a unit testing suite. We can rerun those tests anytime to verify that the database in the SVN checkout matches the live db the application is actually running.
As it turns out, developers also sometimes tweak their sandbox database and neglect to update the schema file in SVN. The code then depends on a db change that hasn't been checked in. That sort of bug can be maddeningly hard to pin down, but the test suite will pick it up right away. This is particularly nice if you have it built into a larger Continuous Integration plan.
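A hedged sketch of what one of those structure checks can look like (the table, column, connection string and use of NUnit are all placeholders for whatever you actually use):
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class SchemaTests
{
    private const string ConnectionString =
        "Data Source=.;Initial Catalog=MyAppDb;Integrated Security=True";

    [Test]
    public void Customers_Email_column_exists_with_expected_type()
    {
        using (SqlConnection conn = new SqlConnection(ConnectionString))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS " +
                "WHERE TABLE_NAME = 'Customers' AND COLUMN_NAME = 'Email'", conn);

            object dataType = cmd.ExecuteScalar();

            Assert.IsNotNull(dataType, "Column Customers.Email is missing");
            Assert.AreEqual("nvarchar", (string)dataType);
        }
    }
}
Run against each developer's sandbox and against the checked-in schema, a failing test like this points straight at the drifted column.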
A: Dump your schema into a file and add it to source control. Then a simple diff will show you what changed.
A: K. Scott Allen has a decent article or two on schema versioning, which uses the incremental update scripts/migrations concept referenced in other answers here; see http://odetocode.com/Blogs/scott/archive/2008/01/31/11710.aspx.
A: In the Rails world, there's the concept of migrations, scripts in which changes to the database are made in Ruby rather than a database-specific flavour of SQL. Your Ruby migration code ends up being converted into the DDL specific to your current database; this makes switching database platforms very easy.
For every change you make to the database, you write a new migration. Migrations typically have two methods: an "up" method in which the changes are applied and a "down" method in which the changes are undone. A single command brings the database up to date, and can also be used to bring the database to a specific version of the schema. In Rails, migrations are kept in their own directory in the project directory and get checked into version control just like any other project code.
This Oracle guide to Rails migrations covers migrations quite well.
Developers using other languages have looked at migrations and have implemented their own language-specific versions. I know of Ruckusing, a PHP migrations system that is modelled after Rails' migrations; it might be what you're looking for.
A: We use something similar to bcwoord to keep our database schemata synchronized across 5 different installations (production, staging and a few development installations), and backed up in version control, and it works pretty well. I'll elaborate a bit:
To synchronize the database structure, we have a single script, update.php, and a number of files numbered 1.sql, 2.sql, 3.sql, etc. The script uses one extra table to store the current version number of the database. The N.sql files are crafted by hand, to go from version (N-1) to version N of the database.
They can be used to add tables, add columns, migrate data from an old to a new column format then drop the column, insert "master" data rows such as user types, etc. Basically, it can do anything, and with proper data migration scripts you'll never lose data.
The update script works like this:
*
*Connect to the database.
*Make a backup of the current database (because stuff will go wrong) [mysqldump].
*Create bookkeeping table (called _meta) if it doesn't exist.
*Read current VERSION from _meta table. Assume 0 if not found.
*For all .sql files numbered higher than VERSION, execute them in order
*If one of the files produced an error: roll back to the backup
*Otherwise, update the version in the bookkeeping table to the highest .sql file executed.
Everything goes into source control, and every installation has a script to update to the latest version with a single script execution (calling update.php with the proper database password etc.). We SVN update staging and production environments via a script that automatically calls the database update script, so a code update comes with the necessary database updates.
We can also use the same script to recreate the entire database from scratch; we just drop and recreate the database, then run the script which will completely repopulate the database. We can also use the script to populate an empty database for automated testing.
It took only a few hours to set up this system, it's conceptually simple and everyone gets the version numbering scheme, and it has been invaluable in having the ability to move forward and evolving the database design, without having to communicate or manually execute the modifications on all databases.
Beware when pasting queries from phpMyAdmin though! Those generated queries usually include the database name, which you definitely don't want since it will break your scripts! Something like CREATE TABLE mydb.newtable(...) will fail if the database on the system is not called mydb. We created a pre-commit SVN hook that will disallow .sql files containing the mydb string, which is a sure sign that someone copy/pasted from phpMyAdmin without proper checking.
A: It's kind of low tech, and there might be a better solution out there, but you could just store your schema in an SQL script which can be run to create the database. I think you can execute a command to generate this script, but I don't know the command unfortunately.
Then, commit the script into source control along with the code that works on it. When you need to change the schema along with the code, the script can be checked in along with the code that requires the changed schema. Then, diffs on the script will indicate diffs on schema changes.
With this script, you could integrate it with DBUnit or some kind of build script, so it seems it could fit in with your already automated processes.
A: If you are using C#, have a look at Subsonic, a very useful ORM tool, but is also generates sql script to recreated your scheme and\or data. These scripts can then be put into source control.
http://subsonicproject.com/
A: We use a very simple but yet effective solution.
For new installs, we have a metadata.sql file in the repository which holds all the DB schema, then in the build process we use this file to generate the database.
For updates, we add the updates in the software hardcoded. We keep it hardcoded because we don't like solving problems before it really IS a problem, and this kind of thing didn't prove to be a problem so far.
So in our software we have something like this:
RegisterUpgrade(1, 'ALTER TABLE XX ADD XY CHAR(1) NOT NULL;');
This code will check if the database is in version 1 (which is stored in a table created automatically), if it is outdated, then the command is executed.
To update the metadata.sql in the repository, we run this upgrades locally and then extract the full database metadata.
The only thing that happens every so often is forgetting to commit the metadata.sql, but this isn't a major problem because it's easy to test in the build process, and the worst that could happen is making a new install with an outdated database and upgrading it on first use.
Also, we don't support downgrades, but this is by design; if something breaks on an update, we restore the previous version and fix the update before trying again.
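The answer doesn't say which language the application is written in, but a rough C# sketch of the mechanism behind a RegisterUpgrade call like the one above might be (the SchemaVersion table and all names here are invented for illustration):
using System.Data.SqlClient;

public class SchemaUpgrader
{
    private readonly string connectionString;

    public SchemaUpgrader(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Runs the script only if the stored version is below 'version',
    // then records the new version, so each upgrade executes exactly once.
    public void RegisterUpgrade(int version, string sql)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();

            int current = (int)new SqlCommand(
                "SELECT ISNULL(MAX(Version), 0) FROM SchemaVersion", conn).ExecuteScalar();

            if (current >= version)
            {
                return; // already applied
            }

            using (SqlTransaction tx = conn.BeginTransaction())
            {
                new SqlCommand(sql, conn, tx).ExecuteNonQuery();
                new SqlCommand(
                    "INSERT INTO SchemaVersion (Version) VALUES (" + version + ")",
                    conn, tx).ExecuteNonQuery();
                tx.Commit();
            }
        }
    }
}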
A: I've used the following database project structure in Visual Studio for several projects and it's worked pretty well:
Database
  Change Scripts
    0.PreDeploy.sql
    1.SchemaChanges.sql
    2.DataChanges.sql
    3.Permissions.sql
  Create Scripts
    Sprocs
    Functions
    Views
Our build system then updates the database from one version to the next by executing the scripts in the following order:
0.PreDeploy.sql
1.SchemaChanges.sql
Contents of Create Scripts folder
2.DataChanges.sql
3.Permissions.sql
Each developer checks in their changes for a particular bug/feature by appending their code onto the end of each file. Once a major version is complete and branched in source control, the contents of the .sql files in the Change Scripts folder are deleted.
A: I create folders named after the build versions and put upgrade and downgrade scripts in there. For example, you could have the following folders: 1.0.0, 1.0.1 and 1.0.2. Each one contains the script that allows you to upgrade or downgrade your database between versions.
Should a client or customer call you with a problem with version 1.0.1 and you are using 1.0.2, bringing the database back to his version will not be a problem.
In your database, create a table called "schema" where you put in the current version of the database. Then writing a program that can upgrade or downgrade your database for you is easy.
Just like Joey said, if you are in a Rails world, use Migrations. :)
A: For my current PHP project we use the idea of rails migrations and we have a migrations directory in which we keep files title "migration_XX.sql" where XX is the number of the migration. Currently these files are created by hand as updates are made, but their creation could be easily modified.
Then we have a script called "Migration_watcher" which, as we are in pre-alpha, currently runs on every page load and checks whether there is a new migration_XX.sql file where XX is larger than the current migration version. If so it runs all migration_XX.sql files up to the largest number against the database and voila! schema changes are automated.
If you require the ability to revert the system would require a lot of tweaking, but it's simple and has been working very well for our fairly small team thus far.
A: I would recommend using Ant (cross platform) for the "scripting" side (since it can practically talk to any db out there via jdbc) and Subversion for the source repository.
Ant will allow you to "back up" your db to local files, before making changes.
*
*backup existing db schema to file via Ant
*version control to Subversion repository via Ant
*send new sql statements to db via Ant
A: Toad for MySQL has a function called schema compare that allows you to synchronise 2 databases. It is the best tool I have used so far.
A: I like the way Yii handles database migrations. A migration is basically a PHP script implementing CDbMigration. CDbMigration defines an up method that contains the migration logic. It is also possible to implement a down method to support reversal of the migration. Alternatively, safeUp or safeDown can be used to make sure that the migration is done in the context of a transaction.
Yii's command-line tool yiic contains support to create and execute migrations. Migrations can be applied or reversed, either one by one or in a batch. Creating a migration results in code for a PHP class implementing CDbMigration, uniquely named based on a timestamp and a migration name specified by the user. All migrations that have been previously applied to the database are stored in a migration table.
For more information see the Database Migration article from the manual.
A: Try db-deploy - mainly a Java tool but works with php as well.
*
*http://dbdeploy.com/
*http://davedevelopment.co.uk/2008/04/14/how-to-simple-database-migrations-with-phing-and-dbdeploy.html
A: IMHO migrations do have a huge problem:
Upgrading from one version to another works fine, but doing a fresh install of a given version might take forever if you have hundreds of tables and a long history of changes (like we do).
Running the whole history of deltas since the baseline up to the current version (for hundreds of customers databases) might take a very long time.
A: My team scripts out all database changes, and commits those scripts to SVN, along with each release of the application. This allows for incremental changes of the database, without losing any data.
To go from one release to the next, you just need to run the set of change scripts, and your database is up-to-date, and you've still got all your data. It may not be the easiest method, but it definitely is effective.
A: The issue here is really making it easy for developers to script their own local changes into source control to share with the team. I've faced this problem for many years, and was inspired by the functionality of Visual Studio for Database professionals. If you want an open-source tool with the same features, try this: http://dbsourcetools.codeplex.com/
Have fun,
- Nathan.
A: If you are still looking for solutions: we are proposing a tool called neXtep designer. It is a database development environment with which you can put your whole database under version control. You work on a version-controlled repository where every change can be tracked.
When you need to release an update, you can commit your components and the product will automatically generate the SQL upgrade script from the previous version. Of course, you can generate this SQL from any 2 versions.
Then you have many options: you can take those scripts and put them in your SVN with your app code so that it'll be deployed by your existing mechanism. Another option is to use the delivery mechanism of neXtep: scripts are exported in something called a "delivery package" (SQL scripts + XML descriptor), and an installer can understand this package and deploy it to a target server while ensuring structural consistency, dependency checks, registering the installed version, etc.
The product is GPL and is based on Eclipse, so it runs on Linux, Mac and Windows. It also supports Oracle, MySQL and PostgreSQL at the moment (DB2 support is on the way). Have a look at the wiki where you will find more detailed information:
http://www.nextep-softwares.com/wiki
A: There is a command-line mysql-diff tool that compares database schemas, where schema can be a live database or SQL script on disk. It is good for the most schema migration tasks.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "139"
} |
Q: Can I logically reorder columns in a table? If I'm adding a column to a table in Microsoft SQL Server, can I control where the column is displayed logically in queries?
I don't want to mess with the physical layout of columns on disk, but I would like to logically group columns together when possible so that tools like SQL Server Management Studio list the contents of the table in a convenient way.
I know that I can do this through SQL Management Studio by going into their "design" mode for tables and dragging the order of columns around, but I'd like to be able to do it in raw SQL so that I can perform the ordering scripted from the command line.
A: You cannot do this programmatically (in a safe way, that is) without creating a new table.
What Enterprise Manager does when you commit a reordering is to create a new table, move the data and then delete the old table and rename the new table to the existing name.
If you want your columns in a particular order/grouping without altering their physical order, you can create a view which can be whatever you desire.
A: When Management Studio does it, it's creating a temporary table, copying everything across, dropping your original table and renaming the temporary table. There's no simple equivalent T-SQL statement.
If you don't fancy doing that, you could always create a view of the table with the columns in the order you'd like and use that?
Edit: beaten!
A: If I understand your question, you want to affect what columns are returned first, second, third, etc in existing queries, right?
If all of your queries are written with SELECT * FROM TABLE - then they will show up in the output as they are laid out in SQL.
If your queries are written with SELECT Field1, Field2 FROM TABLE - then the order they are laid out in SQL does not matter.
A: I think what everyone here is missing is that although not everyone has to deal with tens, hundreds, or thousands of instances of the same software system installed throughout the country and world, those of us who design commercially sold software do. As a result, we expand systems over time, adding fields as new capabilities are needed, and once those fields are identified as belonging in an existing table, we add them there. After a decade of expanding, growing, and adding fields to tables, and then having to work with those tables from design, to support, to sometimes digging into raw data to troubleshoot bugs in new functionality, it is incredibly aggravating not to have the primary information you want to see within the first handful of fields, when you may have tables with 30, 40, 50, or even 90 fields, and yes, in a strictly normalized database.
I've often wished I could do this, for this exact reason. But short of doing exactly what SQL does, building a create script for a new table the way I want it, writing the insert to it, then dropping all existing constraints, relationships, keys, indexes, etc. from the existing table, renaming the "new" table back to the old name, and then re-adding all those keys, relationships, indexes, etc. ...
It's not only tedious and time-consuming, but... in five more years, it will need to happen again.
It's so close to being worth that massive amount of work; however, the point is, it won't be the last time we need this ability, since our systems will continue to grow, expand, and get fields in a wacky order driven by need/design additions.
A majority of developers think from a single-system standpoint that serves a single company or a very specific, hard-boxed market.
The "off-the-shelf" but significantly progressive designers and leaders of development in their market space will always have to deal with this problem, over and over, and would love a creative solution if anyone has one. This could easily save my company a dozen hours a week, just not having to scroll over, or remember where "that" field is in the source data table.
A: There is one way, but it's only temporary, for the query itself. For example,
let's say you have a table with 5 columns.
The table is called T_Testing, with columns:
FirstName, LastName, PhoneNumber, Email, and Member_ID.
You want it to list their ID, then LastName, then FirstName, then Phone, then Email.
You can do it as per the Select.
Select Member_ID, LastName, FirstName, PhoneNumber, Email
From T_Testing
Other than that, if you just want the LastName to Show before first name for some reason, you can do it also as follows:
Select LastName, *
From T_Testing
The only thing you want to be sure of is that the WHERE or ORDER BY clause refers to the column as Table.Column if you are going to be using a WHERE or ORDER BY.
Example:
Select LastName, *
From T_Testing
Order By T_Testing.LastName Desc
I hope this helps, I figured it out because I needed to do this myself.
A: *
*Script your existing table to a query window.
*Run this script against a Test database (remove the Use statement)
*Use SSMS to make the column changes you need
*Click Generate Change Script (leftmost and bottommost icon on the button bar, by default)
All the script really does is create a second table table with the desired column orders, copies all your data into it, drops the original table and then renames the secondary table to take its place. This does save you writing it yourself though should you want a deploy script.
A: It is not possible to change the order of the columns without recreating the whole table. If you have a few instances of the database only, you can use SSMS for this (Select the table and click "design").
In case you have too many instances for a manual process, you should try this script:
https://github.com/Epaminaidos/reorder-columns
A: It can be done using SQL, by modifying the system tables directly. For example, look here:
Alter table - Add new column in between
However, I would not recommend playing with system tables, unless it's absolutely necessary.
A: *
*Open your table in SSMS in design mode:
*Reorder your columns:
It is important to not save your change.
*Click the "Generate Change Script" button:
*Now a window will open that contains the script to apply this change:
Copy the text from the window.
In this instance, it generated the following code:
/* To prevent any potential data loss issues, you should review this script in detail before running it outside the context of the database designer.*/
BEGIN TRANSACTION
SET QUOTED_IDENTIFIER ON
SET ARITHABORT ON
SET NUMERIC_ROUNDABORT OFF
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
COMMIT
BEGIN TRANSACTION
GO
CREATE TABLE dbo.Tmp_MyTable
(
Id int NOT NULL,
Name nvarchar(30) NULL,
Country nvarchar(50) NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE dbo.Tmp_MyTable SET (LOCK_ESCALATION = TABLE)
GO
IF EXISTS(SELECT * FROM dbo.MyTable)
EXEC('INSERT INTO dbo.Tmp_MyTable (Id, Name, Country)
SELECT Id, Name, Country FROM dbo.MyTable WITH (HOLDLOCK TABLOCKX)')
GO
DROP TABLE dbo.MyTable
GO
EXECUTE sp_rename N'dbo.Tmp_MyTable', N'MyTable', 'OBJECT'
GO
COMMIT
As you can see, what it does is 1) create a new temporary table, 2) copy the data over to the temporary table, 3) delete the original table and 4) rename the temporary table to the original table's name.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "101"
} |
Q: How can I modify .xfdl files? (Update #1) The .XFDL file extension identifies XFDL Formatted Document files. These belong to the XML-based document and template formatting standard. This format is exactly like the XML file format; however, it contains a level of encryption for use in secure communications.
I know how to view XFDL files using a file viewer I found here. I can also modify and save these files by doing File:Save/Save As. I'd like, however, to modify these files on the fly. Any suggestions? Is this even possible?
Update #1: I have now successfully decoded and unzipped a .xfdl into an XML file which I can then edit. Now, I am looking for a way to re-encode the modified XML file back into base64-gzip (using Ruby or the command line)
A: If the encoding is base64 then this is the solution I've stumbled upon on the web:
"Decoding XDFL files saved with 'encoding=base64'.
Files saved with:
application/vnd.xfdl;content-encoding="base64-gzip"
are simple base64-encoded gzip files. They can be easily restored to XML by first decoding and then unzipping them. This can be done as follows on Ubuntu:
sudo apt-get install uudeview
uudeview -i yourform.xfdl
gunzip -S "" < UNKNOWN.001 > yourform-unpacked.xfdl
The first command will install uudeview, a package that can decode base64, among others. You can skip this step once it is installed.
Assuming your form is saved as 'yourform.xfdl', the uudeview command will decode the contents as 'UNKNOWN.001', since the xfdl file doesn't contain a file name. The '-i' option makes uudeview uninteractive, remove that option for more control.
The last command gunzips the decoded file into a file named 'yourform-unpacked.xfdl'.
Another possible solution - here
Side Note: Block quoted < code > doesn't work for long strings of code
A: The only answer I can think of right now is - read the manual for uudeview.
As much as I would like to help you, I am not an expert in this area, so you'll have to wait for someone more knowledgeable to come down here and help you.
Meanwhile I can give you links to some documents that might help you:
*
*UUDeview Home Page
*Using XDFLengine
*Gettting started with the XDFL Engine
Sorry if this doesn't help you.
A: You don't have to get out of Ruby to do this, can use the Base64 module in Ruby to encode the document like this:
irb(main):005:0> require 'base64'
=> true
irb(main):007:0> Base64.encode64("Hello World")
=> "SGVsbG8gV29ybGQ=\n"
irb(main):008:0> Base64.decode64("SGVsbG8gV29ybGQ=\n")
=> "Hello World"
And you can call gzip/gunzip using Kernel#system:
system("gzip foo.something")
system("gunzip foo.something.gz")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Folders or Projects in a Visual Studio Solution? When splitting a solution into logical layers, when is it best to use a separate project over just grouping by a folder?
A: denny wrote:
I personally feel that if reusable code is split into projects it is simpler to use other places than if it is just in folders.
I really agree with this - if you can reuse it, it should be in a separate project. With that said, it's also very difficult to reuse effectively :)
Here at SO, we've tried to be very simple with three projects:
*
*MVC Web project (which does a nice job of separating your layers into folders by default)
*Database project for source control of our DB
*Unit tests against MVC models/controllers
I can't speak for everyone, but I'm happy with how simple we've kept it - really speeds the builds along!
A: Separating features into projects is often a YAGNI architecture optimization. How often have you reused those separate projects, really? If it's not a frequent occurrence, you're complicating your development, build, deployment, and maintenance for theoretical reuse.
I much prefer separating into folders (using appropriate namespaces) and refactoring to separate projects when you've got a real-life reuse use case.
A: I usually do a project for the GUI a project for the business logic a project for data access and a project for unit tests.
But sometimes it is prudent to have separation based upon services (if you are using a service oriented architecture) Such as Authentication, Sales, etc.
I guess the rule of thumb that I work off of is that if you can see it as a component that has a clear separation of concerns then a different project could be prudent. But I would think that folders versus projects could just be a preference or philosophy.
I personally feel that if reusable code is split into projects it is simpler to use other places than if it is just in folders.
A: By default, always just create new folder within the same project
*
*You will get single assembly (without additional ILMerge gymnastic)
*Easier to obfuscate (because you will have less public types and methods, ideally none at all)
Separating your source code into multiple projects makes only sense if you...
*
*Have some portions of the source code that are part of the project but not deployable by default or at all (unit tests, extra plugins etc.)
*More developers involved and you want to treat their work as consumable black box. (not very recommended)
*If you can clearly separate your project into isolated layers/modules and you want to make sure that they can't cross-consume internal members. (also not recommended because you will need to decide which aspect is the most important)
If you think that some portions of your source code could be reusable, still don't create it as a new project. Just wait until you will really want to reuse it in another solution and isolate it out of original project as needed. Programming is not a lego, reusing is usually very difficult and often won't happen as planned.
A: I really think it is better to split the project as well, but it all depends on the size of the project and the number of people working on it.
For larger projects, I have a projects for
*
*data access (models)
*services
*front end
*tests
I got the model from Rob Connery and his storefront application... seems to work really well.
mvc-storefront
A:
Separating your source code into multiple projects makes only sense if you... More developers involved and you want to treat their work as consumable black box. (not very recommended) ...
Why isn't this recommended? I've found it a very useful way to manage an application with several devs working on different portions. Makes checkins much easier, mainly by virtually eliminating merges. Very rarely will two devs have to work on the same project at the same time.
A: If you do go for creating several projects, make sure everyone who adds code to the solution is fully aware of the intention of them and do everything you can to get them to understand the dependencies between the projects. If you have ever tried to sort out the mess when someone has gone and added references that shouldn't have been there and got away with it for weeks you will understand this point
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
} |
Q: XML Editing/Viewing Software What software is recommended for working with and editing large XML schemas? I'm looking for both Windows and Linux software (doesn't have to be cross platform, just want suggestions for both) that help with dealing with huge XML files.
A: I may be old fashioned, but I prefer my text editor.
I use emacs, and it has a fairly decent xml mode.
Most good text editors will have decent syntax highlighting and tag matching facilities. Your IDE might already do it (IntelliJ IDEA does, and I believe Eclipse does as well). Good text editors will be able to deal with huge files, but some text editors may not be able to handle them. How big are we talking about?
A: I agree that your text editor is probably your best bet. I do know some people who swear by XMLSpy, if you need something that's tailored specifically for dealing with XML files in a visual way. I bet you could find some F/OSS work-alikes but I'm not aware of any.
A: FirstObject XML Editor. http://www.firstobject.com/dn_editor.htm
Its free, written in C++, optimized for working with very large xml files.
While it is relatively limited in functionality, it can load 100MB+ unformatted files in seconds, indent them and locate specific elements using the tree view. By using the 'Refresh' option you can also synchronise the tree with the text view.
It's in the UNIX spirit of having a simple tool doing a specific job very well.
A: You need at least a decent text editor as a baseline, emacs with nxml mode as mentioned before is a very good choice. However as the schema becomes larger and larger you may lose the overview, especially when you author an XML Schema document which can be very verbose. You'll need some sort of visualization: XML Spy is ok, Oxygen is great but expensive, but as it turns out, on Windows, you have almost all needed features in XMLPad which is freeware.
When you start editing instance XML documents (and even editing XML Schemas) you need on the fly validation against a schema and if possible auto-completion of attributes and elements. Emacs only supports on the fly validation and auto-completion with a relax NG based schema (but any XSD can be converted to a relax NG schema).
If you have any choice in the matter, consider using Relax NG as your schema syntax, it is much more readable and maintainable.
A: I work a lot with XML, and have found Oxygen to be a great editor. It's cross-platform and has a graphical schema editor, but since I use DTDs and not schemas, I can't vouch for the schema editor's quality. The rest of the editing package (such as the XML editor and XSLT debugger) is solid, so it could be worth a try.
A: Altova's XMLSpy is probably the best available. It offers different views of your data/schemas, XPath tools and produces good diagrams, among other things. It does cost quite a bit though. It's a mature product, so you don't tend to run into limitations as quickly as you do with some other tools.
Liquid XML is a pretty good, but relatively new alternative. It's a nice app to use and there's even a free version available! This is a tool worth keeping an eye on.
Both of these products have a handy feature which produces sample XML files based on your schema.
In contrast, Oracle's JDeveloper (based on Borland Jbuilder, I believe) tries to provide a decent schema editor, but falls short in that it sometimes produces invalid schema files. I stopped using it soon after noticing this.
I highly recommend checking out IBM's XML Schema Quality Checker. This command line tool validates your schema against WC3's XML Schema language. This is a good idea even if you've built your schema using another tool.
A: I use nxml-mode in GNU Emacs for editing XML, including very big files. And I have used it for a long time - it's quick, provides on-the-fly validation of XML, and provides completion functionality for tag and attribute names.
A: The oXygen XML Editor is a great IDE for Windows, though it's a bit expensive.
A: Altova's XML Spy is a great editor, but not necessarily the cheapest option out there.
A: I highly recommend Stylus Studio if you have any need for a long term broadly capable XML IDE. I've used it mostly for XSLT development but it supports development of almost everything XML related you would want to do. It's Windows only (very annoying).
A: I am using Cooktop (also available on tucows), and I'm very happy about XPath testing feature.
*
*Cooktop is an editor and development environment for XML, DTD, and XSLT documents
*Cooktop is a Windows application
*Best of all, it's free!
Features
*
*Color-coded XML, DTD, and XSLT editing
*Check well-formedness and validate
*Stylesheet testing with almost any XSLT engine
*XPATH testing
*Customizable "Code Bits" library
*XML formatting via Tidy
*Small download, small footprint
A: Open source XML editors examined - it is a little bit outdated though.
A: +1 for XML Spy, I've used both the stand alone product and the visual studio plugin, and I've been impressed.
In terms of FOSS, I use Notepad++
A: Recently I was editing XSLT files with Eclipse but for some reason Eclipse wouldn't do any auto-completion anymore. So I switched to Emacs's brilliant nxml-mode, and I'm not sure I'm going back. You get auto-completion that's really easy to use, and it's very fast. The only glitch is that you must provide a RELAX NG version of your document's schema, but there are tools out there that generate one for you from your DTD or Schema.
Check out http://www.xmlhack.com/read.php?item=2061 for more.
For non-free software, I second the recommendations for OxygenXML.
A: For Windows, I found Microsoft's own free XML Notepad to be a great simple to use editor with a nice selection of features. Used it for both reviewing my XML output when developing and editing broken iTunes' libraries. ;)
Requires .NET 2.0.
A: I use Notepad++ as my editor. You can also add plugins for dealing with XML specifically.
A: XML Copy Editor - Windows and Linux
Fast, free, and supports XML schema validation.
Official Website
http://xml-copy-editor.sourceforge.net/
How to install in Ubuntu
http://ubuntuforums.org/showthread.php?t=1640003
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: What good technology podcasts are out there? Yes, Podcasts, those nice little Audiobooks I can listen to on the way to work. With the current amount of Podcasts, it's like searching a needle in a haystack, except that the haystack happens to be the Internet and is filled with too many of these "Hot new Gadgets" stuff :(
Now, even though I am mainly a .NET developer nowadays, does anyone know of some good podcasts from people regarding the whole software lifecycle? Unit Testing, Continuous Integration, Documentation, Deployment...
So - what are you guys and gals listening to?
Please note that the categorizations are somewhat subjective and may not be 100% accurate as many podcasts cover several areas. Categorization is made against what is considered the "main" area.
General Software Engineering / Productivity
*
*Stack Overflow (inactive, but still a good listen)
*TekPub (Requires Paid Subscription)
*Software Engineering Radio
*43 Folders
*Perspectives
*Dr. Dobb's (now a video feed)
*The Pragmatic Podcast (Inactive)
*IT Matters
*Agile Toolkit Podcast
*The Stack Trace (Inactive)
*Parleys
*Techzing
*The Startup Success Podcast
*Berkeley CS class lectures
*FLOSS Weekly
*This Developer's Life
.NET / Visual Studio / Microsoft
*
*Herding Code
*Hanselminutes
*.NET Rocks!
*Deep Fried Bytes
*Alt.Net Podcast (inactive)
*Polymorphic Podcast (inconsistent)
*Sparkling Client (The Silverlight Podcast)
*dnrTV!
*Spaghetti Code
*ASP.NET Podcast
*Channel 9
*Radio TFS
*PowerScripting Podcast
*The Thirsty Developer
*Elegant Code (inactive)
*ConnectedShow
*Crafty Coders
*Coding QA
jQuery
*
*yayQuery
*The official jQuery podcast
Java / Groovy
*
*The Java Posse
*Grails Podcast
*Java Technology Insider
*Basement Coders
Ruby / Rails
*
*Railscasts
*Rails Envy
*The Ruby on Rails Podcast
*Rubiverse
*Ruby5
Web Design / JavaScript / Ajax
*
*WebDevRadio
*Boagworld
*The Rissington podcast
*Ajaxian
*YUI Theater
Unix / Linux / Mac / iPhone
*
*Mac Developer Network
*Hacker Public Radio
*Linux Outlaws
*Mac OS Ken
*LugRadio Linux radio show (Inactive)
*The Linux Action Show!
*Linux Kernel Mailing List (LKML) Summary Podcast
*Stanford's iPhone programming class
*Advanced iPhone Development Course - Madison Area Technical College
*WWDC 2010 Session Videos (requires Apple Developer registration)
System Administration, Security or Infrastructure
*
*RunAs Radio
*Security Now!
*Crypto-Gram Security Podcast
*Hak5
*VMWare VMTN
*Windows Weekly
*PaulDotCom Security
*The Register - Semi-Coherent Computing
*FeatherCast
General Tech / Business
*
*Tekzilla
*This Week in Tech
*The Guardian Tech Weekly
*PCMag Radio Podcast (Inactive)
*Entrepreneurship Corner
*Manager Tools
Other / Misc. / Podcast Networks
*
*IT Conversations
*Retrobits Podcast
*No Agenda Netcast
*Cranky Geeks
*The Command Line
*Freelance Radio
*IBM developerWorks
*The Register - Open Season
*Drunk and Retired
*Technometria
*Sod This
*Radio4Nerds
*Hacker Medley
A: I like
General Software
*
*Stackoverflow (perhaps too obvious)
*Deep Fried Bytes
*Hanselminutes
*Software Engineering Radio (via Brenden)
*Herding Code
Dot Net
*
*Alt.NET Podcast
*Polymorphic Podcast
Productivity
*
*43 Folders
A: Also make sure you don't miss the dnrTV webcast show that Carl Franklin (the man behind .NET Rocks!) publishes. Even though it's not a podcast and requires a bit more attention while watching, it's really informative, and if you're into .NET and Microsoft-related techniques you'll learn a lot.
A: I can second Jon Galloway's mention of Herding Code, and since I have absolutely nothing to do with the podcast, with nothing to gain, my opinion may be more valuable than his :-).
There are only a few there as it's relatively new, but they are jam packed with good stuff that is very relevant to today's programming paradigms and strategies.
I also love the smooth format they've got going since 4 guys all giving input on a topic can make for a very jerky conversation with all (most?) of them dialed in, but whether it's the post editing or just a good format, either way it comes across as a very comfortable listening experience to the end user. Keep it up guys!
Hope that helps,
Rob G
A: It does not seem like this one was mentioned yet.
http://thecommandline.net/ --
"Exploring the rough edges where technology, society and public policy meet."
He does a weekly News show and a weekly topics show.
From the website,
Endorsement:
"Thoughtful, informative, and deep, a real plunge into the geeky end of the news-pool. There's great analysis and rumination, as well as detailed explanations of important security issues with common OSes and so on." -- Cory Doctorow
A: Besides Stack Overflow of course, here are mine.
*
*Many have already mentioned Hanselminutes.
*Some have already mentioned .NET Rocks!
*Not quite as many have mentioned RunAs Radio.
I can't believe the size of some of these lists. With podcasts, I like to keep the list short and the quality high. As such, I tend to skip the aggregates like ITConversations et. al.
A: Not hardcore technology but I really enjoy Drunk and Retired. It's like you're talking to your programmer buddy mixed in with life stuff.
A: Extending on what Mike Powell has to say, I am actually a big fan of almost all of the podcasts at http://www.twit.tv. Most of the content is watered down a bit, but some of the speakers are top notch thinkers - especially on "This Week in Tech", the flagship program.
Oh - and Car Talk on NPR but those guys hardly EVER get into the SDLC!
A: *
*FLOSS Weekly
*
*Pragmatic Podcasts
*Rails Envy
*Webdev Radio
A: If you started out on an 8 bit machine, don't forget your roots:
The Retrobits Podcast
A: A good weekly update to the Ruby on Rails world: Rails Envy.
The Stack Trace is a good general programming podcast, which covers everything from git to Scala.
A: My list:
Hanselminutes
.NET Rocks!
Herding Code
Deep Fried Bytes
Spaghetti Code
The Sparkling Client
Plumbers @ Work
Polymorphic Podcast
ALT.NET Podcast
ASP.NET Podcast
Radio TFS
PowerScripting Podcast
Software Engineering Radio
stackoverflow Podcast
The Thirsty Developer
ThoughtWorks - IT Matters Podcast
Agile Toolkit Podcast
Ajaxian Podcast
Pragmatic Podcasts
Channel 9 Audio Feed
EDIT: Missed one:
Elegant Code Cast
A: This one's not specifically about development, but Security Now from Steve Gibson and Leo Laporte is an excellent discussion of security issues. I think it's a must-listen for just about any computer user who's concerned about security, and especially for web developers who are responsible both for the security of their site and at least partially responsible for their users' security.
A: If you're interested in Linux, Linux Action Show is a wonderful podcast !
It's about Linux news, distributions and software releases, and also Linux-based hardware testing (like Drobo, the Amazon Kindle and so on).
It's very good quality and the hosts, Brian and Chris, sound great.
It's my number one podcast !
Also, I've just discovered that IBM offers some developer podcasts which seems very interesting, some are from Erich Gamma by the way. Of course, it's a little bit more Java and Eclipse oriented (It's IBM).
A: http://herdingcode.com/
A: I love FLOSS Weekly. Another Twit Podcast where Leo and Randal Schwartz interview open source geeks. My favorite was their interview with Dan Ingalls (Smalltalk/Squeak fame). I also enjoyed their interview of Richard Hipp (SQLite).
A: Plus one for the following:
*
*The Java Posse
*Software Engineering Radio
*The Grails Podcast
A: My favorite is Manager Tools. Technically it is a business podcast, but very valuable for programmers or other individual contributors working in corporate environments. Been listening for 3 years, new to StackOverflow
-- Mike
A: Linux Outlaws are pretty good. They discuss GNU/Linux distros, software and IT news.
A: Am I going to be downmodded for suggesting that the Stack Overflow podcast is hilariously bad as a podcast? Anywho, you can find it, and a number of not-bad podcasts at
itconversations.com.
As this question asked for a "good" rather than "exhaustive" list, then this is obviously just my opinion. My opinion bounces between .NET and Java and just geek. And obvious omissions would reflect my opinion on "good". (Ahem, DNR.)
The rest of these are easily found by doing a podcast search in iTunes, or just googling (I'll do some repeating here to condense the list):
*
*Buzz Out Loud (General Consumer Tech, Daily)
*This Week in Tech (aka TWiT. Weekly Consumer Tech.)
*The Java Posse (Weekly.)
*Google Developer Podcast (which went long fallow, but seems to be coming back, possibly renamed as the Google Code Review. Schedule uncertain, technologies vary.)
*Hanselminutes (Usually, but not always, .NET-related)
*MacBreak Weekly (The Mac version of TWiT)
*Polymorphic Podcast (All .NET, usually ASP.NET)
*Pixel8ed (All .NET, focused on UI. Same guy who does Polymorphic Podcast)
*tech5 (Consumer Tech. Mostly a fun waste of 5 minutes because Dvorak is so... Spolsky.)
A: Elegant Code Cast
A: In the Stack Overflow podcast SE-radio was mentioned. It's pretty in depth.
Also if you are an aspiring JavaScript developer, the Douglas Crockford "The JavaScript Programming Language" and "Advanced JavaScript" talks on YUI Developer Theatre are excellent. There are a few other gems on the podcast too.
A: I listen to the javaposse regularly, they cover mostly Java, but not solely.
A: *
*JavaPosse If you want to hear all that you (n)ever wanted to know about closures (7/2010 - This is actually a good podcast, but now it's all you (n)ever wanted to know about apple & android)
*.NET Rocks For when you want to hear the billionth interview about databinding controls in the trenches during the transition from VB6 to VB.NET
*Stack Overflow You really do want to hear a guy who doesn't know C debate a guy who pretends to have invented it, or something, or maybe just listen for spoilers to WALL-E
*Security Now! You want to listen to someone who thinks he's the most ingenious security architect in the world, because he writes EVERYTHING IN ASSEMBLER (no, I'm not kidding), while overlooking the obvious solutions to problems that have existed for years. Please don't listen to this thinking it's good
*Yahoo! Dev Network - I haven't seen a lot of good stuff here, but Crockford's talks on advanced JavaScript are wonderful
A: Suggestion: If you post each of your recommended podcasts as a separate answer then people can vote for your "answer".
BTW, Joel discussed this on the Stack Overflow Podcast (can't find the reference in the transcript Wiki) and suggested something like:
- Post your suggested "favorite" (tech podcast, in this case) as a question: "Do you like < > podcast?" and tag it with "technology podcast".
The beauty of this is that we get a simple poll. Yes, it would be nice to actually have a poll but that's not yet a Stack Overflow feature.
A: I took all of the podcasts from the answers scoring 5 or better (and those in the original question) and added them to an aggregated page on Cullect.com:
http://www.cullect.com/StackOverflow-Recommended-Podcasts
It provides a handy way to get a glimpse of these podcasts as well as a way to preview them if you're in a hurry or don't want to wade through all of the duplicates in the answers. I'm currently set up as the only curator of the "cullection", but if someone else wants to help keep it adjusted as the answers change, let me know.
A: I am pretty much hooked to:
*
*The Java Posse
*Software Engineering Radio
A: I'll add Crypto-Gram Security Podcast. Basically, Dan Henage reading Bruce Schneier newsletter Crypto-Gram.
Most of the other podcasts I listen to have been mentioned (TwiT, Security Now!, Cranky Geeks).
my 2c
A: Technometria
A: Cranky Geeks
A: Here is my list:
*
*The Java Posse - a must for Java programmers
*FLOSS Weekly - On open source projects, usually very geekish, but Leo Laporte is very funny
*Stack Overflow - I believe most of the readers know what I'm talking about...
*Software Engineering Radio - Very interesting interviews on various aspects
*FeatherCast - Seldom updated, interviews on Apache projects
*Parleys.com - Lectures served as podcast, highly biased to the Spring project
Not strictly technical, but highly recommended
*
*Entrepreneurial Thought Leaders - Lectures from people who built companies or other endeavours. Very interesting.
A: The MDN Show for General Macintosh business topics.
cocoaFusion and coreInt for in-depth Cocoa topics.
A: I recently stumbled across a new podcast named Hacker Medley. It's a short (~15 min) podcast with Nat Friedman and Alex Gravely. I found the first 3 episodes quite entertaining!
A: The Google Developer Podcast is good.
A: Brad's list is pretty good. I also listen to:
*
*Sparkling Client (Silverlight specific)
*Jon Udell's Perspectives series
*Herding Code (shameless plug for a podcast I put on with Kevin Dente, Scott "lazycoder" Koon, and K. Scott Allen). We recently interviewed Jeff Atwood about Stack Overflow, discussing both how the site is designed and the technology behind it.
A: The way I understand the question, you are asking for developer centric podcast. My personal number one is Late Night Cocoa from the Mac Developer Network followed by Mac Developer Roundtable. Although I agree that every developer should probably listen to Steve Gibson's Security Now! (with Leo Laporte's TWiT network).
For general tech stuff, check out other TWiT podcasts: This Week in Tech, MacBreak Weekly, MacBreak Tech (with PixelCorps), Windows Weekly and FLOSS Weekly
On a side note: relevant to some developers who think about becoming a Micro-ISV in the Apple Universe: MacSB - Mac Software Business
A: I found this on a similar discussion, I think it was at Reddit:
UC Berkeley Webcast
I found it most useful, since it podcasts entire classes from Berkeley courses such as Operating Systems and System Programming, The Structure and Interpretation of Computer Programs, and Data Structures and Programming Methodology, among others.
A: Almost all of my favorite podcasts have already been mentioned but not the No 1. Do yourself a favor and listen to the best podcast ever, Linux radio show - LugRadio.
A: If you are into web design and website creation then I recommend Boagworld and also The Rissington podcast even if you are not.
A: The Stack Overflow podcast is the reason I'm now here. Jeff, unfortunately, is a poor project manager in terms of managing expectations and setting timelines -- yet the beta has arrived, and it's pretty decent! The .NET world is alien to me, so I've enjoyed the Stack Overflow podcast.
This Week in Tech is another podcast I listen to regularly. Unfortunately, I feel that none of the panelists other than Leo Laporte does any homework prior to the show, so many of the opinions (especially John C. Dvorak's) are uninformed.
I recently started listening to IT Conversations podcasts, and I got enough good information that I donated. The selection is mixed, but I really like talks from various conferences that I was unable to attend.
Thanks to other people who responded with links to other podcasts I haven't heard of. I'm a newbie, so I can't bump up scores yet.
A: This isn't necessarily something you can pop on your iPod and just chill to, but Diggnation is a hilarious video podcast with Kevin Rose and Alex Albrecht.
They talk about "some of the top stories on the user-submitted news site digg.com". This doesn't really have much in the way of software development (though sometimes a story pops up with that), but is great for entertainment value.
A: Not a technology podcast, but I really have to mention FreelanceRadio. A really great and sometimes hilarious resource. I'm listening to them in the morning, on the way to work. And sometimes feel really stupid just giggling by myself :P
A: I listen to The Guardian's TechWeekly, it's very informed for being done by journalists for a mainstream newspaper. Well produced and up to date. Has a focus on Britain and Europe.
A: My list is pretty similar to the rest -
TWIT, MBW, .NET Rocks, Hanselminutes, Polymorphic Podcast and, specifically for Mac developers, the Mac Developer Network has a couple of good podcasts
A: I do enjoy all the podcasts from the TWIT network, though FLOSS Weekly and Security Now are my favorite "techie" podcasts.
I actually have never heard the Stack Overflow podcast, but will definitely be giving it a try after seeing all the recommendations here.
Also, I believe that Alex Lindsay (of the Pixel Corps, and frequently on MacBreak Weekly on TWIT) will be starting a very technical podcast on Mac development. I'm looking forward to this, as I've been primarily a Java programmer, and am interested in learning Xcode and Obj-C.
A: My favourites are:
*
*Stack Overflow
*TWiT
*Security Now
I like listening to John C. Dvorak on TWiT, though I've never tried his other podcasts. He really knows his stuff and is frequently funny, but sometimes he's just an annoying old grump.
I used to listen to PaulDotCom Security Weekly, but they talk an awful lot about penetration testing and not so much about other aspects of computer security.
A: Stack Overflow
This Week in Tech
Security Now
As I learn more about programming I'll add more to my list.
Adding 43 folders now.
A: Top on my list are:
*
*Software Engineering Radio
*Java Posse
Sometimes I also listen to:
The ASP.NET podcast
I keep an eye on iTunes U as some courses have the perfect price (free) from top-notch Universities around the world. E.g. Computer Language Engineering from MIT.
A: It's worth subscribing to the Google Tech Talk YouTube channel. It's a video podcast with a bunch of really interesting, wide-ranging talks given at Google by (usually) outside speakers.
Past presenters include Linus Torvalds, Guido van Rossum, Merlin Mann and Larry Wall. The video is usually just the slides so (depending on the speaker) you might not need to watch.
A: Java Technology Insider is what I found when I went looking for a Java equivalent of .NET Rocks! The interviewer is an enthusiastic amateur, and the guests are usually good.
A: I've just started listening to the irreverent Sod This podcast series, hosted by Gary Short and Oliver Sturm of DevExpress. They are fairly entertaining and mildly educational with a guest slot, slightly sweary though.
A: My favorites:
*
*thirsty developer
*pixel8d
*stackoverflow
*dotnetrocks
*alt.net podcast
*codecast
*hanselminutes
cheers
A: My top 3:
This Week in Tech (Leo Laporte, et. al)
Security Now (Leo Laporte + Steve Gibson)
Windows Weekly (Leo Laporte + Paul Thurrott)
A: The Connected Show covers new Microsoft Technologies and other interesting topics for the developer community.
A: LKML Summary podcast
A: I am the creator of Connected Show (http://www.ConnectedShow.com) and really want to thank this thread for posting us in the list. We are new and would love to get more listeners and more feedback.
A: Haven't seen the Security Catalyst mentioned for security. I used to prefer this over the one Leo Laporte does, back when I actually had time to listen to such things.
Boagworld was an ok one for basic web design/dev stuff.
A: PHP
*
*PHP Abstract
*ZendCon Sessions
A: Misfit Geek Podcast (formerly JoeOn.NET)
By Joe Stagner.
A: Don't forget The Flex Show.
A: I find the PC Pro Podcast a good weekly round up, which complements the monthly paper magazine in the UK, for PCs, Windows software and gadgets.
This Week in Testing can also be informative and fun, as a round up of blogs and opinions from the world of automated tests.
Some of the US based shows like the TWiT network are too advert heavy for my taste.
I also recently found the Ted Talks (in video, or audio only) on iTunes, quite a few of which are technical or speculative with a very high quality of speaker.
Edit: Added The Guardian's Tech Weekly
A: 37signals now has a podcast with Jason Fried and David Heinemeier Hansson (creator of Ruby on Rails)
A: Haven't found Code cast in your list.
A: Suggest someone with the reputation to do it revise this question to say, "What good technology podcasts are out there?"
I've got all kinds of audio fiction I could recommend, but then this question really runs off into the weeds.
A: All of the tech podcasts I listen to have been mentioned, but as long as we're discussing video I'd like to mention Hak.5. It is more focused on using existing programs rather than coding, but it has some good hardware segments, and it can often be an excellent source of inspiration.
A: I've been happy with Stack Overflow.
I listen to / watch a few others:
*
*No Agenda
*This Week In Tech
*Cranky Geeks
But the constant MS/Google/Apple/Yahoo fluff of these is getting really old.
I've listened to a couple of Hanselminutes episodes and might start listening more regularly.
I'd like to find some that deal with actual software engineering issues and not just "tech gossip".
A:
Brian Deacon wrote:
Dvorak is so... Spolsky.
I can't describe why, but I agree.
A: My favorites are:
*
*Hanselminutes
*.NET Rocks
*StackOverflow
*SoftwareEngeneeringRadio
TWiT and CrankyGeeks I listen to if I want a laugh or get mad, they are horrible.
A: It's not software, but I frequently watch the Tekzilla podcasts. Love me some Veronica Belmont / Patrick Norton!
Also, all of the others already mentioned - Stack Overflow, TWiT, etc.
A: *
*Podcasts from Dr.Dobbs Journal
*ITConversations
*SE-Radio
*Channel 9
A: Something I didn't see mentioned is PCMag Radio. That's a more consumer tech-oriented show, but they do geek out fairly often, and the chatter is always interesting.
A: In addition to many of the other great ones listed, here are a couple of others for specific technologies that I regularly listen to:
*
*This Week in Django
*VMware Communities Roundtable
A: Over this summer I've enjoyed:
*
*StackOverflow
*SERadio - sometimes this feels too enterprise-y for me, but it's definitely the most technical, and the European (German?) hosts are a hoot.
*Hanselminutes and DNR - some aspects of these shows get annoying, but they frequently have interesting guests talking about interesting things, which is where the money is.
I echo the sentiment about the difference between tech gossip (TWiT, Diggnation, etc) and software development podcasts; while the former can sometimes be entertaining, I've found they tend towards the audio equivalent of Digg rather than Hacker News, programming.reddit, or, hopefully, StackOverflow.
I'll be checking out the other suggestions people gave.
A: I enjoy both Security Now and Windows Weekly, both a part of the TWiT network. You may want to check out the TWiT network, since they have a variety of tech related podcasts.
Also, as seems common here, Hanselminutes is pretty good.
A: What a great bunch of answers - Now I've got a number of podcasts to add to my listening list!
My current list is StackOverflow, TWiT and Mac OS Ken.
I tried to get into SERadio a few months ago but couldn't really engage myself with the podcast - Great introductory material, but I felt a lot of the shows were a bit 'beginner-y'.
A: There's tons of tech podcasts.
Some that I've subscribed to:
Daily shows
*
*WebbAlert
*Loaded
Non-Dailys
*
*Cranky Geeks
*DL.TV
And as you can see, I am using Miro to get them. (which is a nice X-platform vodcast catcher :-) )
Cheers
A: I have subscribed to quite a few podcasts but the ones I try and listen to weekly are:
*
*Se-Radio
*Hanselminutes
*.NET Rocks
*Polymorphic Podcast
*RunAsRadio
I have a 35 minute commute to work each morning (bus) and I like watching the Channel 9 feed on my zune.
A: I listen to and watch:
* this week in tech
* Cranky Geeks
* Security Now
* This Week in Media
* Tech5
A: My list includes:
.NET Rocks!
RunAs Radio
TWiT
Stack Overflow (but then again, we wouldn't be in Beta if we didn't)
Channel 9
Hanselminutes
Pretty much the same as everybody else. Just goes to show you why podcasts are important to developing your art.
A: My most regular listens are:
*
*Java Posse
*Software Engineering Radio
*Stack Overflow
*Agile Toolkit Podcast (intermittent)
Also, if you haven't heard the OOPSLA 2007 podcasts (keynote/main sessions recorded and podcasted) they're definitely worth a listen, although it's a fairly short run.
A: Two others not mentioned yet are The Register's Open Season (about the Open Source industry) and Semi Coherent Computing (which loosely is about enterprise hardware).
I'm not sure if Open Season has any more legs left in it though, since Ashley Vance (the apparent 'driver' of the podcast) has recently left El Reg for The New York Times. That said, the past year's worth of episodes are great and include some notable guests.
A: I am going out on a limb here to say: don't get caught up in too many podcasts or blogs, but rather dive into technology/code and good tech books.
although +1 to;
*
*Thoughtworks - IT matters
*Software Engineering Radio
*Pragmatic podcasts
*Alt.Net podcasts
*Hanselminutes
and while not strictly technology
*
*Enterprise Thought Leaders from Stanford University, which often has speakers from fortune 500 and tech startups on how they made it.
A: My list includes:
Herding Code, Deep Fried Bytes, Polymorphic Podcast, Pixel8, .NET Rocks, Hanselminutes, PowerScripting Podcast. Full list: http://rtipton.wordpress.com/podcasts/
A: Many of the above, plus TED talks and Shareware Radio. Links here: http://successfulsoftware.net/category/podcasts/
A: My favorite has been the Stack Overflow podcast just because it is reality based. ALT.NET has good content. Software Engineering Radio and Hanselminutes are informative. ThoughtWorks is marginal for me.
I'll try the others!
A: I extract DNRTV's audio and listen to it as a podcast (or have it run as a video on my Archos media player and just listen to it).
I don't have time to watch it for an hour. Usually I can follow the discussion without watching the video.
A: Most of the podcasts I listened to are already discussed above.
*
*.NET Rocks
*HanselMinutes
*RunAsRadio
*Mondays (for when you are bored with development stuffs)
*Herding Code
*Arcast (used to)
*AudibleAjax
*OpenWeb
There are some bits from OOPSLA that were interesting as well (not long running podcasts, but it's nice to hear).
A: Hacker Public Radio is an excellent source of podcasts on a broad range of technical topics.
A: The Web 2.0 Show is a podcast about emerging technologies commonly referred to as "Web 2.0", and is hosted by Josh Owens and Adam Stacoviak.
A: Check out our new podcast at Crafty Coders. It covers programming topics (mostly .net, but also other languages and topics).
A: The SitePoint Podcast hosts are Patrick O’Keefe (@ifroggy), Stephan Segraves (@ssegraves), Brad Williams (@williamsba), and Kevin Yank (@sentience). (@%twitter username%)
A: Buzz Out Loud - CNET
A: I never miss the following :-
a) Hanselminutes
b) RunAsradio
c) The Thirsty Developers
d) DotnetRocks
e) DeepFriedBytes
f) Pixel8
A: I've been listening to Tanked Podcast. It's three friends that hang out and talk about tech, movies, video games, and the odd stuff that happens every week on the web. These guys are a blast and have way too much fun!
A: Thinkcode.tv
A: 5by5 Studios, hosted by Dan Benjamin, has some interesting podcasts. Some are specific to a technology (like ExpressionEngine and Ruby), some are focused on people (interviews), and others are discussion on the industry news.
*
*The Ruby Show née Rails Envy.
*The Pipeline, interview shows.
*The Dev Show, focused on web development, not just Ruby.
*The Conversation, discussion on current topics.
*The Big Web Show, more front-end oriented shows (like HTML5 and CSS3.)
Some (or all?) are recorded live. You can find them on iTunes as well.
A: Hack Radio
A: The OS News podcast is, unsurprisingly, the podcast for OS News. OS News is a site mostly dedicated to operating systems, but also covers a range of general technology, hardware and computing topics.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "526"
} |
Q: Learning to write a compiler Preferred languages: C/C++, Java, and Ruby.
I am looking for some helpful books/tutorials on how to write your own compiler simply for educational purposes. I am most familiar with C/C++, Java, and Ruby, so I prefer resources that involve one of those three, but any good resource is acceptable.
A:
"... Let's Build a Compiler ..."
I'd second http://compilers.iecc.com/crenshaw/ by @sasb. Forget buying more books for the moment.
Why? Tools & language.
The language required is Pascal and, if I remember correctly, it is based on Turbo-Pascal. It just so happens that if you go to http://www.freepascal.org/ and download the Pascal compiler, all the examples work straight from the page ~ http://www.freepascal.org/download.var The beaut thing about Free Pascal is you can use it on almost whatever processor or OS you care for.
Once you have mastered the lessons then try the more advanced "Dragon Book" ~ http://en.wikipedia.org/wiki/Dragon_book
A: An easy way to create a compiler is to use bison and flex (or similar), build a tree (AST) and generate code in C, with generating C code being the most important step. By generating C code, your language will automatically work on all platforms that have a C compiler.
Generating C code is as easy as generating HTML (just use print, or equivalent), which in turn is much easier than writing a C parser or HTML parser.
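To make that concrete, here is a minimal sketch in C of the "walk the tree and print C" idea. The AST layout and function names are my own invention for illustration; a real bison grammar would build something similar in its semantic actions.
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical AST for arithmetic expressions: leaves are numbers,
   inner nodes are binary operators. */
typedef struct Node {
    char op;                 /* '+', '-', '*', '/' or 0 for a literal */
    int value;               /* used when op == 0 */
    struct Node *left, *right;
} Node;

static Node *leaf(int v) {
    Node *n = calloc(1, sizeof *n);
    n->value = v;
    return n;
}

static Node *binop(char op, Node *l, Node *r) {
    Node *n = calloc(1, sizeof *n);
    n->op = op; n->left = l; n->right = r;
    return n;
}

/* "Code generation": walk the tree and print an equivalent C expression. */
static void emit(const Node *n) {
    if (n->op == 0) {
        printf("%d", n->value);
    } else {
        putchar('(');
        emit(n->left);
        printf(" %c ", n->op);
        emit(n->right);
        putchar(')');
    }
}

int main(void) {
    /* Tree for (1 + 2) * 3, as a parser might build it. */
    Node *tree = binop('*', binop('+', leaf(1), leaf(2)), leaf(3));
    printf("int main(void) { return ");
    emit(tree);
    printf("; }\n");
    return 0;
}
Running it prints a tiny C program, int main(void) { return ((1 + 2) * 3); }, which any C compiler can then turn into native code - which is exactly the portability point above.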
A: You should check out Darius Bacon's "ichbins", which is a compiler for a small Lisp dialect, targeting C, in just over 6 pages of code. The advantage it has over most toy compilers is that the language is complete enough that the compiler is written in it. (The tarball also includes an interpreter to bootstrap the thing.)
There's more stuff about what I found useful in learning to write a compiler on my Ur-Scheme web page.
A: From the comp.compilers FAQ:
"Programming a Personal Computer" by Per Brinch Hansen
Prentice-Hall 1982 ISBN 0-13-730283-5
This unfortunately-titled book
explains the design and creation of a single-user programming environment
for micros, using a Pascal-like language called Edison. The author presents
all source code and explanations for the step-by-step implementation of an
Edison compiler and simple supporting operating system, all written in
Edison itself (except for a small supporting kernel written in a symbolic
assembler for PDP 11/23; the complete source can also be ordered for the IBM
PC).
The most interesting things about this book are: 1) its ability to
demonstrate how to create a complete, self-contained, self-maintaining,
useful compiler and operating system, and 2) the interesting discussion of
language design and specification problems and trade-offs in Chapter 2.
"Brinch Hansen on Pascal Compilers" by Per Brinch Hansen
Prentice-Hall 1985 ISBN 0-13-083098-4
Another light-on-theory
heavy-on-pragmatics here's-how-to-code-it book. The author presents the
design, implementation, and complete source code for a compiler and p-code
interpreter for Pascal- (Pascal "minus"), a Pascal subset with boolean and
integer types (but no characters, reals, subranged or enumerated types),
constant and variable definitions and array and record types (but no packed,
variant, set, pointer, nameless, renamed, or file types), expressions,
assignment statements, nested procedure definitions with value and variable
parameters, if statements, while statements, and begin-end blocks (but no
function definitions, procedural parameters, goto statements and labels,
case statements, repeat statements, for statements, and with statements).
The compiler and interpreter are written in Pascal* (Pascal "star"), a
Pascal subset extended with some Edison-style features for creating
software development systems. A Pascal* compiler for the IBM PC is sold by
the author, but it's easy to port the book's Pascal- compiler to any
convenient Pascal platform.
This book makes the design and implementation of a compiler look easy. I
particularly like the way the author is concerned with quality,
reliability, and testing. The compiler and interpreter can easily be used
as the basis for a more involved language or compiler project, especially
if you're pressed to quickly get something up and running.
A: This is a pretty vague question, I think, just because of the depth of the topic involved. A compiler can be decomposed into two separate parts, however: a top half and a bottom half. The top half generally takes the source language and converts it into an intermediate representation, and the bottom half takes care of the platform-specific code generation.
Nonetheless, one idea for an easy way to approach this topic (the one we used in my compilers class, at least) is to build the compiler in the two pieces described above. Specifically, you'll get a good idea of the entire process by just building the top-half.
Just doing the top half lets you get the experience of writing the lexical analyzer and the parser and going on to generating some "code" (that intermediate representation I mentioned). So it will take your source program and convert it to another representation and do some optimization (if you want), which is the heart of a compiler. The bottom half will then take that intermediate representation and generate the bytes needed to run the program on a specific architecture. For example, the bottom half will take your intermediate representation and generate a PE executable.
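To give a feel for how small the first piece of that top half can be, here is a minimal hand-written lexer sketch in C. The token kinds and helper names are invented for illustration and aren't taken from any particular book or tool.
#include <ctype.h>
#include <stdio.h>

/* A minimal hand-written lexer for integers, identifiers and
   single-character operators. */
typedef enum { TOK_NUM, TOK_IDENT, TOK_OP, TOK_EOF } TokenKind;

typedef struct {
    TokenKind kind;
    char text[64];
} Token;

static Token next_token(const char **src) {
    Token t = { TOK_EOF, "" };
    const char *p = *src;
    while (isspace((unsigned char)*p)) p++;          /* skip whitespace */
    if (*p == '\0') { *src = p; return t; }
    int i = 0;
    if (isdigit((unsigned char)*p)) {
        t.kind = TOK_NUM;
        while (isdigit((unsigned char)*p) && i < 63) t.text[i++] = *p++;
    } else if (isalpha((unsigned char)*p)) {
        t.kind = TOK_IDENT;
        while (isalnum((unsigned char)*p) && i < 63) t.text[i++] = *p++;
    } else {
        t.kind = TOK_OP;                             /* +, -, *, (, ) ... */
        t.text[i++] = *p++;
    }
    t.text[i] = '\0';
    *src = p;
    return t;
}

int main(void) {
    const char *program = "x = 3 * (y + 42)";
    for (Token t = next_token(&program); t.kind != TOK_EOF; t = next_token(&program))
        printf("%d\t%s\n", t.kind, t.text);
    return 0;
}
A real scanner would also track line numbers and handle keywords, string literals and multi-character operators, but the shape stays the same: skip whitespace, look at the next character, consume one token.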
One book on this topic that I found particularly helpful was Compilers: Principles, Techniques, and Tools (or the Dragon Book, due to the cute dragon on the cover). It's got some great theory and definitely covers Context-Free Grammars in a really accessible manner. Also, for building the lexical analyzer and parser, you'll probably use the *nix tools lex and yacc. And uninterestingly enough, the book called "lex and yacc" picked up where the Dragon Book left off for this part.
A: Python comes bundled with a Python compiler written in Python. You can see the source code, and it includes all phases, from parsing, abstract syntax tree, emitting code, etc.
Hack it.
A: Sorry, it is in Spanish, but this is the bibliography of a course called "Compiladores e Intérpretes" (Compilers and Interpreters) in Argentina.
The course was from formal language theory to compiler construction, and these are the topics you need to build, at least, a simple compiler:
*
*Compilers Design in C. Allen I. Holub
Prentice-Hall. 1990.
*Compiladores. Teoría y Construcción. Sanchís
Llorca, F.J. , Galán Pascual, C. Editorial Paraninfo. 1988.
*Compiler Construction. Niklaus Wirth
Addison-Wesley. 1996.
*Lenguajes, Gramáticas y Autómatas. Un enfoque práctico. Pedro
Isasi Viñuela, Paloma Martínez
Fernández, Daniel Borrajo Millán. Addison-Wesley Iberoamericana
(España). 1997.
*The art of compiler design. Theory and practice. Thomas
Pittman, James Peters.
Prentice-Hall. 1992.
*Object-Oriented Compiler Construction. Jim Holmes.
Prentice Hall, Englewood
Cliffs, N.J. 1995
*Compiladores. Conceptos Fundamentales. B. Teufel, S.
Schmidt, T. Teufel.
Addison-Wesley Iberoamericana. 1995.
*Introduction to Automata Theory, Languages, and Computation.
John E. Hopcroft. Jeffref D. Ullman.
Addison-Wesley. 1979.
*Introduction to formal languages. György E. Révész.
Mc Graw Hill. 1983.
*Parsing Techniques. A Practical Guide. Dick Grune, Ceriel
Jacobs. Impreso por los
autores. 1995
http://www.cs.vu.nl/~dick/PTAPG.html
*Yacc: Yet Another Compiler-Compiler. Stephen
C. Johnson Computing Science
Technical Report Nº 32, 1975. Bell
Laboratories. Murray Hill, New
Jersey.
*Lex: A Lexical Analyzer Generator. M. E. Lesk, E. Schmidt. Computing Science Technical
Report Nº 39, 1975. Bell Laboratories.
Murray Hill, New Jersey.
*lex & yacc. John R. Levine, Tony Mason, Doug Brown.
O’Reilly & Associates. 1995.
*Elements of the theory of computation. Harry R. Lewis,
Christos H. Papadimitriou.
Segunda Edición. Prentice Hall. 1998.
*Un Algoritmo Eficiente para la Construcción del Grafo de Dependencia de Control.
Salvador V. Cavadini.
Trabajo Final de Grado para obtener el Título de Ingeniero en Computación.
Facultad de Matemática Aplicada.
U.C.S.E. 2001.
A: *
*This is a vast subject. Do not underestimate this point. And do not underestimate my point to not underestimate it.
*I hear the Dragon Book is a (the?) place to start, along with searching. :) Get better at searching, eventually it will be your life.
*Building your own programming language is absolutely a good exercise! But know that it will never be used for any practical purpose in the end. Exceptions to this are few and very far between.
A: I think Modern Compiler Implementation in ML is the best introductory compiler writing text. There's a Java version and a C version too, either of which might be more accessible given your languages background. The book packs a lot of useful basic material (scanning and parsing, semantic analysis, activation records, instruction selection, RISC and x86 native code generation) and various "advanced" topics (compiling OO and functional languages, polymorphism, garbage collection, optimization and single static assignment form) into relatively little space (~500 pages).
I prefer Modern Compiler Implementation to the Dragon book because Modern Compiler implementation surveys less of the field--instead it has really solid coverage of all the topics you would need to write a serious, decent compiler. After you work through this book you'll be ready to tackle research papers directly for more depth if you need it.
I must confess I have a serious soft spot for Niklaus Wirth's Compiler Construction. It is available online as a PDF. I find Wirth's programming aesthetic simply beautiful, however some people find his style too minimal (for example Wirth favors recursive descent parsers, but most CS courses focus on parser generator tools; Wirth's language designs are fairly conservative.) Compiler Construction is a very succinct distillation of Wirth's basic ideas, so whether you like his style or not, I highly recommend reading this book.
A: There are a lot of good answers here, so I thought I'd just add one more to the list:
I got a book called Project Oberon more than a decade ago, which has some very well-written text on the compiler. The book really stands out in the sense that the source and explanations are very hands-on and readable. The complete text (the 2005 edition) has been made available as a PDF, so you can download it right now. The compiler is discussed in chapter 12:
http://www.ethoberon.ethz.ch/WirthPubl/ProjectOberon.pdf
Niklaus Wirth, Jürg Gutknecht
(The treatment is not as extensive as his book on compilers)
I've read several books on compilers, and I can second the Dragon Book; time spent on this book is very worthwhile.
A: Not a book, but a technical paper and an enormously fun learning experience if you want to know more about compilers (and metacompilers)... This website walks you through building a completely self-contained compiler system that can compile itself and other languages:
Tutorial: Metacompilers Part 1
This is all based on an amazing little 10-page technical paper:
Val Schorre META II: A Syntax-Oriented Compiler Writing Language
from honest-to-god 1964. I learned how to build compilers from this back in 1970. There's a mind-blowing moment when you finally grok how the compiler can regenerate itself....
I know the website author from my college days, but I have nothing to do with the website.
A: I liked the Crenshaw tutorial too, because it makes it absolutely clear that a compiler is just another program that reads some input and writes some output.
Read it.
Work it if you want, but then look at another reference on how bigger and more complete compilers are really written.
And read On Trusting Trust, to get a clue about the unobvious things that can be done in this domain.
A: If you are interested in writing a compiler for a functional language (rather than a procedural one) Simon Peyton-Jones and David Lester's "Implementing functional languages: a tutorial" is an excellent guide.
The conceptual basics of how functional evaluation works is guided by examples in a simple but powerful functional language called "Core". Additionally, each part of the Core language compiler is explained with code examples in Miranda (a pure functional language very similar to Haskell).
Several different types of compilers are described but even if you only follow the so-called template compiler for Core you will have an excellent understanding of what makes functional programming tick.
A: You can use BCEL by the Apache Software Foundation. With this tool you can generate assembler-like code, but it's Java with the BCEL API. You can learn how you can generate intermediate language code (in this case byte code).
Simple example
*
*Create a Java class with this function:
public String maxAsString(int a, int b) {
    if (a > b) {
        return Integer.valueOf(a).toString();
    } else if (a < b) {
        return Integer.valueOf(b).toString();
    } else {
        return "equals";
    }
}
Now run BCELifier with this class
BCELifier bcelifier = new BCELifier("MyClass", System.out);
bcelifier.start();
You can see the result on the console for the whole class (how to build byte code MyClass.java). The code for the function is this:
private void createMethod_1() {
InstructionList il = new InstructionList();
MethodGen method = new MethodGen(ACC_PUBLIC, Type.STRING, new Type[] { Type.INT, Type.INT }, new String[] { "arg0", "arg1" }, "maxAsString", "MyClass", il, _cp);
il.append(InstructionFactory.createLoad(Type.INT, 1)); // Load first parameter to address 1
il.append(InstructionFactory.createLoad(Type.INT, 2)); // Load second parameter to address 2
BranchInstruction if_icmple_2 = InstructionFactory.createBranchInstruction(Constants.IF_ICMPLE, null); // Do if condition (compare a > b)
il.append(if_icmple_2);
il.append(InstructionFactory.createLoad(Type.INT, 1)); // Load value from address 1 into the stack
il.append(_factory.createInvoke("java.lang.Integer", "valueOf", new ObjectType("java.lang.Integer"), new Type[] { Type.INT }, Constants.INVOKESTATIC));
il.append(_factory.createInvoke("java.lang.Integer", "toString", Type.STRING, Type.NO_ARGS, Constants.INVOKEVIRTUAL));
il.append(InstructionFactory.createReturn(Type.OBJECT));
InstructionHandle ih_13 = il.append(InstructionFactory.createLoad(Type.INT, 1));
il.append(InstructionFactory.createLoad(Type.INT, 2));
BranchInstruction if_icmpge_15 = InstructionFactory.createBranchInstruction(Constants.IF_ICMPGE, null); // Do if condition (compare a < b)
il.append(if_icmpge_15);
il.append(InstructionFactory.createLoad(Type.INT, 2));
il.append(_factory.createInvoke("java.lang.Integer", "valueOf", new ObjectType("java.lang.Integer"), new Type[] { Type.INT }, Constants.INVOKESTATIC));
il.append(_factory.createInvoke("java.lang.Integer", "toString", Type.STRING, Type.NO_ARGS, Constants.INVOKEVIRTUAL));
il.append(InstructionFactory.createReturn(Type.OBJECT));
InstructionHandle ih_26 = il.append(new PUSH(_cp, "equals")); // Return "equals" string
il.append(InstructionFactory.createReturn(Type.OBJECT));
if_icmple_2.setTarget(ih_13);
if_icmpge_15.setTarget(ih_26);
method.setMaxStack();
method.setMaxLocals();
_cg.addMethod(method.getMethod());
il.dispose();
}
A: I concur with the Dragon Book reference; IMO, it is the definitive guide to compiler construction. Get ready for some hardcore theory, though.
If you want a book that is lighter on theory, Game Scripting Mastery might be a better book for you. If you are a total newbie at compiler theory, it provides a gentler introduction. It doesn't cover more practical parsing methods (opting for non-predictive recursive descent without discussing LL or LR parsing), and as I recall, it doesn't even discuss any sort of optimization theory. Plus, instead of compiling to machine code, it compiles to a bytecode that is supposed to run on a VM that you also write.
It's still a decent read, particularly if you can pick it up for cheap on Amazon. If you only want an easy introduction into compilers, Game Scripting Mastery is not a bad way to go. If you want to go hardcore up front, then you should settle for nothing less than the Dragon Book.
A: Not included in the list so far is this book:
Basics of Compiler Design (Torben Mogensen)
(from the dept. of Computer Science, University of Copenhagen)
I'm also interested in learning about compilers and plan to enter that industry in the next couple of years. This book is the ideal theory book to begin learning compilers as far as I can see. It's FREE to copy and reproduce, cleanly and carefully written and gives it to you in plain English without any code but still presents the mechanics by way of instructions and diagrams etc. Worth a look imo.
A: "Let's Build a Compiler" is awesome, but it's a bit outdated. (I'm not saying it makes it even a little bit less valid.)
Or check out SLANG. This is similar to "Let's Build a Compiler" but is a much better resource, especially for beginners. It comes with a PDF tutorial which takes a 7-step approach to teaching you a compiler. Adding the Quora link as it has links to all the various ports of SLANG, in C++, Java and JS, plus interpreters in Python and Java; it was originally written using C# and the .NET platform.
A: As a starting point, it is good to create a recursive descent parser (RDP) (let's say you want to create your own flavour of BASIC and build a BASIC interpreter) to understand how to write a compiler.
I found the best information in Herbert Schildt's C Power Users, chapter 7. This chapter refers to another book by H. Schildt, "C: The Complete Reference", where he explains how to create a calculator (a simple expression parser). I found both books on eBay very cheap.
You can check the code for the book if you go to www.osborne.com or check in www.HerbSchildt.com
I found the same code but for C# in his latest book
A: The Dragon Book is too complicated. So ignore it as a starting point. It is good and makes you think a lot once you already have a starting point, but for starters, perhaps you should simply try to write a math/logical expression evaluator using RD, LL or LR parsing techniques, with everything (lexing/parsing) written by hand in perhaps C/Java. This is interesting in itself and gives you an idea of the problems involved in a compiler. Then you can jump into your own DSL using some scripting language (since processing text is usually easier in these) and, like someone said, generate code in either the scripting language itself or C. You should probably use flex/bison/antlr etc. to do the lexing/parsing if you are going to do it in C/Java.
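For example, a hand-written recursive descent evaluator for arithmetic expressions fits in under a page of C. This is only a sketch using my own made-up grammar and helper names, not code from any of the books mentioned.
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* Grammar (invented for illustration):
     expr   := term (('+' | '-') term)*
     term   := factor (('*' | '/') factor)*
     factor := NUMBER | '(' expr ')'
*/
static const char *p;           /* cursor into the input string */

static void skip(void) { while (isspace((unsigned char)*p)) p++; }

static long expr(void);

static long factor(void) {
    skip();
    if (*p == '(') {
        p++;                    /* consume '(' */
        long v = expr();
        skip();
        if (*p == ')') p++;     /* consume ')' */
        return v;
    }
    char *end;
    long v = strtol(p, &end, 10);
    p = end;
    return v;
}

static long term(void) {
    long v = factor();
    for (;;) {
        skip();
        if (*p == '*')      { p++; v *= factor(); }
        else if (*p == '/') { p++; v /= factor(); }
        else return v;
    }
}

static long expr(void) {
    long v = term();
    for (;;) {
        skip();
        if (*p == '+')      { p++; v += term(); }
        else if (*p == '-') { p++; v -= term(); }
        else return v;
    }
}

int main(void) {
    p = "2 + 3 * (4 - 1)";
    printf("%ld\n", expr());    /* prints 11 */
    return 0;
}
Each grammar rule becomes one function, and operator precedence falls out of which function calls which; swapping the arithmetic for AST-node construction turns the same skeleton into the front end of a compiler.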
A: I'm surprised it hasn't been mentioned, but Donald Knuth's The Art of Computer Programming was originally penned as a sort of tutorial on compiler writing.
Of course, Dr. Knuth's propensity for going in-depth on topics has led to the compiler-writing tutorial being expanded to an estimated 9 volumes, only three of which have actually been published. It's a rather complete exposition on programming topics, and covers everything you would ever need to know about writing a compiler, in minute detail.
A: Missing from the list: Garbage Collection: Algorithms for Automatic Dynamic Memory Management, by Jones and Lins.
(Assuming you're writing the compiler and runtime system, and that you're implementing a garbage collected language.)
A: If you're looking to use powerful, higher level tools rather than building everything yourself, going through the projects and readings for this course is a pretty good option. It's a languages course by the author of the Java parser engine ANTLR. You can get the book for the course as a PDF from the Pragmatic Programmers.
The course goes over the standard compiler construction stuff that you'd see elsewhere: parsing, types and type checking, polymorphism, symbol tables, and code generation. Pretty much the only thing that isn't covered is optimizations. The final project is a program that compiles a subset of C. Because you use tools like ANTLR and LLVM, it's feasible to write the entire compiler in a single day (I have an existence proof of this, though I do mean ~24 hours). It's heavy on practical engineering using modern tools, a bit lighter on theory.
LLVM, by the way, is simply fantastic. Many situations where you might normally compile down to assembly, you'd be much better off compiling to LLVM's Intermediate Representation instead. It's higher level, cross platform, and LLVM is quite good at generating optimized assembly from it.
A: If you have little time, I recommend Niklaus Wirth's "Compiler Construction" (Addison-Wesley. 1996), a tiny little booklet that you can read in a day, but it explains the basics (including how to implement lexers, recursive descent parsers, and your own stack-based virtual machines). After that, if you want a deep dive, there's no way around the Dragon book as other commenters suggest.
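As a taste of the stack-based virtual machine part of that little book, here is a minimal sketch in C. The opcode set is invented for illustration and is not Wirth's.
#include <stdio.h>

/* A tiny stack machine: push constants, add, multiply, print.
   The instruction set is invented for illustration. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

static void run(const int *code) {
    int stack[256], sp = 0;
    for (int pc = 0; ; ) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++]; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* "Bytecode" for (1 + 2) * 3 */
    int program[] = { OP_PUSH, 1, OP_PUSH, 2, OP_ADD,
                      OP_PUSH, 3, OP_MUL, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
A compiler front end would emit opcode arrays like this instead of native code; the interpreter loop above then plays the role of the target machine.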
A: The quickest approach is through two books:
1990 version of An Introduction to Compiling Techniques, a First Course using ANSI C, LeX, and YaCC by JP Bennett - a perfect balance of example code, parsing theory and design- it contains a complete compiler written in C, lex and yacc for a simple grammar
Dragon Book (older version) - mostly a detailed reference for the features not covered in the former book
A: If you're not just looking for books, but also interested in web sites that have articles on the topic, I've blogged about various aspects of creating a programming language. Most of the posts can be found in my blog's "Language Design" category.
In particular, I cover generating Intel machine code manually, automatically generating machine- or bytecode, creating a bytecode interpreter, writing an object-oriented runtime, creating a simple loader, and writing a simple mark/sweep garbage collector. All of this in a very practical and pragmatic way instead of boring you with lots of theory.
Would appreciate feedback on these.
A: You might want to look into Lex/Yacc (or Flex/Bison, whatever you want to call them). Flex is a lexical analyzer generator, which will scan and identify the lexical components ("tokens") of your language, and Bison will be used to define what happens when each token is parsed. This could be, but is definitely not limited to, printing out C code, for a compiler that would compile to C, or dynamically running the instructions.
This FAQ should help you, and this tutorial looks quite useful.
A: Generally speaking, there's no five minutes tutorial for compilers, because it's a complicated topic and writing a compiler can take months. You will have to do your own search.
Python and Ruby are usually interpreted. Perhaps you want to start with an interpreter as well. It's generally easier.
The first step is to write a formal language description, the grammar of your programming language. Then you have to transform the source code that you want to compile or interpret according to the grammar into an abstract syntax tree, an internal form of the source code that the computer understands and can operate on. This step is usually called parsing and the software that parses the source code is called a parser. Often the parser is generated by a parser generator, which transforms a formal grammar into source or machine code. For a good, non-mathematical explanation of parsing I recommend Parsing Techniques - A Practical Guide. Wikipedia has a comparison of parser generators from which you can choose the one that is suitable for you. Depending on the parser generator you choose, you will find tutorials on the Internet, and for really popular parser generators (like GNU bison) there are also books.
Writing a parser for your language can be really hard, but this depends on your grammar. So I suggest to keep your grammar simple (unlike C++); a good example for this is LISP.
In the second step the abstract syntax tree is transformed from a tree structure into a linear intermediate representation. As a good example for this Lua's bytecode is often cited. But the intermediate representation really depends on your language.
If you are building an interpreter, you will simply have to interpret the intermediate representation. You could also just-in-time-compile it. I recommend LLVM and libjit for just-in-time-compilation. To make the language usable you will also have to include some input and output functions and perhaps a small standard library.
If you are going to compile the language, it will be more complicated. You will have to write backends for different computer architectures and generate machine code from the intermediate representation in those backends. I recommend LLVM for this task.
There are a few books on this topic, but I can recommend none of them for general use. Most of them are too academic or too practical. There's no "Teach yourself compiler writing in 21 days" and thus, you will have to buy several books to get a good understanding of this entire topic. If you search the Internet, you will come across some online books and lecture notes. Maybe there's a university library nearby you where you can borrow books on compilers.
I also recommend a good background in theoretical computer science and graph theory if you want to take the project seriously. A degree in computer science will also be helpful.
A: Take a look at the book below. The author is the creator of ANTLR.
Language Implementation Patterns: Create Your Own Domain-Specific and General Programming Languages.
A: One book not yet suggested but very important is "Linkers and Loaders" by John Levine. If you're not using an external assembler, you'll need a way to output a object file that can be linked into your final program. Even if you're using an external assembler, you'll probably need to understand relocations and how the whole program loading process works to make a working tool. This book collects a lot of the random lore around this process for various systems, including Win32 and Linux.
A: Big List of Resources:
*
*A Nanopass Framework for Compiler Education ¶
*Advanced Compiler Design and Implementation $
*An Incremental Approach to Compiler Construction ¶
*ANTLR 3.x Video Tutorial
*Basics of Compiler Design
*Building a Parrot Compiler
*Compiler Basics
*Compiler Construction $
*Compiler Design and Construction $
*Crafting a Compiler with C $
*Crafting Interpreters
*Compiler Design in C ¶
*Compilers: Principles, Techniques, and Tools $ — aka "The Dragon Book"; widely considered "the book" for compiler writing.
*Engineering a Compiler $
*Essentials of Programming Languages
*Flipcode Article Archive (look for "Implementing A Scripting Engine by Jan Niestadt")
*Game Scripting Mastery $
*How to build a virtual machine from scratch in C# ¶
*Implementing Functional Languages
*Implementing Programming Languages (with BNFC)
*Implementing Programming Languages using C# 4.0
*Interpreter pattern (described in Design Patterns $) specifies a way to evaluate sentences in a language
*Language Implementation Patterns: Create Your Own Domain-Specific and General Programming Languages $
*Let's Build a Compiler by Jack Crenshaw — The PDF ¶ version (examples are in Pascal, but the information is generally applicable)
*Linkers and Loaders $ (Google Books)
*Lisp in Small Pieces (LiSP) $
*LLVM Tutorial
*Modern Compiler Implementation in ML $ — There is a Java $ and C $ version as well - widely considered a very good book
*Object-Oriented Compiler Construction $
*Parsing Techniques - A Practical Guide
*Project Oberon ¶ - Look at chapter 13
*Programming a Personal Computer $
*Programming Languages: Application and Interpretation
*Rabbit: A Compiler for Scheme ¶
*Reflections on Trusting Trust — A quick guide
*Roll Your Own Compiler for the .NET framework — A quick tutorial from MSDN
*Structure and Interpretation of Computer Programs
*Types and Programming Languages
*Want to Write a Compiler? - a quick guide
*Writing a Compiler in Ruby Bottom Up
*Compiling a Lisp — compile directly to x86-64
Legend:
*
*¶ Link to a PDF file
*$ Link to a printed book
A: The Dragon Book is definitely the "building compilers" book, but if your language isn't quite as complicated as the current generation of languages, you may want to look at the Interpreter pattern from Design Patterns.
The example in the book designs a regular expression-like language and is well thought through, but as they say in the book, it's good for thinking through the process but is effective really only on small languages. However, it is much faster to write an Interpreter for a small language with this pattern than having to learn about all the different types of parsers, yacc and lex, et cetera...
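For a rough feel of the pattern, here is a minimal sketch of an Interpreter-style AST for a tiny boolean word-matching language. The grammar and class names are illustrative only; this is not the book's regular-expression example.

// Each grammar rule becomes a class with an Interpret method over a shared context.
using System;
using System.Collections.Generic;

interface IExpression { bool Interpret(ISet<string> context); }

class Literal : IExpression
{
    private readonly string _word;
    public Literal(string word) { _word = word; }
    public bool Interpret(ISet<string> context) => context.Contains(_word);
}

class OrExpression : IExpression
{
    private readonly IExpression _left, _right;
    public OrExpression(IExpression left, IExpression right) { _left = left; _right = right; }
    public bool Interpret(ISet<string> context) => _left.Interpret(context) || _right.Interpret(context);
}

class AndExpression : IExpression
{
    private readonly IExpression _left, _right;
    public AndExpression(IExpression left, IExpression right) { _left = left; _right = right; }
    public bool Interpret(ISet<string> context) => _left.Interpret(context) && _right.Interpret(context);
}

static class InterpreterDemo
{
    static void Main()
    {
        // Represents: (cat or dog) and pet
        IExpression expr = new AndExpression(
            new OrExpression(new Literal("cat"), new Literal("dog")),
            new Literal("pet"));
        Console.WriteLine(expr.Interpret(new HashSet<string> { "dog", "pet" }));   // True
    }
}

The whole "compiler" here is just building that object tree by hand (or with a trivial parser), which is why the pattern scales only to small languages.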
A: If you're willing to use LLVM, check this out: http://llvm.org/docs/tutorial/. It teaches you how to write a compiler from scratch using LLVM's framework, and doesn't assume you have any knowledge about the subject.
The tutorial suggests you write your own parser, lexer, etc., but I advise you to look into bison and flex once you get the idea. They make life so much easier.
A: The LCC compiler (wikipedia) (project homepage) (github.com/drh/lcc) of Fraser and Hanson is described in their book "A Retargetable C Compiler: Design and Implementation". It is quite readable and explains the whole compiler, down to code generation.
A: I found the Dragon book much too hard to read with too much focus on language theory that is not really required to write a compiler in practice.
I would add the Oberon book which contains the full source of an amazingly fast and simple Oberon compiler Project Oberon.
A: I am looking into the same concept, and found this promising article by Joel Pobar,
Create a Language Compiler for the .NET Framework - not sure where this has gone
Create a Language Compiler for the .NET Framework - pdf copy of the original doc
He discusses a high-level concept of a compiler and proceeds to invent his own language for the .NET Framework. Although it's aimed at the .NET Framework, many of the concepts should be reproducible elsewhere. The article covers:
*
*Language definition
*Scanner
*Parser (the bit I'm mainly interested in)
*Targeting the .NET Framework
*The Code Generator
There are other topics, but you get the gist.
It's aimed at people starting out, and written in C# (not quite Java).
HTH
bones
A: I remember asking this question about seven years ago when I was rather new to programming.
I was very careful when I asked and surprisingly I didn't get as much criticism as you are getting here. They did however point me in the direction of the "Dragon Book" which is in my opinion, a really great book that explains everything you need to know to write a compiler (you will of course have to master a language or two. The more languages you know, the merrier.).
And yes, many people say reading that book is crazy and you won't learn anything from it, but I disagree completely with that.
Many people also say that writing compilers is stupid and pointless. Well, there are a number of reasons why compiler development is useful:
*
*Because it's fun.
*It's educational, when learning how to write compilers you will learn a lot about computer science and other techniques that are useful when writing other applications.
*If nobody wrote compilers the existing languages wouldn't get any better.
I didn't write my own compiler right away, but after asking I knew where to start. And now, after learning many different languages and reading the Dragon Book, writing isn't that much of a problem. (I'm also studying computer engineering atm, but most of what I know about programming is self taught.)
In conclusion, The Dragon Book is a great "tutorial". But spend some time mastering a language or two before attempting to write a compiler. Don't expect to be a compiler guru within the next decade or so though.
The book is also good if you want to learn how to write parsers/interpreters.
A: If you are like me, who has no formal computer science education, and is interested in
building/want to know how a compiler works:
I recommend "Programming Language Processors in Java: Compilers and Interpreters",
an amazing book for a self-taught computer programmer.
From my point of view, understanding basic language theory, automata, and set theory is not a big problem. The problem is how to turn those things into code. The above book tells you how to write a parser, do context analysis, and generate code. If you cannot understand this book, then I have to say, give up building a compiler. It is the best programming book I have ever read.
There is another book, also good: Compiler Design in C. There is a lot of code, and it tells you everything about how to build a compiler and lexer tools.
Building a compiler is a fun programming practice and can teach you heaps of programming skills.
Do not buy the Dragon book. It was a waste of money and time and is not for a practitioner.
A: Whenever I want to try out a new language idea, I just write a simple parser, and have it generate some language that's easy to get good compilers for, like C.
How do you think C++ was done?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "697"
} |
Q: How do I calculate these statistics? I'm writing an app to help facilitate some research, and part of this involves doing some statistical calculations. Right now, the researchers are using a program called SPSS. Part of the output that they care about looks like this:
They're really only concerned about the F and Sig. values. My problem is that I have no background in statistics, and I can't figure out what the tests are called, or how to calculate them.
I thought the F value might be the result of the F-test, but after following the steps given on Wikipedia, I got a result that was different from what SPSS gives.
A: This website might help you out a bit more. Also this one.
I'm working from a fairly rusty memory of a statistics course, but here goes nothing:
When you're doing analysis of variance (ANOVA), you actually calculate the F statistic as the ratio from the mean-square variances "between the groups" and the mean-square variances "within the groups". The second link above seems pretty good for this calculation.
This makes the F statistic measure exactly how powerful your model is, because the "between the groups" variance is explanatory power, and "within the groups" variance is random error. High F implies a highly significant model.
As in many statistical operations, you back-determine Sig. using the F statistic. Here's where your Wikipedia information comes in slightly handy. What you want to do is - using the degrees of freedom given to you by SPSS - find the proper P value at which an F table will give you the F statistic you calculated. The P value where this happens [F(table) = F(calculated)] is the significance.
Conceptually, a lower significance value shows a very strong ability to reject the null hypothesis (which for these purposes means to determine your model has explanatory power).
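As a very rough illustration of the one-way ANOVA described above (which, as other answers point out, is only a special case of the multivariate tests SPSS is reporting), the F ratio can be computed like this. The group layout and numbers are made up.

// Minimal sketch: F = (between-groups mean square) / (within-groups mean square).
using System;
using System.Linq;

static class OneWayAnova
{
    static double FStatistic(double[][] groups)
    {
        int k = groups.Length;                               // number of groups
        int n = groups.Sum(g => g.Length);                   // total observations
        double grandMean = groups.SelectMany(g => g).Average();

        // "Between the groups" variance: the explanatory power of the grouping.
        double ssBetween = groups.Sum(g => g.Length * Math.Pow(g.Average() - grandMean, 2));
        double msBetween = ssBetween / (k - 1);

        // "Within the groups" variance: the random error.
        double ssWithin = groups.Sum(g => g.Sum(x => Math.Pow(x - g.Average(), 2)));
        double msWithin = ssWithin / (n - k);

        return msBetween / msWithin;
    }

    static void Main()
    {
        var groups = new[]
        {
            new double[] { 6, 8, 4, 5, 3, 4 },
            new double[] { 8, 12, 9, 11, 6, 8 },
            new double[] { 13, 9, 11, 8, 7, 12 },
        };
        // Degrees of freedom here are 2 and 15; Sig. is the tail probability
        // of the F(2, 15) distribution beyond this value.
        Console.WriteLine(FStatistic(groups));
    }
}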
Sorry to any math folks if any of this is wrong. I'll be checking back to make edits!!!
Good luck to you. Stats is fun, just maybe not this part. =)
A: I assume from your question that your research colleagues want to automate the process by which certain statistical analyses are performed (i.e., they want to batch process data sets). You have two options:
1) SPSS is now scriptable through python (as of version 15) - go to spss.com and search for python. You can write python scripts to automate data analyses and extract key values from pivot tables, and then process the answers any way you like. This has the virtue of allowing an exact comparison between the results from your python script and the hand-calculated efforts in SPSS of your collaborators. Thus you won't have to really know any statistics to do this work (which is a key advantage)
2) You could do this in R, a free statistics environment, which could probably be scripted. This has the disadvantage that you will have to learn statistics to ensure that you are doing it correctly.
A: Statistics is hard :-). After a year of reading and re-reading books and papers, I can only say with confidence that I understand the very basics of it.
You might wish to investigate ready-made libraries for whichever programming language you are using, because there are many gotchas in math in general and statistics in particular (rounding errors being an obvious example).
As an example you could take a look at the R project, which is both an interactive environment and a library which you can use from your C++ code, distributed under the GPL (i.e., if you are using it only internally and publishing only the results, you don't need to open your code).
A: In short: don't do this by hand, link/use existing software. And sain_grocen's answer is incorrect. :(
These are all tests for significance of parameter estimates that are typically used in Multivariate response Multiple Regressions. These would not be simple things to do outside of a statistical programming environment. I would suggest either getting the output from a pre-existing statistical program, or using one that you can link to and use that code.
I'm afraid that the first answer (sain_grocen's) will lead you down the wrong path. His explanation is likely of a special case of what you are actually dealing with. The ANOVA explained in his links is for a single-variate response, in a balanced design. These aren't the F statistics you are seeing. The names in your output (Pillai's Trace, Hotelling's Trace, ...) are some of the available multivariate versions. They have F distributions under certain assumptions. I can't explain a textbook's worth of material here, so I would advise you to start by looking at
"Applied Multivariate Statistical Analysis" by Johnson and Wichern
A: Can you explain more why SPSS itself isn't a fine solution to the problem? Is it that it generates pivot tables as output that are hard to manipulate? Is it the cost of the program?
F-statistics can arise from any number of particular tests. The F is just a distribution (loosely: a description of the "frequencies" of groups of values), like a Normal (Gaussian), or Uniform. In general they arise from ratios of variances. Opinion: many statisticians (myself included), find F-based tests to be unstable (jargon: non-robust).
The particular output statistics (Pillai's trace, etc.) suggest that the original analysis is a MANOVA example, which, as other posters describe, is a complicated procedure that is hard to get right.
I'm guessing also that, based on the MANOVA and the use of SPSS, this is a psychology or sociology project... if not, please enlighten us. It might be that other, simpler models would actually be easier to understand and more repeatable. Consult your local university statistical consulting group, if you have one.
Good luck!
A: Here's an explanation of MANOVA output, from a very good site on statistics and on SPSS:
Output with explanation:
http://faculty.chass.ncsu.edu/garson/PA765/manospss.htm
How and why to do MANOVA or multivariate GLM:
(same path as above, but terminating in '/manova.htm')
Writing software from scratch to calculate these outputs would be both lengthy and difficult;
there are lots of numerical problems and matrix inversions to do.
As Henry said, use Python scripts, or R. I'd suggest working with somebody who knows SPSS if scripting.
In addition, SPSS itself is capable of exporting the output tables to files using something called OMS.
A script within SPSS can do this.
Find out who in your research group knows SPSS and work with them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: ASP.NET built in user profile vs. old style user class/tables I am looking for guidance regarding the best practice around the use of the Profile feature in ASP.NET.
How do you decide what should be kept in the built-in user Profile, or if you should create your own database table and add a column for the desired fields? For example, a user has a zip code, should I save the zip code in my own table, or should I add it to the web.config xml profile and access it via the user profile ASP.NET mechanism?
The pros/cons I can think of right now are that since I don't know the profile very well (it is a bit of a Matrix right now), I probably can do whatever I want if I go the table route (e.g., SQL to get all the users in the same zip code as the current user). I don't know if I can do the same if I use the ASP.NET profile.
A: I've only built two applications that used the profile provider, and I have stayed away from using it since. For both of the apps I used it to store information about the user, such as their company name, address, and phone number.
This worked fine until our client wanted to be able to find a user by one of these fields.
Searching involved looping through every users profile and comparing the information to the search criteria. As the user base grew the search time became unacceptable to our client. The only solution was to create a table to store the users information. Search speed was increased immensely.
I would recommend storing this type of information in its own table.
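To illustrate the difference, a plain table lets you answer that kind of search with one parameterized query instead of looping over every profile. A minimal sketch follows; the UserProfile table, its columns, and the connection string are hypothetical.

// Find users by zip code against a hand-rolled profile table.
using System;
using System.Data.SqlClient;

static class UserLookup
{
    static void PrintUsersInZip(string connectionString, string zip)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT UserName FROM UserProfile WHERE ZipCode = @zip", conn))
        {
            cmd.Parameters.AddWithValue("@zip", zip);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}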
A: The user profile is a nice, clean framework for individual customization (a.k.a. Profile Properties), e.g. iGoogle.
The problem is that it's not designed for querying and is not ideal for sharing data with other users (you would still be able to do it, but with low performance).
So, if you want to enhance the customized user experience, the user profile would be a good way to go. Otherwise, using your own class and table would be a much better solution.
A: In my experience it's best to keep the info in the profile to a bare minimum; only put the essentials in there that are directly needed for authentication. Other information such as addresses should be saved in your own database by your own application logic; this approach is more extensible and maintainable.
A: I think that depends on how many fields you need. To my knowledge, Profiles are essentially a long string that gets split at the given field sizes, which means that they do not scale very well if you have many fields and users.
On the other hand, they are built in, so it's an easy and standardized way, which means there is not a big learning curve and you can use it in future apps as well without needing to tweak it to a new table structure.
Rolling your own thing allows you to put it in a properly normalized database, which drastically improves performance, but you have to write pretty much all the profile managing code yourself.
Edit: Also, Profiles are not cached, so every access to a profile goes to the database first (it's then cached for that request, but the next request will get it from the database again)
If you're thinking about writing your own thing, maybe a custom Profile Provider gives you the best of both worlds - seamless integration, yet the custom stuff you want to do.
A: I think you are better off using it for supplementary data that is not critical to the user and that is normally only important when that user is logging in anyway. Think of data that would not break anything important if it were all wiped.
Of course, that's personal preference, but others have raised some other important issues.
Also very useful considering it can be used for an unauthenticated user whose profile is maintained with an anonymous cookie.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: How to pass enumerated values to a web service My dilemma is, basically, how to share an enumeration between two applications.
The users upload documents through a front-end application that is on the web. This application calls a web service of the back-end application and passes the document to it. The back-end app saves the document and inserts a row in the Document table.
The document type (7 possible document types: Invoice, Contract etc.) is passed as a parameter to the web service's UploadDocument method. The question is, what should the type (and possible values) of this parameter be?
Since you need to hardcode these values in both applications, I think it is O.K. to use a descriptive string (Invoice, Contract, WorkOrder, SignedWorkOrder).
Is it maybe a better approach to create a DocumentTypes enumeration in the first application, and to reproduce it also in the second application, and then pass the corresponding integer value to the web service between them?
A: I'd suggest against passing an integer between them, simply for purposes of readability and debugging. Say you're going through your logs and you see a bunch of 500 errors for DocumentType=4. Now you've got to go look up which DocumentType is 4. Or if one of the applications refers to a number that doesn't exist in the other, perhaps due to mismatched versions.
It's a bit more code, and it rubs the static typing part of the brain a bit raw, but in protocols on top of HTTP the received wisdom is to side with legible strings over opaque enumerations.
A: I would still use enumeration internally but would expect consumers to pass me only the name, not the numeric value itself.
just some silly example to illustrate:
public enum DocumentType
{
    Invoice,
    Contract,
    WorkOrder,
    SignedWorkOrder
}

[WebMethod]
public void UploadDocument(string type, byte[] data)
{
    // Enum.Parse throws an ArgumentException if the name doesn't match a member;
    // pass ignoreCase: true if callers may vary the casing.
    DocumentType docType = (DocumentType)Enum.Parse(typeof(DocumentType), type);
}
A: I can only speak about .NET, but if you have an ASP.NET web service, you should be able to add an enumeration directly to it.
When you then use "Add Web Reference" in your client application, the resulting class should include that enum.
But this is off the top of my head; I'm pretty sure I've done it in the past, but I can't say for sure.
A: In .NET, enumeration values are (by default) serialized into XML using the name. For instances where you can have multiple values (flags), it puts a space between the values. This works because the enumeration names don't contain spaces, so you can get the value back by splitting the string (i.e. "Invoice Contract SignedWorkOrder", using lubos's example).
You can control the serialization of enum values in ASP.NET web services using the XmlEnumAttribute, or using the EnumMember attribute when using WCF.
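For example, a sketch of how those attributes might be applied; the lowercase wire names here are made up for illustration:

using System.Runtime.Serialization;
using System.Xml.Serialization;

[DataContract]                        // required so the WCF serializer honours EnumMember
public enum DocumentType
{
    [XmlEnum("invoice")]              // name used by XmlSerializer (ASMX web services)
    [EnumMember(Value = "invoice")]   // name used by DataContractSerializer (WCF)
    Invoice,

    [XmlEnum("signed-work-order")]
    [EnumMember(Value = "signed-work-order")]
    SignedWorkOrder
}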
A: If you are consuming your Web service from a .NET page/application, you should be able to access the enumeration after you add your Web reference to the project that is consuming the service.
A: If you are not working with .NET to .NET SOAP, you can still define an enumerator provided both endpoints are using WSDL.
<s:simpleType name="MyEnum">
<s:restriction base="s:string">
<s:enumeration value="Wow"/>
<s:enumeration value="This"/>
<s:enumeration value="Is"/>
<s:enumeration value="Really"/>
<s:enumeration value="Simple"/>
</s:restriction>
</s:simpleType>
It's up to the WSDL-to-proxy generator tool to turn that into an enum equivalent in the client language.
A: There are some fairly good reasons for not using enums on an interface boundary like that. Consider Dare's post on the subject.
A: I've noticed that when using "Add Service Reference" as opposed to "Add Web Reference" from VS.net, the actual enum values come across as well as the enum names. This is really annoying as I need to support both 2.0 and 3.5 clients. I end up having to go into the 2.0 generated web service proxy code and manually adding the enum values every time I make a change!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: What is the single most influential book every programmer should read? If you could go back in time and tell yourself to read a specific book at the beginning of your career as a developer, which book would it be?
I expect this list to be varied and to cover a wide range of things.
To search: Use the search box in the upper-right corner. To search the answers of the current question, use inquestion:this. For example:
inquestion:this "Code Complete"
A: K&R
@Juan: I know Juan, I know - but there are some things that can only be learned by actually getting down to the task at hand. Speaking in abstract ideals all day simply makes you into an academic. It's in the application of the abstract that we truly grok the reason for their existence. :P
@Keith: Great mention of "The Inmates are Running the Asylum" by Alan Cooper - an eye opener for certain, any developer that has worked with me since I read that book has heard me mention the ideas it espouses. +1
A: Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp by Peter Norvig
I started reading it because I wanted to learn Common Lisp. When I was halfway, I realized this was the greatest book about programming I had read so far.
A: Discrete Mathematics For Computer Scientists http://ecx.images-amazon.com/images/I/51HCJ5R42KL._SL500_BO2,204,203,200_AA219_PIsitb-sticker-dp-arrow,TopRight,-24,-23_SH20_OU02_.jpg
Discrete Mathematics For Computer Scientists by J.K. Truss.
While this doesn't teach you programming, it teaches you the fundamental mathematics that every programmer should know. You may remember this stuff from university, but really, doing predicate logic will improve your programming skills, and you need to learn set theory if you want to program using collections.
There really is a lot of interesting information in here that can get you thinking about problems in different ways. It's handy to have, just to pick up once in a while to learn something new.
A: Systemantics: How Systems Work and Especially How They Fail. Get it used cheap. But you might not get the humor until you've worked on a few failed projects.
The beauty of the book is the copyright year.
Probably the most profound takeaway "law" presented in the book:
The Fundamental Failure-Mode Theorem (F.F.T.): Complex systems usually operate in failure mode.
The idea being that there are failing parts in any given piece of software that are masked by failures in other parts or by validations in other parts. See a real-world example at the Therac-25 radiation machine, whose software flaws were masked by hardware failsafes. When the hardware failsafes were removed, the software race condition that had gone undetected all those years resulted in the machine killing 3 people.
A: Definitely Software Craftsmanship
This book explains a lot about software engineering and system development. It's also extremely useful for understanding the differences between different kinds of product development: web vs. shrinkwrap vs. IBM framework. What did people have in mind when they conceived the waterfall model? Read this and it will all become clear (hopefully).
A: One of my personal favorites is Hacker's Delight, because it was as much fun to read as it was educational.
I hope the second edition will be released soon!
A: Concepts, Techniques, and Models of Computer Programming.
A: Extreme Programming Explained: Embrace Change by Kent Beck. While I don't advocate a hardcore XP-or-the-highway take on software development, I wish I had been introduced to the principles in this book much earlier in my career. Unit testing, refactoring, simplicity, continuous integration, cost/time/quality/scope - these changed the way I looked at development. Before Agile, it was all about the debugger and fear of change requests. After Agile, those demons did not loom as large.
A: The practice of programming. By Brian W. Kernighan, Rob Pike.
The style shown here is excellent - the code just speaks for itself, and the whole book follows the KISS principle. Personally not my languages of choice, but still influential to me.
A: Types and Programming Languages by Benjamin C Pierce for a thorough understanding of the underpinnings of programming languages.
A: Database System Concepts is one of the best books you can read on understanding good database design principles.
A: Programming from the ground up. It's free on the internet. This book taught me AT&T asm. It is very easy to read.
A: @Peter Coulton -- you don't read Knuth, you study it.
For me, and my work... Purely Functional Data Structures is great for thinking and developing with functional languages in mind.
A: "The World is Flat" by Thomas Friedman.
Excellence in programming demands an investment of mental energy and a dedication to continued learning comparable to the professions of medicine or law. It pays a fraction of what those professions pay, much less the wages paid to the mathematically savvy who head into the finance sector. And wages for constructing code are eroding because it's a profession that is relatively easy for the intelligent and self-disciplined in most economies to enter.
Programming has already eroded to the point of paying less than, say, plumbing. Plumbing can't be "offshored." You don't need to pay $2395 to attend the Professional Plumber's Conference every other year for the privilege of receiving an entirely new set of plumbing technologies that will take you a year to learn.
If you live in North America or Europe, are young, and are smart, programming is not a rational career choice. Businesses that involve programming, absolutely. Study business, know enough about programming to refine your BS detector: brilliant. But dedicating the lion's share of your mental energy to the mastery of libraries, data structures, and algorithms? That only makes sense if programming is something more to you than an economic choice.
If you love programming and for that reason intend to make it your career, then it behooves you to develop a cold-eyed understanding of the forces that are, and will continue, to make it a harder and harder profession in which to make a living. "The World is Flat" won't teach you what to name your variables, but it will immerse you for 6 or 8 hours in economic realities that have already arrived. If you can read it, and not get scared, then go out and buy "Code Complete."
A:
This last year I took a number of classes. I read
The Innovator's Dilemma (disruptive tech)
The Mythical Man Month (managing software)
Crossing the Chasm (startup)
Database Management Systems, The COW Book
Programming C#, The OSTRICH Book
Beginning iPhone Development, The GRAPEFRUIT Book
Each book was amazing but the Innovator's Dilemma by Clayton Christensen (1997!!!) is really a fantastic book, and it got me really thinking about the modern software world. The challenge addressed is disruptive technology, and how disk drive companies and non-technical companies are always disrupted by new, game changing technology. It gives one a new perspective when thinking about Google, probably the biggest 'web' company. Why do they have their hands in EVERYTHING? It's because they don't want to have their position disrupted by something new. The preview on google is plenty to get the idea. Read it!
A: Hackers, by Steven Levy.
The personality and way of life must come first. Everything else can be learned.
A: The New Turing Omnibus http://ecx.images-amazon.com/images/I/51HlYd-%2BRwL._BO2,204,203,200_PIsitb-sticker-arrow-click,TopRight,35,-76_AA300_SH20_OU01_.jpg
Really good book. Has a high-level taste of the most important areas of computer science. Yes, CS != programming, but this is still useful to every programmer.
A: Object-Oriented Analysis and Design with Applications by Grady Booch
A: The Practice of Programming
and
How to Solve It by Computer
A: The Python language was very influential to me; I wish I had read these books years ago.
A: The Mythical Man-Month by Fred Brooks
http://en.wikipedia.org/wiki/The_Mythical_Man-Month
A: I think that "The Art of Unix Programming" is an excellent book, by an excellent hacker/brilliant mind as Eric S. Raymond, who tries to make us understand a few principles of software design (simplicity mainly). This book is a must for every programming who is about to start a project under Unix platform.
A: While I agree that many of the books above are must-reads (Pragmatic Programmer, Mythical Man-Month, Art of Computer Programming, and SICP come to mind immediately), I'd like to go in a slightly different direction and recommend A Discipline of Programming by Edsger Dijkstra. Even though it's 32 years old, the emphasis on "design for verifiability" is highly relevant (even if "verifiability" means "proof" instead "unit tests").
A: Code Craft by Pete Goodliffe is a good read!
A: Martin Fowler's Refactoring: Improving the Design of Existing Code has already been listed. But I will detail why it has impacted me.
The essence of the whole book is about structuring code so that it is simpler to read and understand by humans. It teaches me strongly that the code that I write is meant for my colleagues and successors to consume and possibly learn something good out of it. It inspires me to consciously program in a manner that leaves people praising my name, and not cursing me to damnation for all eternity.
A: C++ How to Program is good for beginners. This is an excellent, complete book of about 1,500 pages.
A: Masters of Doom. As far as motivation and love for your profession go, it won't get any better than what's described in this book. A truly inspiring story!
A: Here's an excellent book that is not as widely applauded, but is full of deep insight: Agile Software Development: The Cooperative Game, by Alistair Cockburn.
What's so special about it? Well, clearly everyone has heard the term "Agile", and it seems most are believers these days. Whether you believe or not, though, there are some deep principles behind why the Agile movement exists. This book uncovers and articulates these principles in a precise, scientific way. Some of the principles are (btw, these are my words, not Alistair's):
*
*The hardest thing about team software development is getting everyone's brains to have the same understanding. We are building huge, elaborate, complex systems which are invisible in the tangible world. The better you are at getting more peoples' brains to share deeper understanding, the more effective your team will be at software development. This is the underlying reason that pair programming makes sense. Most people dismiss it (and I did too initially), but with this principle in mind I highly recommend that you give it another shot. You wind up with TWO people who deeply understand the subsystem you just built ... there aren't many other ways to get such a deep information transfer so quickly. It is like a Vulcan mind meld.
*You don't always need words to communicate deep understanding quickly. And a corollary: too many words, and you exceed the listener/reader's capacity, meaning the understanding transfer you're attempting does not happen. Consider that children learn how to speak language by being "immersed" and "absorbing". Not just language either ... he gives the example of some kids playing with trains on the floor. Along comes another kid who has never even SEEN a train before ... but by watching the other kids, he picks up the gist of the game and plays right along. This happens all the time between humans. This along with the corollary about too many words helps you see how misguided it was in the old "waterfall" days to try to write 700 page detailed requirements specifications.
There is so much more in there too. I'll shut up now, but I HIGHLY recommend this book!
A: Kernighan & Plauger's Elements of Programming Style.
It illustrates the difference between gimmicky-clever and elegant-clever.
A: The TCP/IP Guide, by Charles M. Kozierok
Although it is described as an 'encyclopedic reference', it is incredibly readable as a narrative.
This author provides a very, very, very well-written, comprehensive introduction to networking and the infrastructure that underlies the web. Something all programmers ought to know.
For me it is the natural follow-on from Charles Petzold's 'Code'. If "Code" explains to the layman how computers work, 'The TCP/IP Guide' explains how they connect together.
If you gave a 12 year old geek a copy 'Code' and a copy of 'The TCP/IP Guide' - they'd be building the next Google by the age of 17.
In other words, if I could go back in time and tell myself to read a specific book at the beginning of my career as a developer, this (plus Code) is up there in the top of my list.
A: I've been around a while, so most books that I have found influential don't necessarily apply today. I do believe it is universally important to understand the platform that you are developing for (both hardware and OS). I also think it's important to learn from other people's mistakes. So two books I would recommend are:
Computing Calamities and In Search of Stupidity: Over Twenty Years of High Tech Marketing Disasters
A: The Pragmatic Programmer: From Journeyman to Master without a doubt. The advice in it is so well presented, and simple, that it comes across as if it was 'The Common Sense Programmer'. Love it.
A: Mastering Regular Expressions
A: In no particular order except how they're arranged on my bookshelf:
*
*The Pragmatic Programmer
*Refactoring by Fowler
*Working Effectively with Legacy Code by Feathers. This is practically a companion volume to Refactoring.
*UML Distilled by Fowler. Among its other virtues is brevity.
*Debugging the Development Process by Steve Maguire
*Design Patterns (aka "Gang of Four") by Gamma et al
A: Mr. Bunny's Guide To ActiveX
A: I have a few good books that strongly influenced me that I've not seen on this list so far:
The Psychology of Everyday Things by Donald Norman. The general principles of design for other people. This may seem to be mostly good for UI but if you think about it, it has applications almost anywhere there is an interface that someone besides the original developer has to work with; e. g. an API and designing the interface in such a way that other developers form the correct mental model and get appropriate feedback from the API itself.
The Art of Software Testing by Glen Myers. A good, general introduction to testing software; good for programmers to read to help them think like a tester i. e. think of what may go wrong and prepare for it.
By the way, I realize the question was the "Single Most Influential Book" but the discussion seems to have changed to listing good books for developers to read so I hope I can be forgiven for listing two good books rather than just one.
A: Do users ever touch your code? If you're not doing solely back-end work, I recommend About Face: The Essentials of User Interface Design — now in its third edition (linked). I used to think my users were stupid because they didn't "get" my interfaces. I was, of course, wrong. About Face turned me around.
A: Rapid Development by McConnell
A: As I started out developing in Java (and am still doing so to this very day) I'd have to recommend the outstanding work in the field: Mr Bunny's Big Cup o' Java.
From the author's blurb:
There is simply no better way to learn Java than to have the pineal gland of an expert Java programmer surgically implanted in your brain. Sadly, most HMOs refuse to pay for this career saving procedure, deeming Java to be too experimental. At last there is an alternative treatment for those of us who cannot wait for sweeping health care reforms.
Mr. Bunny’s Big Cup O’ Java is recommended by n out of ten doctors, where n is any integer you wish to make up to impress an astoundingly gullible public. The book begins with an overview of the book, and quickly expands into the book itself. Just look at the topics covered:
*
*Java
In short, MBBCOJ will teach you all you need to know for a successful career in today’s rabbit development environments.
The insight into pixels alone would have cut years off my software developing life.
A: "The Practice of programming" by Brian W.Kerninghan & Rob Pike.
The language is easy and also the subject matter is interesting.
A: Refactoring
A: There are a lot of votes for Steve McConnell's Code Complete, but what about his Software Project Survival Guide book? I think they're both required reading but for different reasons.
A: This one isn't really a book for the beginning programmer, but if you're looking for SOA design books, then SOA in Practice: The Art of Distributed System Design is for you.
A: Software Tools by Brian W. Kernighan and P. J. Plauger
It had a profound influence on how I write software.
A: Facts and Fallacies of Software Engineering by Robert L. Glass http://www.codinghorror.com/blog/images/facts-and-fallacies-of-software-engineering.jpg
Facts and Fallacies of Software Engineering by Robert L. Glass is a really excellent book. I had been a professional hacker for almost 10 years before I read it, and I still learned a ton of stuff.
A: Not the most influential, but worth a look is Youth by J.M.Coetzee.
The narrator of Youth, a student in the South Africa of the 1950s, has long been plotting an escape from his native country: from the stifling love of his mother, from a father whose failures haunt him, and from what he is sure is impending revolution. Studying mathematics, reading poetry, saving money, he tries to ensure that when he arrives in the real world, wherever that may be, he will be prepared to experience life to its full intensity, and transform it into art. Arriving at last in London, however, he finds neither poetry nor romance. Instead he succumbs to the monotony of life as a computer programmer, from which random, loveless affairs offer no relief. Devoid of inspiration, he stops writing. An awkward colonial, a constitutional outsider, he begins a dark pilgrimage in which he is continually tested and continually found wanting.
A: Perfect Software: And Other Illusions about Testing
Perfect Software: And Other Illusions about Testing by Gerald M. Weinberg
ISBN-10: 0932633692
ISBN-13: 978-0932633699
A: Design Concepts in Programming Languages by FA Turbak produces detailed implementations of many programming concepts and is very useful for understanding what's going on underneath the hood.
A: The Back of the Napkin, by Dan Roam.
A great book about visual thinking techniques. There is also an expanded edition now. I can't speak to that version, as I do not own it; yet.
A: Enterprise Patterns and MDA: Building Better Software with Archetype Patterns and UML
An excellent read for those looking to leverage ORM and UML
A: Code Complete is the number one choice, but I'd also cite Gang of Four's Design Patterns and Craig Larman's Applying UML and Patterns.
The Timeless Way of Building, by Christopher Alexander, is another great one. Even though it's about archtecture, it's included in the bibliography of many great programming books I have already read.
Another one, from which I'm learning lots of new things, is Data Access Patterns, by Clifton Nock.
A: I recently read Dreaming in Code and found it to be an interesting read. Perhaps more so since the day I started reading it Chandler 1.0 was released. Reading about the growing pains and mistakes of a project team of talented people trying to "change the world" gives you a lot to learn from. Also Scott brings up a lot of programmer lore and wisdom in between that's just an entertaining read.
Beautiful Code had one or two things that made me think differently, particularly the chapter on top down operator precedence.
A: Debugging the Development Process: Practical Strategies for Staying Focused, Hitting Ship Dates, and Building Solid Teams by Steve Maguire.
No-non-sense, down-to-earth, entertaining, profound.
A: Programming Perl (O'Reilly)
A: Lean Software Development by Mary and Tom Poppendieck is definitely one for every developers bookshelf
A: Effective C++ and More Effective C++ by Scott Myers.
A: Object-Oriented Software Construction by Bertrand Meyer
A: Nobody seems to have mentioned Stroustrup's The C++ Programming Language, which is a great book that every C++ programmer should read.
I also think that Extreme Programming Explained: Embrace Change should be read by every programmer and manager. Many of the ideas in the book are common knowledge now but the book gives an intelligent and inspiring account of the pursuit of quality in software engineering.
I would second the recommendations for Knuth and Gang of Four which are classics.
A: Advanced Programming in the UNIX Environment by W. Richard Stevens.
A: Three books come to mind for me.
*
*The Art of Unix Programming by Eric S. Raymond.
*The Wizardry Compiled by Rick Cook.
*The Art of Computer Programming by Donald Knuth.
I also love the writing of Paul Graham.
A: Adding to the great ones mentioned above:
Patterns of Enterprise Application Architecture
Enterprise Integration Patterns
A: How influential a book is often depends on the reader and where they were in their career when they read the book. I have to give a shout-out to Head First Design Patterns. Great book and the very creative way it's written should be used as an example for other tech book writers. I.e. it's written in order to facilitate learning and internalizing the concepts.
A: My vote is "How to Think Like a Computer Scientist: Learning With Python"
It's available both as a book and as a free e-book.
It really helped me to understand the basics of not just Python but programming in general. Although it uses Python to demonstrate concepts, they apply to most, if not all, programming languages. Also: IT'S FREE!
A:
Mastery: The Keys to Success and Long-Term Fulfillment, by George Leonard
It's about about what mindsets are required to reach mastery in any skill, and why. It's just awesome, and an easy read too.
A: This isn't a direct answer to the question, because I feel it's already been answered above, however, one of the books that definitely had an impact on how I code is Code Reading, Volume 1: The Open Source Perspective.
A: To get advanced in Prolog I like these two books:
The Art of Prolog
The Craft of Prolog
They really open the mind to logic programming and recursion schemes.
A: It's not strictly a development book and I believe that I've mentioned it in another answer somewhere but it's a book I really believe all developers should read, from php to Java to assembly developers.
Code
It really brings together what's under the hood in a computer, why memory shouldn't be wasted, and some of the more interesting parts of the history of computing. It's an introduction to the computer and what it is. It gave me my ultimate passion for low-level programming and helped me understand pointers and memory more than any other book.
A: I think code complete is going to be a hugely popular one for this question, for me it corrected many of my bad habits and re-affirmed my good practices.
Also, from my Perl background, I really like Perl Best Practices from Damian Conway. Perl can be a nasty language if you don't use style and best practices, which is what I was seeing in the scripts I was reading (and sometimes writing).
I like the Head First series; they are quite good and easy to read when you are not in the mood for more serious-style books.
A: Cocoa Programming for Mac OS X by Aaron Hillegass
A: This one started me off into true OOA&D.
Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development - Craig Larman
These would be up there as well:
*
*Patterns in Enterprise Application Architecture - Fowler
*Domain-Driven Design - Eric Evans
A: The Unix Programming Environment by Kernighan and Pike.
More than any other book, it taught me the benefits in building small, easily-tested tools that can be combined to do big things.
A: Extreme Programming by Kent Beck
A: Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin.
Took my programming to a whole new level.
A: Coder to Developer, by Mike Gunderloy.
A: The most influential programming book for me was Enough Rope to Shoot Yourself in the Foot by Allen Holub.
O, well, how long ago it was.
A: Whether you are coding in Smalltalk or not Smalltalk Best Practice Patterns is a great read. Full of small observations that will change the way you code; for the better.
A: I am surprised there is no mention yet of this book: Starting Forth, by Leo Brodie. After all Forth, being a stack-based language, should fit the audience on this site...
Admittedly, Forth is a weird language and not very popular these days. But this book is a joy to read. And it has cartoons! The book, as well as Brodie's other book, Thinking Forth, are both available free on the web.
A: A Whole New Mind, by Daniel Pink. Interesting take on the future of our industry.
I assume most of the folks reading this will have read the books at the top of the list already. So, i'll offer a book that takes a different look at our industry.
A: Applying UML and Patterns by Craig Larman.
The title of the book is slightly misleading; it does deal with UML and patterns, but it covers so much more. The subtitle of the book tells you a bit more: An Introduction to Object-Oriented Analysis and Design and Iterative Development.
A: For me it was Design Patterns Explained it provided an 'Oh that's how it works' moment for me in regards to design patterns and has been very useful when teaching design patterns to others.
A: I read most of the books with a high score on this question - but not all of them (thank God!) - and I added the others to my Amazon wish list right away!
(Someone should create a list on Amazon for these books... Maybe a list named "Stack Overflow best books ever"? Anyone know how to do that?)
To me, the best book ever has been Code Complete. It was a revelation. I bought the 2nd edition in English and then in French, and I still think it should be mandatory reading in any computer science school. Data structures are cool, but Code Complete, no joke, is much more important...
Then, my second best book was Writing Solid Code - having learned how to be understood, it was great to know how to write solid code.
Then a lot of very nice books, but none worth mentioning here. Until 2001, I think: Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries. A jewel! I read this book many times and it's still on my desk, just beside my LCD, along with Code Complete (really!). I love the way it has been written (love the comments that have been added here and there - books should all be written like that!)
But wait, I'm forgetting the very first great books I've read! The ones that made me love computer science, with passion:
*
*Compute! (C64 magazine - will never forget Jim Butterfield :o)
*Borland C++ User Guides (the old ones, circa 1991, the ones that tried to introduce object-oriented programming, very nicely written).
*Most Microsoft development tools user guides, circa 1990-1995. Don't know who was writing them, but they were pretty cool! I remember reading them late at night, on Saturdays...
Well, excellent question :o)
A: All the Thinking in... books.
Bruce Eckel is THE genius of pedagogy!
It's so easy to understand the implementation of polymorphism in C++. It contains all that you should know about C++, basic and advanced concepts. Way better than Stroustrup's.
I learnt Java with him too.
And last but not the least:
The C++ one is free !
http://www.mindview.net/Books/TICPP/ThinkingInCPP2e.html
A: Since I'm a C# programmer and most of the generic books have already been mentioned, I'd like to recommend Bill Wagner's book "More Effective C#".
I think most people that develop composite WPF-applications also should have a look at Microsoft's Composite Application Guidance (also known as Prism):
Composite Application Guidance
A: Peter Norton's Assembly Language Book for the IBM PC
I had spent countless nights in front of the pc (DOS), exploring unknown worlds :-D
A: Advanced Programming in the UNIX environment - W. Richard Stevens
A: Pragmatic Programmer
A: Working Effectively with Legacy Code is a really amazing book that goes into great detail about how to properly unit test your code and what the true benefit of it is. It really opened my eyes.
A: Read Head First Design Patterns for a much more accessible introduction than the GoF book. I remember feeling like I'd leveled up after each chapter.
Kent Beck's Test Driven Development by Example for TDD.
A: I'm a big fan of most titles by Robert C. Martin, especially Agile Software Development, Principles, and Practices and Clean Code: A Handbook of Agile Software Craftsmanship.
A: I found "The art of Prolog" a very good read.
A: I think I grew up in a different generation than most here....
One of the most influential books I read was APUE.
Or pretty much anything by W. Richard Stevens.
A: Roger S. Pressman - Software Engineering (A Practitioner's Approach). It has a lot of useful information.
A: It's a toss up between Head First Design Patterns, for many of the reasons cited above, and Perl Testing: A Developer's Notebook, which should be one of the bibles for any Perl programmer wanting to write maintainable code.
A: Win32 Programming by Charles Petzold
A: I suppose we could ask the same top rated question every couple of weeks and upmod all those who mention code complete or The Pragmatic Programmer.
Not that there is anything wrong with it :-)
A: "The Design and Evolution of C++" by Bjarne Stroustrup
Besides giving much background on C++, it is also a lengthy study on the trade-offs and design concerns involved in a large scale program.
BN.com
A: While not strictly a software development book, I would highly recommend that Don't Make me Think! be considered in this list.
A: Expert C Programming: Deep C Secrets by Peter Van Der Linden
A: My high school math teacher lent me a copy of Are Your Lights On? (How to Figure Out What the Problem Really Is) that I have re-read many times. It has been invaluable, as a developer, and in life generally.
A: The question is, "What book really made an impact of how you work as a developer?" Without any doubt, Programming Windows with MFC, by Jeff Prosise, is the book that had the greatest impact on HOW I work as a developer. It did not teach me the fundamentals of "programming" but it opened the world of Windows platform development to me and many thousands of other developers.
I had written a little Windows code previously in the "Petzold style" before MFC was developed. I quickly decided the Windows platform was just not worth the trouble as a developer. When Prosise came out with his MFC book, I realized (along with thousands of other non-Windows programmers) that I could create an easy-to-use interface that users would not just understand, but actually enjoy using. I devoured the book, making so many notes in it and turning down so many corners, I eventually bought a second copy.
Prosise, Jeff. Programming Windows with MFC 2nd Ed.
Microsoft Press 1999
ISBN: 1-57231-695-0
A: Domain Driven Design by Eric Evans
A: Amiga ROM Kernel Manuals :)
A: This might not count as a "development book" but I have to throw it in anyway: Hackers by Stephen Levy. I found that it spoke to the emotional side of programming.
A: Separately, I'd mention The Third Manifesto by Hugh Darwen and CJ Date. If you're interested in understanding data (which seems uncommon among programmers) this book is a must-read. It will also make you sad when you realize just how badly broken SQL is, but it'll also help you cope with that brokenness. Knowing how a tool is broken lets you design with those deficits in mind.
A: Another book that has not been mentioned yet, and SHOULD be required reading for EVERY programmer, newbies on up to gurus, in ANY programming language, is Michael Howard's Writing Secure Code (2nd Edition) from MSPress.
A: As so many people have listed Head First Design Patterns, which I agree is a very good book, I would like to see if so many people aware of a title called Design Patterns Explained: A New Perspective on Object-Oriented Design.
This title deals with design patterns excellently. The first half of the book is very accessible, and the remaining chapters require only a firm grasp of the content already covered. The reason I feel the second half of the book is less accessible is that it covers patterns that I, as a young developer admittedly lacking in experience, have not used much.
This title also introduces the concept behind design patterns, covering Christopher Alexander's initial work in architecture to the GoF first implementing documenting patterns in SmallTalk.
I think that anyone who enjoyed Head First Design Patterns but still finds the GoF very dry, should look into Design Patterns Explained as a much more readable (although not quite as comprehensive) alternative.
A: Craig Larman's Applying UML and Patterns. While the Gang of Four book Design Patterns is very instructive, I found that I didn't "get" how to use design patterns until I ran across Larman's book in a programming class.
A: Advanced MS-DOS by Ray Duncan.
A: For low-level entertainment I would suggest Michael Abrash's
i) Zen of Code Optimization and
ii) Graphics Programming Black Book,
even if you don't do any graphics programming.
A: I would say that "Beyond Code - Learn to Distinguish Yourself in 9 Simple Steps" is quite a good and motivational book. It doesn't cover technical issues, but it describes ways of working with people, being professional, ... For me, this is a book you can read again and again if you are in need of some pep talk. Besides that, it is cheap and very easy and enjoyable to read in 3 to 4 hours.
There is a little review over at my blog.
A: I saw a review of Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools on a blog talking also about XI-Factory; I read it and I must say this book is a must read. Although not specifically targeted at programmers, it explains very clearly what is happening in the programming world right now with Model-Driven Architecture and so on.
A: I'm now reading Agile Software Development, Principles, Patterns and Practices. For those interested in XP and object-oriented design, this is a classic read.
A: Solid Code: Optimizing the Software Development Life Cycle
Although the book is only 300 pages and favors Microsoft technologies, it still offers some good language-agnostic tidbits.
A: Domain Driven Design By Eric Evans is a wonderful book!
A: What happened to 'Expert C Programming - Deep C Secrets' by Peter Van Der Linden - a classic and enjoyable read. I should have read it immediately after learning C years ago, but only got to it about three years in! A recommended book which answers the most common SO questions on pointers (a favourite subject of mine). Live it, eat it, breathe it! 10/10!
A: What Every Programmer Should Know About Memory
by Ulrich Drepper - explains the structure of modern memory subsystems and suggests how to utilize them efficiently.
PS: Sorry If I am double posting.
A: 97 Things Every Programmer Should Know
This book pools together the collective experiences of some of the world's best programmers. It is a must read.
A: Steve Macguire's Writing Solid Code
A: In the beginning was the command line. Neal Stephenson.
A: *
*Code Complete (2nd edition) by Steve McConnell
*The Pragmatic Programmer
*Structure and Interpretation of Computer Programs
*The C Programming Language by Kernighan and Ritchie
*Introduction to Algorithms by Cormen, Leiserson, Rivest & Stein
*Design Patterns by the Gang of Four
*Refactoring: Improving the Design of Existing Code
*The Mythical Man Month
*The Art of Computer Programming by Donald Knuth
*Compilers: Principles, Techniques and Tools by Alfred V. Aho, Ravi Sethi and Jeffrey D. Ullman
*Gödel, Escher, Bach by Douglas Hofstadter
*Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin
*Effective C++
*More Effective C++
*CODE by Charles Petzold
*Programming Pearls by Jon Bentley
*Working Effectively with Legacy Code by Michael C. Feathers
*Peopleware by Demarco and Lister
*Coders at Work by Peter Seibel
*Surely You're Joking, Mr. Feynman!
*Effective Java 2nd edition
*Patterns of Enterprise Application Architecture by Martin Fowler
*The Little Schemer
*The Seasoned Schemer
*Why's (Poignant) Guide to Ruby
*The Inmates Are Running The Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity
*The Art of Unix Programming
*Test-Driven Development: By Example by Kent Beck
*Practices of an Agile Developer
*Don't Make Me Think
*Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin
*Domain Driven Designs by Eric Evans
*The Design of Everyday Things by Donald Norman
*Modern C++ Design by Andrei Alexandrescu
*Best Software Writing I by Joel Spolsky
*The Practice of Programming by Kernighan and Pike
*Pragmatic Thinking and Learning: Refactor Your Wetware by Andy Hunt
*Software Estimation: Demystifying the Black Art by Steve McConnell
*The Passionate Programmer (My Job Went To India) by Chad Fowler
*Hackers: Heroes of the Computer Revolution
*Algorithms + Data Structures = Programs
*Writing Solid Code
*JavaScript - The Good Parts
*Getting Real by 37 Signals
*Foundations of Programming by Karl Seguin
*Computer Graphics: Principles and Practice in C (2nd Edition)
*Thinking in Java by Bruce Eckel
*The Elements of Computing Systems
*Refactoring to Patterns by Joshua Kerievsky
*Modern Operating Systems by Andrew S. Tanenbaum
*The Annotated Turing
*Things That Make Us Smart by Donald Norman
*The Timeless Way of Building by Christopher Alexander
*The Deadline: A Novel About Project Management by Tom DeMarco
*The C++ Programming Language (3rd edition) by Stroustrup
*Patterns of Enterprise Application Architecture
*Computer Systems - A Programmer's Perspective
*Agile Principles, Patterns, and Practices in C# by Robert C. Martin
*Growing Object-Oriented Software, Guided by Tests
*Framework Design Guidelines by Brad Abrams
*Object Thinking by Dr. David West
*Advanced Programming in the UNIX Environment by W. Richard Stevens
*Hackers and Painters: Big Ideas from the Computer Age
*The Soul of a New Machine by Tracy Kidder
*CLR via C# by Jeffrey Richter
*The Timeless Way of Building by Christopher Alexander
*Design Patterns in C# by Steve Metsker
*Alice in Wonderland by Lewis Carroll
*Zen and the Art of Motorcycle Maintenance by Robert M. Pirsig
*About Face - The Essentials of Interaction Design
*Here Comes Everybody: The Power of Organizing Without Organizations by Clay Shirky
*The Tao of Programming
*Computational Beauty of Nature
*Writing Solid Code by Steve Maguire
*Philip and Alex's Guide to Web Publishing
*Object-Oriented Analysis and Design with Applications by Grady Booch
*Effective Java by Joshua Bloch
*Computability by N. J. Cutland
*Masterminds of Programming
*The Tao Te Ching
*The Productive Programmer
*The Art of Deception by Kevin Mitnick
*The Career Programmer: Guerilla Tactics for an Imperfect World by Christopher Duncan
*Paradigms of Artificial Intelligence Programming: Case studies in Common Lisp
*Masters of Doom
*Pragmatic Unit Testing in C# with NUnit by Andy Hunt and Dave Thomas with Matt Hargett
*How To Solve It by George Polya
*The Alchemist by Paulo Coelho
*Smalltalk-80: The Language and its Implementation
*Writing Secure Code (2nd Edition) by Michael Howard
*Introduction to Functional Programming by Philip Wadler and Richard Bird
*No Bugs! by David Thielen
*Rework by Jason Freid and DHH
*JUnit in Action
A: Implementation Patterns by Kent Beck.
You can learn how to communicate with people through your code.
A: Deitel and Deitel, "C++: How to Program"
XUnit Test Patterns
A: Code is Law - you are doing all this writing, editing, and thinking in [language of your choice] but WHY? What does your code MEAN? What will it actually DO?
(I could have recommended a book on QA, but I didn't...)
A: Pro Spring is a superb introduction to the world of Inversion of Control and Dependency Injection. If you're not aware of these practices and their implications - the balance of topics and technical detail in Pro Spring is excellent. It builds a great case and consequent personal foundation.
Another book I'd suggest would be Robert Martin's Agile Software Development (ASD). Code smells, agile techniques, test driven dev, principles ... a well-written balance of many different programming facets.
More traditional classics would include the infamous GoF Design Patterns, Bertrand Meyer's Object Oriented Software Construction, Booch's Object Oriented Analysis and Design, Scott Meyer's "Effective C++'" series and a lesser known book I enjoyed by Gunderloy, Coder to Developer.
And while books are nice ... don't forget radio!
... let me add one more thing. If you haven't already discovered safari - take a look. It is more addictive than stack overflow :-) I've found that with my google type habits - I need the more expensive subscription so I can look at any book at any time - but I'd recommend the trial to anyone even remotely interested.
(ah yes, a little obj-C today, cocoa tomorrow, patterns? soa? what was that example in that cookbook? What did Steve say in the second edition? Should I buy this book? ... a subscription like this is great if you'd like some continuity and context to what you're googling ...)
A: Here are two I haven't seen mentioned:
I wish I had read "Ruminations on C++" by Koenig and Moo much sooner. That was the book that made OO concepts really click for me.
And I recommend Michael Abrash's "Zen of Code Optimization" for anyone else planning on starting a programming career in the mid 90s.
A: Modern C++ Design by Andrei Alexandrescu
A: Writing Solid Code by Steve Maguire.
A: "Object-Oriented Analysis and Design with Applications" by Grady Booch. I read this a long time ago and it showed me that there could be a methodology to developing Object Oriented Software. Since then many other books have had an impact on me but this one got me started.
A: Mine is Test Driven Development by Example
A: Learning C# 2005, by Jesse Liberty & Brian MacDonald (O'Reilly).
ISBN 10: 0-596-10209-7.
When I first made the jump from ASP classic procedural code to object-oriented C# code in VS2005, this book set me on the right path.
A: Software Tools by Brian W. Kernighan and P. J. Plauger by a wide margin had the most effect on me.
A: Inside the C++ Object Model by Stan Lippman. It made C++ finally "click" for me, before it was all "magic". This book gave me a different frame of mind when approaching a new programming language.
A: Literate Programming by Donald Knuth, it's a great book on code structure.
A: The Productive Programmer by Ford
I'm not quite through this one yet, but I'm already thrilled by some of the tips/tricks I've picked up to become more...well...productive.
Sure, there's plenty of the stuff we all already know (use the keyboard shortcuts, DRY, etc). But there's plenty of new stuff to go with it. And careful readers will quickly start to see how things can be combined for even greater effect.
A: Object Oriented Analysis and Design - by Grady Booch
A: "Thinking in C++" by Bruce Eckel
A: Donald Norman, 'The Design of Everyday Things'
Not about programming, per se, but about how things in the world should work -- kind of the psychology of usability.
It's been invaluable for me in designing both end-user interfaces and APIs.
A: Inside the C++ object model by Stanley Lippman
A: How to think like a computer scientist: learning with python
It may not be the most advanced book in the world, but it made me understand programming concepts that I couldn't before, especially object-oriented topics.
A: Agile Software Development with Scrum by Ken Schwaber and Mike Beedle.
I used this book as the starting point to understanding Agile development.
A: The Pragmatic Programmer was pretty good. However, one that really made an impact when I was starting out was:
Windows 95 System Programming Secrets
I know - it sounds and looks a bit cheesy on the outside and has probably dated a bit - but this was an awesome explanation of the internals of Win95 based on the author's (Matt Pietrek) investigations using his own tools - the code for which came with the book. Bear in mind this was before the whole open source thing and Microsoft was still pretty cagey about releasing documentation of internals - let alone source.
There was some quote in there like "If you are working through some problem and hit some sticking point then you need to stop and really look deeply into that piece and really understand how it works". I've found this to be pretty good advice - particularly these days when you often have the source for a library and can go take a look.
It also inspired me to enjoy diving into the internals of how systems work, something that has proven invaluable over the course of my career.
Oh and I'd also throw in effective .net - great internals explanation of .Net from Don Box.
A: In recent years it has been 'The C++ Standard Library' by 'Nicolai M. Josuttis'. It's my bible.
A: The first book that made a real impact on me was Mastering Turbo Assembler by Tom Swan.
Other books that have had an impact was Just For Fun by Linus Torvalds and David Diamond and of course The Pragmatic Programmer by Andrew Hunt and David Thomas.
A: If you are doing anything in Unix/Linux/MacOS etc, you must read Advanced Programming in the Unix Environment (also known by the acronym APUE), by the late W Richard Stevens. If you don't know how file descriptors work or what sessions are, or all the things you should do when you daemonize yourself (admit it, you don't), then this book will tell you.
You'll feel amateurish for a bit afterwards, but if you want to consider yourself a professional programmer (in any language) in the Unix environment you need to read this.
A: Even though I had been programming professionally for years, Rocky Lhotka's "Business Objects" series about his CSLA framework was the book that opened my eyes.
His ideas got me excited about software development patterns and theory again. It set me on the path of a new interest in learning how to be a better developer, and not just learning about the latest gee-whiz control or library. (Don't get me wrong, I still love a good technical book too - you gotta keep up!)
A: I found the The Algorithm Design Manual to be a very beneficial read. I also highly recommend Programming Pearls.
A: "The Fortran Coloring Book" by Dr. Roger Kaufman (1978, ISBN:0262610264)
What a silly concept - more basic than even a "Dummies" book! But it works for any language (with a few fortran specific examples of course), explaining the basic concepts of logic, variables, i/o, etc. in a very understandable and "Painfully Funny" way.
It's enough to get a ten year old interested in programming...
(Found cover photo on a Flickr user account)
A: Recommended for Windows programmers: Programming Windows
A: Anything by Edward Tufte: The Visual Display of Quantitative Information; Envisioning Information; Visual Explanations
A: OK, so the question is not "what's the best programming book", but "if you could tell yourself what to read in the beginning of your career"...
Probably one of "On Lisp" and SICP, plus one of CLRS or "Algorithms: a creative approach" by Udi Manber.
The first two will teach lots of programming techniques, patterns, and really open up one's mind to his/her own creativity; the other two are different. They're more theoretical, but also very important, focusing on design of correct and efficient algorithms (and requiring substantially more math).
I see lots of people recommending the three first books when the subject of "good programming books" pops up, but the last one (by Manber) is a great book, and few people know it. It's a shame! Manber focuses on the incremental development of algorithms through theorem proving using induction.
A: If you write code in C then Expert C Programming is an eye opener. It has answers to all the things you wondered why it works this way. Peter Van Der Linden has a great writing style and makes arcane concepts very readable. A must read for all C developers
A: Fortran IV with Watfor and Watfiv by Cress, Dirkson and Graham.
This book taught me my first programming language that I programmed onto punch cards at the time. After 3 years, the book was all tatters because I had used it so much.
Fortran was a great language! It had a super optimizer and produced very fast code. It is still very popular in Great Britain and FTN95 is now a very full-featured and capable compiler. I sometimes wish I could have continued to use it, but Delphi is a more than adequate replacement.
A: Graphics Programming in Windows is difficult to fault.
A: Etudes for Programmers by Charles Wetherell, More Programming Pearls (Jon Bently),
A: The Scelbi-Byte Primer
I pored over the source code listings in this book many times until, one day, I suddenly grokked 8080 assembly language programming.
A: *
*Game Architecture and Design: Learn the Best Practices for Game Design and Programming
Even though I've never programmed a game, this book helped me understand a lot of things in a fun way.
A: *
*Professional JSP 2nd Edition
I bought this when I was a complete newbie, and it took me from only knowing that Java existed to being a reliable team member in a short time.
A: Still a worthwhile classic is the Interface Hall of Shame. This website detailed a huge assortment of interface design faux pas that is quite entertaining. The original iarchitect.com no longer exists, but others have re-established the HOS on their own websites.
A: Object Oriented Design Heuristics is a great read. I couldn't put it down.
A: I'll add a couple that I haven't seen here that are influential for me:
*
*Yourdon and Constantine, "Structured Design". Everything you need to know about software design is in here, if you're willing to dig for it a little.
*Leonard Koren, "Wabi-Sabi: for Artists, Designers, Poets & Philosophers". A pragmatic philosophy balancing beauty and pragmatism.
A: How to Solve It: A new aspect of mathematical method
Although not directly related to computer programming, it does teach you the art of problem solving, and that's what computer programming is all about.
A: An Introduction to GW-BASIC. Without it I never would have learned how to program, and any other books wouldn't have done me any good.
A: Algorithms in C++ was invaluable to me in learning Big O notation and the ins and outs of the various sort algorithms. This was published before Sedgewick decided he could make more money by dividing it into 5 different books.
C++ FAQs is an amazing book that really shows you what you should and shouldn't be doing in C++. The backward compatibility of C++ leaves a lot of landmines about and this book helps one carefully avoid them while at the same time being a good introduction into OO design and intent.
A: It seems most people have already touched on the some very good books. One which really helped me out was Effective C#: 50 Ways to Improve your C#. I'd be remiss if I didn't mention The Tao of Pooh. Philosophy books can be good for the soul, and the code.
A: One I didn't already see on here was xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros. This book really helped me see unit testing from a fresh perspective.
A: I'm late to this question but apparently still have something unique to offer... Software Engineering Economics by Barry Boehm which, to summarize, says that if you want to really improve software productivity get better people since better tools, hardware, languages, methods, etc. will all have a marginal impact. Only better people drive up productivity by significant amounts. I emphasize, this is better engineers, not more engineers!
Not the kind of book you'd take to bed with you, like you might do with Coders At Work but the kind of book that drives home a lesson that our industry has struggled mightily to take to heart. Witness off-shoring, a false economy that Boehm's model predicts will have only a marginal positive effect, if any at all. Check it out.
A: Essential reading for any mentor/team leader/manager or anyone who reports to the aforementioned.
A: This is a must read book for every programmer: Database system concepts by Abraham Silberschatz.
A: This is a very rich and useful compilation, however, I am a bit surprised I have not encountered Andrew S. Tanenbaum among the authors. IMO he is one of the best CS professors, and his genius has to do mainly with his extraordinary ability to make rather difficult material accessible to CS undergraduates. His books (Modern Operating Systems, or Computer Networks might ring a bell) did a wonderful job in providing me with a solid foundation in CS while doing my BS and I highly recommend them.
Some other interesting stuff on Tanenbaum, proving his skills go beyond teaching: he is the author of an OS called MINIX - Linus had his fair share of inspiration from it when implementing Linux; Amoeba - a distributed OS; and Turtle - a free anonymous p2p network.
A: The Art of Game Design - A Book of Lenses by Jesse Schell
Jesse Schell has taught Game Design and led research projects at Carnegie Mellon’s Entertainment Technology Center since 2002.
Nuff said.
PS: Sorry If I am double posting, I couldn't find this book in the answers - either because the title was not exact or there was no image. Let me know and I'll delete it if so.
A: Mr Bunny's Guide to ActiveX
A:
Programmer's Guide to the IBM PC. The Pink Shirt book.
...well, someone had to say it.
A: You.Next(): Move Your Software Development Career to the Leadership Track
~ Michael C. Finley (Author), Honza Fedák (Author)
A: Maverick!: The Success Story Behind the World's Most Unusual Workplace
Will make you realise what a workplace should be like.
A: Refactoring
Patterns of Enterprise Application Architecture
A: Code Craft
A: I have a couple of (rather old) blog posts on this subject
*
*http://www.spindriftpages.net/blog/dave/2005/11/17/c-books/
*http://www.spindriftpages.net/blog/dave/2005/06/06/good-oo-books/
*http://www.spindriftpages.net/blog/dave/2005/05/11/really-great-it-books/
*Although a good book, I found Code Complete to be rather a dull read (a controversial view I admit).
*I like Jeffrey Richter and the books Joel Spolsky has written.
*The Eric Meyer CSS books are really good too
A: SQL for smarties
A: In addition to other people's suggestions, I'd recommend either acquiring a copy of SICP, or reading it online. It's one of the few books that I've read that I feel greatly increased my skill in designing software, particularly in creating good abstraction layers.
A book that is not directly related to programming, but is also a good read for programmers (IMO) is Concrete Mathematics. Most, if not all of the topics in it are useful for programmers to know about, and it does a better job of explaining things than any other math book I've read to date.
A: Agile Software Development by Alistair Cockburn
A: Not a programming book, but still a very important book every programmer should read:
Orbiting the Giant Hairball by Gordon MacKenzie
A: The Interpretation of Object-Oriented Programming Languages by Ian Craig
Because it showed me how much more there was to OO than standard C++/Java idioms
A: Thinking in Java (Patterns) , Bruce Eckel
A: Professional Excel Development
This book showed how to make high quality applications within one of the most ubiquitous programming platforms available.
A: PHP objects, patterns and practice.
http://www.apress.com/book/view/9781590599099
A: 'How to be a Programmer: A Short, Comprehensive, and Personal Summary' by Robert L Read
Not exactly a book but an essay, but this one was definitely an inspiration for me when I got into coding. Loved the notion of entering a tribe. Worth a read.
A: A collection it was, and stunning. Edsger Dijkstra's (with some help from C.A.R. Hoare) little black book Structured Programming, and particularly the essay titled "On Our Inability To Do Much".
A: The C++ Series of programming books by Deitel and Deitel
A: Managing Gigabytes is an instant classic for thinking about the heavy lifting of information.
A: C# for Experienced Programmers
or really anything from Deitel & Deitel. I have read several of their books, and everything has been awesome.
A: Years ago, Bruce Eckel's Thinking in C++ taught me a great deal about C++ but also the importance of isolating an issue to a small 'sandbox' for study/analysis. This technique has greatly impacted my career and routinely helps me troubleshoot problems both for myself and others.
These days, I refer to Thinking in Java, which is written in the same style. Somehow, the style is beyond mere, simple 'examples' and profoundly gets at the heart of the issue.
I am so grateful that I will buy virtually anything by Eckel, sight unseen.
A: When I first started, there was "Mastering Turbo Pascal" by Tom Swan. There is nothing terribly profound about this book. It was clear and concise with usable examples. Based on this knowledge, I spawned a software development career now 15+ years in.
A: C++ BlackBook. KISS all the way through
A: Mastering C++ from Tom Swan. It was the best kind of book, it had examples which were simple enough to teach concepts but useful enough to solve other problems. It was very readable, it was the first book I read when got to college, and it only needed to be read once.
A: Tanenbaum's first operating systems book. My first look at kernel-level programming.
A: "Algorithms in C" (1st edition) by Sedgewick taught me all about algorithms as well as teaching me all about the pitfalls of documentation and copy/pasting code as all the example code in this version was taken from the "Algorithms in Pascal" version and were simply passed through a simple code translator which did not adjust for the different indexing schemes.
A: My all-time favorite was the C# Black Book, by Matthew Telles.
A: Dreaming in Code Has probably had the most profound impact in the last 6 months.
A: "The C++ Programming Language" by Bjarne Stroustrup
A: Actually, two books stand out. The first was Code Complete. Despite its age, this is still a very useful book, and the chapter on the dangers of premature optimisation is worth the price of the book on its own.
The second one was The Psychology of Everyday Things (now called The Design of Everyday Things, I think), which changed the way I think about user interfaces when designing applications. It made me more user-focused.
A: "Writing Solid Code: Microsoft's Techniques for Developing Bug-Free C Programs (Microsoft Programming Series)" by Steve MacGuire.
Interesting what a large proportion of the books mentioned here are C/C++ books.
A: For me "Memory as a Programming Concept in C and C++" really opened my eyes to how memory management really works. If you're a C or C++ developer I consider it a must read. You will definitely learn something or remember things you might have forgotten along the way.
http://www.amazon.com/Memory-Programming-Concept-C/dp/0521520436
A: SAP ABAP programming?
"Teach Yourself ABAP in 21 Days" is the best book!
It contains no clever tricks or wizardry, but after 3 years, I never came upon a more comprehensive book
A: Schaum's Outline of Programming with C++ by John R Hubbard.
This was the first programming book I read, when I started out with C++. It was gifted to me by someone who saw my interest in programming. The book is very good for beginners - it started from the elementary concepts, went up to templates and vectors. The examples given were pretty relevant. The book made you ponder and ask more questions, and try out things for yourself.
A: How to Solve It by Computer - R. G. Dromey
A: Probably "C for Dummies" vol 1, back in 1997 or so. Just an introduction really, but it was a good read after having picked up the taste for programming in GFA Basic on the Atari ST. The Coronado C tutorial around the same time helped too.
A: Michael Abrash The Zen of Assembly Language
A: Applying UML and Design Patterns.
It helped design patterns to click with me, and provided a justification for UML that made sense to me in the phrasing 'UML as Sketch'. Namely that UML should be used as a brief sketch of the system that has the additional benefit of you not having to explain the notation to others (they either already know UML or you give them a UML book to read)
A: The Algorithms book from Robert Sedgewick. A must-read for application developers.
Comes in many flavours (C, C++, Java)
http://www.cs.princeton.edu/~rs/
A: Object-Oriented Programming in Turbo C++. Not super popular, but it was the one that got me started, and was the first book that really helped me grok what an object was. Read this one waaaay back in high school. It sort of brings a tear to my eye...
A: Beginning C# 3.0: An Introduction to Object Oriented Programming
This is the book for those who want to understand the whys and hows of OOP using C# 3.0. You don't want to miss it.
A: Beginning Visual C++
When I first started programming in an OOP language, I found this book not only to be a comprehensive book about C++ and MFC, it also has one of the best explanations of object-oriented concepts I've seen.
When I talk to developers who are just starting out programming in an object oriented language, I tell them to read this book.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1436"
} |
Q: Any experiences with Protocol Buffers? I was just looking through some information about Google's protocol buffers data interchange format. Has anyone played around with the code or even created a project around it?
I'm currently using XML in a Python project for structured content created by hand in a text editor, and I was wondering what the general opinion was on Protocol Buffers as a user-facing input format. The speed and brevity benefits definitely seem to be there, but there are so many factors when it comes to actually generating and processing the data.
A: Another drawback of a binary format like PB is that a single bit error can leave the entire data file unparsable, whereas with JSON or XML, as a last resort, you can still manually fix the error because it is human readable and has redundancy built in.
A: From your brief description, it sounds like protocol buffers is not the right fit. The phrase "structured content created by hand in a text editor" pretty much screams for XML.
But if you want efficient, low latency communications with data structures that are not shared outside your organization, binary serialization such as protocol buffers can offer a huge win.
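For what it's worth, here is a minimal sketch of that round trip in Python. The message, its fields, and the generated person_pb2 module are all hypothetical; you would first describe them in a .proto file and run protoc --python_out=. to generate that module.
# person.proto (hypothetical schema, compiled with: protoc --python_out=. person.proto)
#   message Person {
#     required string name = 1;
#     required int32 id = 2;
#   }
import person_pb2  # module generated by protoc from the schema above

person = person_pb2.Person()
person.name = "Ada"
person.id = 42

data = person.SerializeToString()   # compact binary encoding (a byte string)

parsed = person_pb2.Person()
parsed.ParseFromString(data)        # round-trips the bytes back into an object
print parsed.name, parsed.id
The binary payload is typically much smaller than the equivalent XML, which is where the speed and brevity benefits come from; the trade-off is that you cannot hand-edit it in a text editor.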
A: If you are looking for user facing interaction, stick with xml. It has more support, understanding, and general acceptance currently. If it's internal, I would say that protocol buffers are a great idea.
Maybe in a few years as more tools come out to support protocol buffers, then start looking towards that for a public facing api. Until then... JSON?
A: Protocol buffers are intended to optimize communications between machines. They are really not intended for human interaction. Also, the format is binary, so it could not replace XML in that use case.
I would also recommend JSON as being the most compact text-based format.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
} |
Q: How can I get at the matches when using preg_replace in PHP? I am trying to grab the capital letters of a couple of words and wrap them in span tags. I am using preg_replace for extract and wrapping purposes, but it's not outputting anything.
preg_replace("/[A-Z]/", "<span class=\"initial\">$1</span>", $str)
A: From the preg_replace documentation on php.net:
replacement may contain references of the form \n or (since PHP 4.0.4) $n, with the latter form being the preferred one. Every such reference will be replaced by the text captured by the n'th parenthesized pattern.
See Flubba's example.
A: You need to put the pattern in parentheses /([A-Z])/, like this:
preg_replace("/([A-Z])/", "<span class=\"initial\">$1</span>", $str)
A: Use parentheses around your desired capture.
A: \0 will also match the entire matched expression without doing an explicit capture using parenthesis.
preg_replace("/[A-Z]/", "<span class=\"initial\">\\0</span>", $str)
As always, you can go to php.net/preg_replace or php.net/<whatever search term> to search the documentation quickly. Quoth the documentation:
\0 or $0 refers to the text matched by the whole pattern.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58"
} |
Q: .NET unit testing packages I am getting back into a bit more .NET after a few years of not using it full-time and am wondering what the good unit testing packages are these days.
I'm familiar with NUnit (a few years ago) and have played briefly around with IronRuby, with the goal of getting something like RSpec going, but I don't know much beyond that.
I realize I could google for this and call it a day, but I believe I'm likely to get a better and more informed response from asking a question here :-)
Suggestions?
A: We use NUnit and MbUnit here. We use TestDriven.NET to run the unit tests from within Visual Studio. We use the excellent, highly recommended RhinoMocks as a mock framework.
A: Stick to NUnit. Don't go anywhere near MSTest.
NUnit + ReSharper is an absolute joy to work with.
A: xUnit.net looks like it provides a slightly different approach to NUnit, MbUnit, and MSTest, which is interesting.
In my search for an RSpec-like solution (because I love the RSpec), I also came across NSpec, which looks a bit wordy, but combined with the NSpec Extensions addon to use C# 3 extension methods, it looks pretty nice.
A: I used to use NUnit, but now tend to use MbUnit, for two key features:
1. The RowTest feature allows you to easily run the same test on different sets of parameters, which is important if you really want thorough coverage.
2. The Rollback feature allows you to run tests against your database while rolling back changes after every test, keeping your database in exactly the same state every time. And it's as easy as adding the [Rollback] attribute.
Another nice aspect of MbUnit is that its syntax is nearly identical to NUnit, so if you have a whole test bed already in place under NUnit, you can just switch out the references without the need to change any (very much?) code.
A: There are so many it's crazy. Crazy good, I guess.
*
*For the conservative types (me), NUnit is still available and still more than capable.
*For the Microsoft-types, MSTest is adequate, but it is slow and clunky compared to NUnit. It also lacks code coverage without paying the big bucks for the pricey versions of Visual Studio.
*There's also MbUnit. It's like NUnit, but it has nifty features like RowTest (run the same test with different parameters) and Rollback (put the database back like you found it after a test).
*And finally, xUnit.net is the trendy option with some attitude.
*Oh, and TestDriven.NET will give you IDE integration for both NUnit and MbUnit.
I'm sure they're all just fine. I'd steer away from MSTest though, unless you just enjoy the convenience of having everything in one IDE out of the box.
Scott Hanselman has a podcast on this very topic.
A: I use the following:
TestDriven.NET - Unit testing add on for Visual Studio
Typemock Isolator- Mocking framework for .NET unit testing
NUnit - An open source unit testing framework that is in C#.
A: You might find it interesting that Gallio v3.1 now supports RSpec via IronRuby.
A: I like TestDriven.NET (even though I use ReSharper) and I'm pretty happy with XUnit.net. It uses Facts instead of Tests which many people dislike but I like the difference in terminology. It's useful to think of a collection of automatically provable Facts about your software and see which ones you violate when you make a change.
Be aware that Visual Studio 2008 Professional (and above) now comes with integrated Unit Testing (it used to be available only with the Team System Editions) and may be suitable for your needs.
A: I used to use NUnit, but I switched to MbUnit since it has more features.
I love RowTest. It lets you parametrize your tests. NUnit does have a little bit better tool support though. I am using ReSharper to run MbUnit tests. I've had problems with TestDriven.NET running my SetUp methods for MbUnit.
A: NUnit, MSTest, etc. all do pretty much the same thing. However, I find NMock indispensable.
NMock or any mocking package is not unit testing, but it makes it so much easier to do unit testing that it might as well be.
A: I like MbUnit, er, Gallio. Most importantly to me is having good tools support inside Visual Studio. For that I use Resharper, which has an MbUnit test runner. A lot of folks seem to like TestDriven.NET as their test runner as well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63"
} |
Q: Why is my ternary expression not working? I am trying to set a flag to show or hide a page element, but it always displays even when the expression is false.
$canMerge = ($condition1 && $condition2) ? 'true' : 'false';
...
<?php if ($canMerge) { ?>Stuff<?php } ?>
What's up?
A: This is broken because 'false' as a string will evaluate to true as a boolean.
However, this is an unneeded ternary expression, because the resulting values are simple true and false. This would be equivalent:
$canMerge = ($condition1 && $condition2);
A: The value of 'false' is true. You need to remove the quotes:
$canMerge = ($condition1 && $condition2) ? true : false;
A: Seems to me a reasonable question especially because of the discrepancy in the way PHP works.
For instance, the following code will output 'its false'
$a = '0';
if($a)
{
echo 'its true';
}
else
{
echo 'its false';
}
A: You are using 'true' and 'false' as strings. Using a non-empty string other than '0' as a condition will cause the condition to evaluate to true (only the empty string '' and the string '0' are treated as false).
Here are some correct conditions that could be used:
$canMerge = ($condition1 && $condition2);
$canMerge = ($condition1 && $condition2) ? true : false;
$canMerge = ($condition1 && $condition2) ? 'true' : 'false';
...
<?php if ($canMerge == 'true') { ?>Stuff<?php } ?>
A: $canMerge = ($condition1 && $condition2);
then
if ($canMerge){
echo "Stuff";
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: Federated (Synced) Subversion servers? Is it possible to create "federated" Subversion servers?
As in one server at location A and another at location B that sync up their local versions of the repository automatically. That way when someone at either location interacts with the repository they are accessing their respective local server and therefore has faster response times.
A: Subversion 1.5 introduced write-through proxy support for WebDAV servers on top of the svnsync support that was added in 1.4. This allows you to have local mirrors for retrieving files and history, while commits go directly to the master repository. If set up correctly, the local mirrors receive the changes immediately.
See the Svn Book for more details.
A: This is more or less the perfect use case for SVK. SVK is a command line front end for subversion that works with an entire local copy of the repository. So your commits, updates, etc. work on the local repository and you can then sync with a master. I would generally recommend SVK over plain subversion anyway as it makes a lot of things nicer. No .svn folders, better branching and merging, better conflict resolution.
A: Sounds like you might like Git. There's a Google Talk explaining all about it.
A: It's probably not exactly what you're looking for, but you may be able to implement OS-level clustering.
A: There are different ways to implement replication of SVN repositories without using external tools such as SVK (which appears to be abandoned today). The replication of Subversion repositories and servers helps to address the challenges of distributed workflow, and brings performance improvements for geographically distributed teams.
Subversion supports write-through proxying based on hook scripts and the svnsync tool. And if you use VisualSVN Server you can use VisualSVN Distributed File System (VDFS) that has several major benefits over svnsync. For example, VDFS is much faster than svnsync and fully supports locking.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: How do I make a menu that does not require the user to press [enter] to make a selection? I've got a menu in Python. That part was easy. I'm using raw_input() to get the selection from the user.
The problem is that raw_input (and input) require the user to press Enter after they make a selection. Is there any way to make the program act immediately upon a keystroke? Here's what I've got so far:
import sys
print """Menu
1) Say Foo
2) Say Bar"""
answer = raw_input("Make a selection> ")
if "1" in answer: print "foo"
elif "2" in answer: print "bar"
It would be great to have something like
print menu
while lastKey = "":
lastKey = check_for_recent_keystrokes()
if "1" in lastKey: #do stuff...
A: On Linux:
*
*set raw mode
*select and read the keystroke
*restore normal settings
import sys
import select
import termios
import tty
def getkey():
old_settings = termios.tcgetattr(sys.stdin)
tty.setraw(sys.stdin.fileno())
select.select([sys.stdin], [], [], 0)
answer = sys.stdin.read(1)
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings)
return answer
print """Menu
1) Say Foo
2) Say Bar"""
answer=getkey()
if "1" in answer: print "foo"
elif "2" in answer: print "bar"
A: Wow, that took forever. Ok, here's what I've ended up with
#!C:\python25\python.exe
import msvcrt
print """Menu
1) Say Foo
2) Say Bar"""
while 1:
char = msvcrt.getch()
if char == chr(27): #escape
break
if char == "1":
print "foo"
break
if char == "2":
print "Bar"
break
It fails hard using IDLE, the python...thing...that comes with python. But once I tried it in DOS (er, CMD.exe), as a real program, then it ran fine.
No one try it in IDLE, unless you have Task Manager handy.
I've already forgotten how I lived with menus that aren't super-instant responsive.
A: On Windows:
import msvcrt
answer=msvcrt.getch()
A: The reason msvcrt fails in IDLE is that IDLE does not run your program in the console that msvcrt reads from. When you run the program natively in cmd.exe it works nicely. The program also blows up on Mac and Linux terminals, because msvcrt is a Windows-only module.
But I guess if you're going to be using this specifically for windows, more power to ya.
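If you want one menu that works on both Windows and Unix-like terminals, here is a rough sketch that simply picks whichever of the two approaches above is available. It is untested glue, and like the answers above it still won't behave inside IDLE:
import sys

def getch():
    # Read a single keystroke without waiting for Enter.
    try:
        import msvcrt                  # Windows console
        return msvcrt.getch()
    except ImportError:
        import termios, tty            # Unix-like terminals
        fd = sys.stdin.fileno()
        old_settings = termios.tcgetattr(fd)
        try:
            tty.setraw(fd)
            return sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)

print """Menu
1) Say Foo
2) Say Bar"""
answer = getch()
if "1" in answer: print "foo"
elif "2" in answer: print "bar"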
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Wiggling the mouse OK. This is a bit of a vanity app, but I had a situation today at work where I was in a training class and the machine was set to lock every 10 minutes. Well, if the trainers got excited about talking - as opposed to changing slides - the machine would lock up.
I'd like to write a teeny app that has nothing but a taskbar icon that does nothing but move the mouse by 1 pixel every 4 minutes.
I can do that in 3 ways with Delphi (my strong language) but I'm moving to C# for work and I'd like to know the path of least resistance there.
A: Something like this should work (though, you will want to change the interval).
public Form1()
{
InitializeComponent();
Timer Every4Minutes = new Timer();
Every4Minutes.Interval = 10;
Every4Minutes.Tick += new EventHandler(MoveNow);
Every4Minutes.Start();
}
void MoveNow(object sender, EventArgs e)
{
Cursor.Position = new Point(Cursor.Position.X - 1, Cursor.Position.Y - 1);
}
A: for C# 3.5
without a NotifyIcon, so you will need to terminate this application manually in Task Manager
using System;
using System.Drawing;
using System.Windows.Forms;
static class Program
{
static void Main()
{
Timer timer = new Timer();
// timer.Interval = 4 minutes
timer.Interval = (int)(TimeSpan.TicksPerMinute * 4 / TimeSpan.TicksPerMillisecond);
timer.Tick += (sender, args) => { Cursor.Position = new Point(Cursor.Position.X + 1, Cursor.Position.Y + 1); };
timer.Start();
Application.Run();
}
}
A: The "correct" way to do this is to respond to the WM_SYSCOMMAND message. In C# this looks something like this:
protected override void WndProc(ref Message m)
{
// Abort screensaver and monitor power-down
const int WM_SYSCOMMAND = 0x0112;
const int SC_MONITOR_POWER = 0xF170;
const int SC_SCREENSAVE = 0xF140;
int WParam = (m.WParam.ToInt32() & 0xFFF0);
if (m.Msg == WM_SYSCOMMAND &&
(WParam == SC_MONITOR_POWER || WParam == SC_SCREENSAVE)) return;
base.WndProc(ref m);
}
According to MSDN, if the screensaver password is enabled by policy on Vista or above, this won't work. Presumably programmatically moving the mouse is also ignored, though I have not tested this.
A: When I work from home, I do this by tying the mouse cord to a desktop fan which oscillates left to right. It keeps the mouse moving and keeps the workstation from going to sleep.
A: (Windows 10 / .Net 5 / C# 9.0)
Instead of faking activity, you could inform the system that it is in use, thereby preventing the system from entering sleep or turning off the display while the application is running, using SetThreadExecutionState, as described on PInvoke.net:
using System;
using System.Runtime.InteropServices;
using System.Threading;
namespace VanityApp
{
internal static class Program
{
[DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
private static extern ExecutionState SetThreadExecutionState(ExecutionState esFlags);
[Flags]
private enum ExecutionState : uint
{
ES_AWAYMODE_REQUIRED = 0x00000040,
ES_CONTINUOUS = 0x80000000,
ES_DISPLAY_REQUIRED = 0x00000002,
ES_SYSTEM_REQUIRED = 0x00000001
}
private static void Main()
{
using AutoResetEvent autoResetEvent = new AutoResetEvent(false);
using Timer timer = new Timer(state => SetThreadExecutionState(ExecutionState.ES_AWAYMODE_REQUIRED | ExecutionState.ES_CONTINUOUS | ExecutionState.ES_DISPLAY_REQUIRED | ExecutionState.ES_SYSTEM_REQUIRED), autoResetEvent, 0, -1);
autoResetEvent.WaitOne();
}
}
}
The Timer is a System.Threading.Timer, with its handy constructor, and it uses AutoResetEvent.WaitOne() to avoid exiting immediately.
A: Raf provided a graceful answer to the problem for Win10 world, but unfortunately, his autoResetEvent.WaitOne() instruction blocks the thread, and therefore it must be in a separate thread of its own.
What worked for me can actually run in the main thread, the code doesn't have to be placed in the Main() method, and you can actually have a button to enable this functionality and one to disable it.
First, you certainly need to define the execution state flags:
[Flags]
private enum ExecutionState : uint // options to control monitor behavior
{
ES_AWAYMODE_REQUIRED = 0x00000040, // prevent idle-to-sleep
ES_CONTINUOUS = 0x80000000, // allow monitor power down
ES_DISPLAY_REQUIRED = 0x00000002, // prevent monitor power down
ES_SYSTEM_REQUIRED = 0x00000001 // keep system awake
}
Now, whenever you want to keep your system awake and block your monitor from turning off or idling to sleep, all you need to do, is execute a single command:
SetThreadExecutionState(ExecutionState.ES_AWAYMODE_REQUIRED | ExecutionState.ES_CONTINUOUS | ExecutionState.ES_DISPLAY_REQUIRED | ExecutionState.ES_SYSTEM_REQUIRED);
Then, if you want to undo this action and return your system back to its original execution state, just issue the following command:
SetThreadExecutionState(ExecutionState.ES_CONTINUOUS);
Keep in mind, each command will return the previous execution state, which means, when you first alter this state, you can cache the returned value locally and use it if/when you want to restore the previous state.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
} |
Q: Locating Text within image I am currently working on a project and my goal is to locate text in an image. OCR'ing the text is not my intention as of yet. I want to basically obtain the bounds of text within an image. I am using the AForge.Net imaging component for manipulation. Any assistance in some sense or another?
Update 2/5/09:
I've since went along another route in my project. However I did attempt to obtain text using MODI (Microsoft Office Document Imaging). It allows you to OCR an image and pull text from it with some ease.
A: Recognizing text inside an image is indeed a hot topic for researchers in that field, but it only began to grow out of control when captchas became the "norm" as a defense against spam bots. Why use captchas as protection? Well, because it is/was very hard to locate (and read) text inside an image!
The reason I mention captchas is that the most advancement* is made within that tiny area, and I think your solution could best be found there.
Especially because captchas are indeed about locating text (or something that resembles text) inside a cluttered image and afterwards trying to read the letters correctly.
So if you can find yourself a good open source captcha-breaking tool, you probably have all you need to continue your quest...
You could probably even throw away the most difficult code that handles the character recognition itself, because those OCRs are used to read distorted text, something you don't have to do.
*: advancement in terms of visible, usable, and practical information for a "non-researcher"
A: This is an active area of research. There are literally oodles of academic papers on the subject. It's going to be difficult to give you assistance, especially without more details. Are you looking for specific types of text? Fonts? English-only? Are you familiar with the academic literature?
"Text detection" is a standard problem in any OCR (optical character recognition) system and consequently there are lots of bits of code on the interwebs that deal with it.
I could start listing piles of links from google but I suggest you just do a search for "text detection" and start reading :). There is ample example code available as well.
A: If you're ok with using an online API for this, the API at http://www.wisetrend.com/wisetrend_ocr_cloud.shtml can do text detection in addition to just OCR.
A: Stroke width transform can do that for you. That's at least what MS developed for their mobile phone OS. A discussion on the implementation is here at https://stackoverflow.com/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: How to identify which OS Python is running on? What do I need to look at to see whether I'm on Windows or Unix, etc?
A: /usr/bin/python3.2
def cls():
from subprocess import call
from platform import system
os = system()
if os == 'Linux':
call('clear', shell = True)
elif os == 'Windows':
call('cls', shell = True)
A: For Jython the only way I found to get the OS name is to check the os.name Java property (tried with the sys, os and platform modules for Jython 2.5.3 on WinXP):
def get_os_platform():
"""return platform name, but for Jython it uses os.name Java property"""
ver = sys.platform.lower()
if ver.startswith('java'):
import java.lang
ver = java.lang.System.getProperty("os.name").lower()
print('platform: %s' % (ver))
return ver
A: Interesting results on windows 8:
>>> import os
>>> os.name
'nt'
>>> import platform
>>> platform.system()
'Windows'
>>> platform.release()
'post2008Server'
Edit: That's a bug
A: Watch out if you're on Windows with Cygwin where os.name is posix.
>>> import os, platform
>>> print os.name
posix
>>> print platform.system()
CYGWIN_NT-6.3-WOW
A: I know this is an old question but I believe that my answer is one that might be helpful to some people who are looking for an easy, simple to understand pythonic way to detect OS in their code. Tested on python3.7
from sys import platform
class UnsupportedPlatform(Exception):
pass
if "linux" in platform:
print("linux")
elif "darwin" in platform:
print("mac")
elif "win" in platform:
print("windows")
else:
raise UnsupportedPlatform
A: I started a bit more systematic listing of what values you can expect using the various modules (feel free to edit and add your system):
Linux (64bit) + WSL
x86_64 aarch64
------ -------
os.name posix posix
sys.platform linux linux
platform.system() Linux Linux
sysconfig.get_platform() linux-x86_64 linux-aarch64
platform.machine() x86_64 aarch64
platform.architecture() ('64bit', '') ('64bit', 'ELF')
*
*tried with archlinux and mint, got same results
*on python2 sys.platform is suffixed by kernel version, e.g. linux2, everything else stays identical
*same output on Windows Subsystem for Linux (tried with ubuntu 18.04 LTS), except platform.architecture() = ('64bit', 'ELF')
WINDOWS (64bit)
(with 32bit column running in the 32bit subsystem)
official python installer 64bit 32bit
------------------------- ----- -----
os.name nt nt
sys.platform win32 win32
platform.system() Windows Windows
sysconfig.get_platform() win-amd64 win32
platform.machine() AMD64 AMD64
platform.architecture() ('64bit', 'WindowsPE') ('64bit', 'WindowsPE')
msys2 64bit 32bit
----- ----- -----
os.name posix posix
sys.platform msys msys
platform.system() MSYS_NT-10.0 MSYS_NT-10.0-WOW
sysconfig.get_platform() msys-2.11.2-x86_64 msys-2.11.2-i686
platform.machine() x86_64 i686
platform.architecture() ('64bit', 'WindowsPE') ('32bit', 'WindowsPE')
msys2 mingw-w64-x86_64-python3 mingw-w64-i686-python3
----- ------------------------ ----------------------
os.name nt nt
sys.platform win32 win32
platform.system() Windows Windows
sysconfig.get_platform() mingw mingw
platform.machine() AMD64 AMD64
platform.architecture() ('64bit', 'WindowsPE') ('32bit', 'WindowsPE')
cygwin 64bit 32bit
------ ----- -----
os.name posix posix
sys.platform cygwin cygwin
platform.system() CYGWIN_NT-10.0 CYGWIN_NT-10.0-WOW
sysconfig.get_platform() cygwin-3.0.1-x86_64 cygwin-3.0.1-i686
platform.machine() x86_64 i686
platform.architecture() ('64bit', 'WindowsPE') ('32bit', 'WindowsPE')
Some remarks:
*
*there is also distutils.util.get_platform(), which is identical to sysconfig.get_platform()
*anaconda on windows is same as official python windows installer
*I don't have a Mac nor a true 32bit system and was not motivated to do it online
To compare with your system, simply run this script (and please append results here if missing :)
from __future__ import print_function
import os
import sys
import platform
import sysconfig
print("os.name ", os.name)
print("sys.platform ", sys.platform)
print("platform.system() ", platform.system())
print("sysconfig.get_platform() ", sysconfig.get_platform())
print("platform.machine() ", platform.machine())
print("platform.architecture() ", platform.architecture())
A: If you're not looking for the kernel version etc., but for the Linux distribution, you may want to use the following
in python2.6+
>>> import platform
>>> print platform.linux_distribution()
('CentOS Linux', '6.0', 'Final')
>>> print platform.linux_distribution()[0]
CentOS Linux
>>> print platform.linux_distribution()[1]
6.0
in python2.4
>>> import platform
>>> print platform.dist()
('centos', '6.0', 'Final')
>>> print platform.dist()[0]
centos
>>> print platform.dist()[1]
6.0
Obviously, this will work only if you are running this on linux. If you want to have more generic script across platforms, you can mix this with code samples given in other answers.
A: try this:
import os
os.uname()
and you can index into the result:
info = os.uname()
info[0]   # sysname, e.g. 'Linux'
info[1]   # nodename (the host name)
A: You can also use sys.platform if you already have imported sys and you don't want to import another module
>>> import sys
>>> sys.platform
'linux2'
A: If you want user readable data but still detailed, you can use platform.platform()
>>> import platform
>>> platform.platform()
'Linux-3.3.0-8.fc16.x86_64-x86_64-with-fedora-16-Verne'
Here are a few different calls you can make to identify where you are. Note that linux_distribution and dist are removed in recent Python versions.
import platform
import sys
def linux_distribution():
    try:
        return platform.linux_distribution()
    except:
        return "N/A"

def dist():
    try:
        return platform.dist()
    except:
        return "N/A"

print("""Python version: %s
dist: %s
linux_distribution: %s
system: %s
machine: %s
platform: %s
uname: %s
version: %s
mac_ver: %s
""" % (
    sys.version.split('\n'),
    str(dist()),
    linux_distribution(),
    platform.system(),
    platform.machine(),
    platform.platform(),
    platform.uname(),
    platform.version(),
    platform.mac_ver(),
))
The output of this script run on a few different systems (Linux, Windows, Solaris, MacOS) and architectures (x86, x64, Itanium, PowerPC, SPARC) is available here: https://github.com/hpcugent/easybuild/wiki/OS_flavor_name_version
Ubuntu 10.04 server, for example, gives:
Python version: ['2.6.5 (r265:79063, Oct 1 2012, 22:04:36) ', '[GCC 4.4.3]']
dist: ('Ubuntu', '10.04', 'lucid')
linux_distribution: ('Ubuntu', '10.04', 'lucid')
system: Linux
machine: x86_64
platform: Linux-2.6.32-32-server-x86_64-with-Ubuntu-10.04-lucid
uname: ('Linux', 'xxx', '2.6.32-32-server', '#62-Ubuntu SMP Wed Apr 20 22:07:43 UTC 2011', 'x86_64', '')
version: #62-Ubuntu SMP Wed Apr 20 22:07:43 UTC 2011
mac_ver: ('', ('', '', ''), '')
A: You can also use the platform module alone, without importing the os module, to get all of this information.
>>> import platform
>>> platform.os.name
'posix'
>>> platform.uname()
('Darwin', 'mainframe.local', '15.3.0', 'Darwin Kernel Version 15.3.0: Thu Dec 10 18:40:58 PST 2015; root:xnu-3248.30.4~1/RELEASE_X86_64', 'x86_64', 'i386')
A nice and tidy layout for reporting purposes can be achieved using this line:
for i in zip(['system','node','release','version','machine','processor'],platform.uname()):print i[0],':',i[1]
That gives this output:
system : Darwin
node : mainframe.local
release : 15.3.0
version : Darwin Kernel Version 15.3.0: Thu Dec 10 18:40:58 PST 2015; root:xnu-3248.30.4~1/RELEASE_X86_64
machine : x86_64
processor : i386
What is usually missing is the operating system version, but if you just need to know whether you are running Windows, Linux or Mac, a platform-independent way is to use this test:
In []: for i in [platform.linux_distribution(),platform.mac_ver(),platform.win32_ver()]:
   ....:     if i[0]:
   ....:         print 'Version: ', i[0]
A: In the same vein...
import platform

# Note: test with startswith("win") rather than find("win") > -1, because
# platform.system() returns "Darwin" on macOS, which also contains "win".
is_windows = platform.system().lower().startswith("win")
if is_windows: lv_dll = LV_dll("my_so_dll.dll")
else: lv_dll = LV_dll("./my_so_dll.so")
A: Check the available functions in the platform module and print the answers for your system:
import platform

print dir(platform)

for x in dir(platform):
    if x[0].isalnum():
        try:
            result = getattr(platform, x)()
            print "platform." + x + ": " + result
        except TypeError:
            continue
A: Dang -- Louis Brandy beat me to the punch, but that doesn't mean I can't provide you with the system results for Vista!
>>> import os
>>> os.name
'nt'
>>> import platform
>>> platform.system()
'Windows'
>>> platform.release()
'Vista'
...and I can’t believe no one’s posted one for Windows 10 yet:
>>> import os
>>> os.name
'nt'
>>> import platform
>>> platform.system()
'Windows'
>>> platform.release()
'10'
A: If you are running macOS and call platform.system(), you get 'Darwin',
because macOS is built on Apple's Darwin OS. Darwin is the kernel of macOS and is essentially macOS without the GUI.
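So a macOS check, together with the actual macOS version from platform.mac_ver(), can look like this small illustrative snippet:
import platform

if platform.system() == "Darwin":              # macOS reports the Darwin kernel name
    release, _, machine = platform.mac_ver()   # e.g. ('10.15.7', ('', '', ''), 'x86_64')
    print("macOS", release, "on", machine)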
A: This solution works for both python and jython.
module os_identify.py:
import platform
import os

# This module contains functions to determine the basic type of
# OS we are running on.
# Contrary to the functions in the `os` and `platform` modules,
# these allow to identify the actual basic OS,
# no matter whether running on the `python` or `jython` interpreter.

def is_linux():
    try:
        platform.linux_distribution()
        return True
    except:
        return False

def is_windows():
    try:
        platform.win32_ver()
        return True
    except:
        return False

def is_mac():
    try:
        platform.mac_ver()
        return True
    except:
        return False

def name():
    if is_linux():
        return "Linux"
    elif is_windows():
        return "Windows"
    elif is_mac():
        return "Mac"
    else:
        return "<unknown>"
Use like this:
import os_identify
print "My OS: " + os_identify.name()
A: How about a new answer:
import psutil
psutil.MACOS #True (OSX is deprecated)
psutil.WINDOWS #False
psutil.LINUX #False
This would be the output if I were running macOS.
A: For the record here's the results on Mac:
>>> import os
>>> os.name
'posix'
>>> import platform
>>> platform.system()
'Darwin'
>>> platform.release()
'8.11.1'
A: Use platform.system()
Returns the system/OS name, such as 'Linux', 'Darwin', 'Java', 'Windows'. An empty string is returned if the value cannot be determined.
import platform
system = platform.system().lower()
is_windows = system == 'windows'
is_linux = system == 'linux'
is_mac = system == 'darwin'
A: >>> import os
>>> os.name
'posix'
>>> import platform
>>> platform.system()
'Linux'
>>> platform.release()
'2.6.22-15-generic'
The output of platform.system() is as follows:
*
*Linux: Linux
*Mac: Darwin
*Windows: Windows
See: platform — Access to underlying platform’s identifying data
A: Sample code to differentiate operating systems using Python:
import os
import sys

if sys.platform.startswith("linux"):  # could be "linux", "linux2", "linux3", ...
    pass  # Linux
elif sys.platform == "darwin":
    pass  # Mac OS X
elif os.name == "nt":
    pass  # Windows, Cygwin, etc. (either 32-bit or 64-bit)
A: I am using the WLST tool that comes with weblogic, and it doesn't implement the platform package.
wls:/offline> import os
wls:/offline> print os.name
java
wls:/offline> import sys
wls:/offline> print sys.platform
'java1.5.0_11'
Apart from patching the system javaos.py (there is an issue with os.system() on Windows 2003 with JDK 1.5), which I can't do because I have to use WebLogic out of the box, this is what I use:
def iswindows():
    os = java.lang.System.getProperty( "os.name" )
    return "win" in os.lower()
A: Short Story
Use platform.system(). It returns Windows, Linux or Darwin (for OSX).
Long Story
There are 3 ways to get OS in Python, each with its own pro and cons:
Method 1
>>> import sys
>>> sys.platform
'win32' # could be 'linux', 'linux2', 'darwin', 'freebsd8' etc
How this works (source): Internally it calls OS APIs to get name of the OS as defined by OS. See here for various OS-specific values.
Pro: No magic, low level.
Con: OS version dependent, so best not to use directly.
Method 2
>>> import os
>>> os.name
'nt' # for Linux and Mac it prints 'posix'
How this works (source): Internally it checks if python has OS-specific modules called posix or nt.
Pro: Simple to check if posix OS
Con: no differentiation between Linux or OSX.
Method 3
>>> import platform
>>> platform.system()
'Windows' # for Linux it prints 'Linux', for Mac it prints 'Darwin'
How this works (source): Internally it will eventually call internal OS APIs, get OS version-specific name like 'win32' or 'win16' or 'linux1' and then normalize to more generic names like 'Windows' or 'Linux' or 'Darwin' by applying several heuristics.
Pro: Best portable way for Windows, OSX and Linux.
Con: Python folks must keep normalization heuristic up to date.
Summary
*
*If you want to check if OS is Windows or Linux or OSX then the most reliable way is platform.system().
*If you want to make OS-specific calls but via built-in Python modules posix or nt then use os.name.
*If you want to get raw OS name as supplied by OS itself then use sys.platform.
A: How about a simple Enum implementation like the following? No need for external libs!
import platform
from enum import Enum
class OS(Enum):
    def checkPlatform(osName):
        return osName.lower() == platform.system().lower()

    MAC = checkPlatform("darwin")
    LINUX = checkPlatform("linux")
    WINDOWS = checkPlatform("windows")  # I haven't tested this one
You can simply access it through the Enum value:
if OS.LINUX.value:
    print("Cool, it is Linux")
P.S. This is Python 3.
A: You can look at the code in pyOSinfo which is part of the pip-date package, to get the most relevant OS information, as seen from your Python distribution.
One of the most common reasons people want to check their OS is for terminal compatibility and if certain system commands are available. Unfortunately, the success of this checking is somewhat dependent on your python installation and OS. For example, uname is not available on most Windows python packages. The above python program will show you the output of the most commonly used built-in functions, already provided by os, sys, platform, site.
So the best way to get only the essential code is looking at that as an example. (I guess I could have just pasted it here, but that would not have been politically correct.)
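If the underlying goal is really to know whether a given system command is available, rather than the OS name itself, the standard library can answer that directly. A small sketch, assuming Python 3.3+ for shutil.which (the command names are just examples):
import shutil

# Probe for a few commands; 'uname' is typically missing on plain Windows.
for cmd in ("uname", "ls", "dir"):
    path = shutil.which(cmd)
    print(cmd, "->", path if path else "not found")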
A: I am late to the game but, just in case anybody needs it, this is a function I use to make adjustments in my code so it runs on Windows, Linux and MacOS:
import sys
def get_os(osoptions={'linux':'linux','Windows':'win','macos':'darwin'}):
    '''
    get OS to allow code specifics
    '''
    opsys = [k for k in osoptions.keys() if sys.platform.lower().find(osoptions[k].lower()) != -1]
    try:
        return opsys[0]
    except:
        return 'unknown_OS'
A: There are a lot of ways to find this; the easiest is to use the os package:
import os
print(os.name)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "894"
} |
Q: Triple Quotes? How do I delimit a databound Javascript string parameter in ASP.NET? How do I delimit a Javascript data-bound string parameter in an anchor OnClick event?
*
*I have an anchor tag in an ASP.NET Repeater control.
*The OnClick event of the anchor contains a call to a Javascript function.
*The Javascript function takes a string for its input parameter.
*The string parameter is populated with a data-bound value from the Repeater.
I need the "double quotes" for the Container.DataItem.
I need the 'single quotes' for the OnClick.
And I still need one more delimiter (triple quotes?) for the input string parameter of the Javascript function call.
Since I can't use 'single quotes' again, how do I ensure the Javascript function knows the input parameter is a string and not an integer?
Without the extra quotes around the input string parameter, the Javascript function thinks I'm passing in an integer.
The anchor:
<a id="aShowHide" onclick='ToggleDisplay(<%# DataBinder.Eval(Container.DataItem, "JobCode") %>);' >Show/Hide</a>
And here is my Javascript:
<script language="JavaScript" type="text/javascript">
/* Shows/Hides the Jobs Div */
function ToggleDisplay(jobCode)
{
/* Each div has its ID set dynamically ('d' plus the JobCode) */
var elem = document.getElementById('d' + jobCode);
if (elem)
{
if (elem.style.display != 'block')
{
elem.style.display = 'block';
elem.style.visibility = 'visible';
}
else
{
elem.style.display = 'none';
elem.style.visibility = 'hidden';
}
}
}
</script>
A: onclick='javascript:ToggleDisplay("<%# DataBinder.Eval(Container.DataItem, "JobCode")%> "); '
Use like above.
A: I recently had a similar problem, and the only way I could solve it was to use plain old HTML character codes for the single (') and double (") quotes.
The source code was a total mess of course, but it worked.
Try
<a id="aShowHide" onclick='ToggleDisplay("<%# DataBinder.Eval(Container.DataItem, "JobCode") %>");'>Show/Hide</a>
or
<a id="aShowHide" onclick='ToggleDisplay('<%# DataBinder.Eval(Container.DataItem, "JobCode") %>');'>Show/Hide</a>
A:
Without the extra quotes around the input string parameter, the Javascript function thinks I'm passing in an integer.
Can you do some rudimentary string function to force JavaScript into changing it into a string? Like
value = value + ""
A: Try putting the extra text inside the server-side script block and concatenating.
onclick='<%# "ToggleDisplay(""" & DataBinder.Eval(Container.DataItem, "JobCode") & """);" %>'
Edit: I'm pretty sure you could just use double quotes outside the script block as well.
A: Passing variables to the function without single or double quotes around the call's arguments:
<html>
<head>
</head>
<body>
<script language="javascript">
function hello(id, bu)
{
alert(id+ bu);
}
</script>
<a href ="javascript:
var x = "12";
var y = "fmo";
hello(x,y)">test</a>
</body>
</html>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
} |
Q: Import CSV file to strongly typed data structure in .Net What's the best way to import a CSV file into a strongly-typed data structure?
A: Brian gives a nice solution for converting it to a strongly typed collection.
Most of the CSV parsing methods given don't take into account escaping fields or some of the other subtleties of CSV files (like trimming fields). Here is the code I personally use. It's a bit rough around the edges and has pretty much no error reporting.
public static IList<IList<string>> Parse(string content)
{
    IList<IList<string>> records = new List<IList<string>>();
    StringReader stringReader = new StringReader(content);
    bool inQuotedString = false;
    IList<string> record = new List<string>();
    StringBuilder fieldBuilder = new StringBuilder();
    while (stringReader.Peek() != -1)
    {
        char readChar = (char)stringReader.Read();
        if (readChar == '\n' || (readChar == '\r' && stringReader.Peek() == '\n'))
        {
            // If it's a \r\n combo consume the \n part and throw it away.
            if (readChar == '\r')
            {
                stringReader.Read();
            }
            if (inQuotedString)
            {
                if (readChar == '\r')
                {
                    fieldBuilder.Append('\r');
                }
                fieldBuilder.Append('\n');
            }
            else
            {
                record.Add(fieldBuilder.ToString().TrimEnd());
                fieldBuilder = new StringBuilder();
                records.Add(record);
                record = new List<string>();
                inQuotedString = false;
            }
        }
        else if (fieldBuilder.Length == 0 && !inQuotedString)
        {
            if (char.IsWhiteSpace(readChar))
            {
                // Ignore leading whitespace
            }
            else if (readChar == '"')
            {
                inQuotedString = true;
            }
            else if (readChar == ',')
            {
                record.Add(fieldBuilder.ToString().TrimEnd());
                fieldBuilder = new StringBuilder();
            }
            else
            {
                fieldBuilder.Append(readChar);
            }
        }
        else if (readChar == ',')
        {
            if (inQuotedString)
            {
                fieldBuilder.Append(',');
            }
            else
            {
                record.Add(fieldBuilder.ToString().TrimEnd());
                fieldBuilder = new StringBuilder();
            }
        }
        else if (readChar == '"')
        {
            if (inQuotedString)
            {
                if (stringReader.Peek() == '"')
                {
                    stringReader.Read();
                    fieldBuilder.Append('"');
                }
                else
                {
                    inQuotedString = false;
                }
            }
            else
            {
                fieldBuilder.Append(readChar);
            }
        }
        else
        {
            fieldBuilder.Append(readChar);
        }
    }
    record.Add(fieldBuilder.ToString().TrimEnd());
    records.Add(record);
    return records;
}
Note that this doesn't handle the edge case of fields not being delimited by double quotes, but merely having a quoted string inside. See this post for a bit of a better explanation as well as some links to some proper libraries.
A: Microsoft's TextFieldParser is stable and follows RFC 4180 for CSV files. Don't be put off by the Microsoft.VisualBasic namespace; it's a standard component in the .NET Framework, just add a reference to the global Microsoft.VisualBasic assembly.
If you're compiling for Windows (as opposed to Mono) and don't anticipate having to parse "broken" (non-RFC-compliant) CSV files, then this would be the obvious choice, as it's free, unrestricted, stable, and actively supported, most of which cannot be said for FileHelpers.
See also: How to: Read From Comma-Delimited Text Files in Visual Basic for a VB code example.
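The linked article shows VB; a rough C# sketch of the same TextFieldParser usage looks like this (the file path and the field handling inside the loop are illustrative):
using Microsoft.VisualBasic.FileIO;   // reference the Microsoft.VisualBasic assembly

using (var parser = new TextFieldParser(@"C:\data\input.csv"))
{
    parser.TextFieldType = FieldType.Delimited;
    parser.SetDelimiters(",");
    parser.HasFieldsEnclosedInQuotes = true;

    while (!parser.EndOfData)
    {
        string[] fields = parser.ReadFields();   // one record, quoting already handled
        // map fields[0], fields[1], ... onto your strongly-typed object here
    }
}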
A: I was bored, so I modified some stuff I wrote. It tries to encapsulate the parsing in an OO manner while cutting down on the number of iterations through the file; it only iterates once at the top foreach.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            // usage:
            // note this wont run as getting streams is not Implemented
            // but will get you started
            CSVFileParser fileParser = new CSVFileParser();
            // TO Do: configure fileparser

            PersonParser personParser = new PersonParser(fileParser);
            List<Person> persons = new List<Person>();

            // if the file is large and there is a good way to limit
            // without having to reparse the whole file you can use a
            // linq query if you desire
            foreach (Person person in personParser.GetPersons())
            {
                persons.Add(person);
            }
            // now we have a list of Person objects
        }
    }

    public abstract class CSVParser
    {
        protected String[] delimiters = { "," };

        protected internal IEnumerable<String[]> GetRecords()
        {
            Stream stream = GetStream();
            StreamReader reader = new StreamReader(stream);

            String[] aRecord;
            while (!reader.EndOfStream)
            {
                aRecord = reader.ReadLine().Split(delimiters,
                    StringSplitOptions.None);

                yield return aRecord;
            }
        }

        protected abstract Stream GetStream();
    }

    public class CSVFileParser : CSVParser
    {
        // to do: add logic to get a stream from a file
        protected override Stream GetStream()
        {
            throw new NotImplementedException();
        }
    }

    public class CSVWebParser : CSVParser
    {
        // to do: add logic to get a stream from a web request
        protected override Stream GetStream()
        {
            throw new NotImplementedException();
        }
    }

    public class Person
    {
        public String Name { get; set; }
        public String Address { get; set; }
        public DateTime DOB { get; set; }
    }

    public class PersonParser
    {
        public PersonParser(CSVParser parser)
        {
            this.Parser = parser;
        }

        public CSVParser Parser { get; set; }

        public IEnumerable<Person> GetPersons()
        {
            foreach (String[] record in this.Parser.GetRecords())
            {
                yield return new Person()
                {
                    Name = record[0],
                    Address = record[1],
                    DOB = DateTime.Parse(record[2]),
                };
            }
        }
    }
}
A: There are two articles on CodeProject that provide code for a solution, one that uses StreamReader and one that imports CSV data using the Microsoft Text Driver.
A: Use an OleDB connection.
String sConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\\InputDirectory\\;Extended Properties='text;HDR=Yes;FMT=Delimited'";
OleDbConnection objConn = new OleDbConnection(sConnectionString);
objConn.Open();
DataTable dt = new DataTable();
OleDbCommand objCmdSelect = new OleDbCommand("SELECT * FROM file.csv", objConn);
OleDbDataAdapter objAdapter1 = new OleDbDataAdapter();
objAdapter1.SelectCommand = objCmdSelect;
objAdapter1.Fill(dt);
objConn.Close();
A: A good simple way to do it is to open the file, and read each line into an array, linked list, data-structure-of-your-choice. Be careful about handling the first line though.
This may be over your head, but there seems to be a direct way to access them as well using a connection string.
Why not try using Python instead of C# or VB? It has a nice CSV module to import that does all the heavy lifting for you.
A: If you're expecting fairly complex scenarios for CSV parsing, don't even think of rolling your own parser. There are a lot of excellent tools out there, like FileHelpers, or even ones from CodeProject.
The point is this is a fairly common problem and you could bet that a lot of software developers have already thought about and solved this problem.
A: I agree with @NotMyself. FileHelpers is well tested and handles all kinds of edge cases that you'll eventually have to deal with if you do it yourself. Take a look at what FileHelpers does and only write your own if you're absolutely sure that either (1) you will never need to handle the edge cases FileHelpers does, or (2) you love writing this kind of stuff and are going to be overjoyed when you have to parse stuff like this:
1,"Bill","Smith","Supervisor", "No Comment"
2 , 'Drake,' , 'O'Malley',"Janitor,
Oops, I'm not quoted and I'm on a new line!
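For reference, the FileHelpers approach looks roughly like this. This is an illustrative sketch, not code from the answer: Person and people.csv are made-up names, and the exact attributes available depend on the FileHelpers version.
using FileHelpers;

// Illustrative record type for a CSV with three columns.
[DelimitedRecord(",")]
public class Person
{
    public string Name;
    [FieldQuoted('"', QuoteMode.OptionalForBoth)]
    public string Role;
    public int Age;
}

// Usage:
//   var engine = new FileHelperEngine<Person>();
//   Person[] people = engine.ReadFile("people.csv");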
A: I had to use a CSV parser in .NET for a project this summer and settled on the Microsoft Jet Text Driver. You specify a folder using a connection string, then query a file using a SQL Select statement. You can specify strong types using a schema.ini file. I didn't do this at first, but then I was getting bad results where the type of the data wasn't immediately apparent, such as IP numbers or an entry like "XYQ 3.9 SP1".
One limitation I ran into is that it cannot handle column names above 64 characters; it truncates. This shouldn't be a problem, except I was dealing with very poorly designed input data. It returns an ADO.NET DataSet.
This was the best solution I found. I would be wary of rolling my own CSV parser, since I would probably miss some of the end cases, and I didn't find any other free CSV parsing packages for .NET out there.
EDIT: Also, there can only be one schema.ini file per directory, so I dynamically appended to it to strongly type the needed columns. It will only strongly-type the columns specified, and infer for any unspecified field. I really appreciated this, as I was dealing with importing a fluid 70+ column CSV and didn't want to specify each column, only the misbehaving ones.
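For reference, a schema.ini for the Text Driver looks roughly like this (illustrative file and column names; the file lives in the same folder as the CSV):
[file.csv]
ColNameHeader=True
Format=CSVDelimited
MaxScanRows=0
Col1=CustomerName Text
Col2=IPAddress Text
Col3=OrderCount Long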
A: I typed in some code. The result in the datagridviewer looked good. It parses a single line of text to an arraylist of objects.
enum quotestatus
{
    none,
    firstquote,
    secondquote
}

public static System.Collections.ArrayList Parse(string line, string delimiter)
{
    System.Collections.ArrayList ar = new System.Collections.ArrayList();
    StringBuilder field = new StringBuilder();
    quotestatus status = quotestatus.none;
    foreach (char ch in line.ToCharArray())
    {
        string chOmsch = "char";
        if (ch == Convert.ToChar(delimiter))
        {
            if (status == quotestatus.firstquote)
            {
                chOmsch = "char";
            }
            else
            {
                chOmsch = "delimiter";
            }
        }
        if (ch == Convert.ToChar(34))
        {
            chOmsch = "quotes";
            if (status == quotestatus.firstquote)
            {
                status = quotestatus.secondquote;
            }
            if (status == quotestatus.none)
            {
                status = quotestatus.firstquote;
            }
        }
        switch (chOmsch)
        {
            case "char":
                field.Append(ch);
                break;
            case "delimiter":
                ar.Add(field.ToString());
                field.Clear();
                break;
            case "quotes":
                if (status == quotestatus.firstquote)
                {
                    field.Clear();
                }
                if (status == quotestatus.secondquote)
                {
                    status = quotestatus.none;
                }
                break;
        }
    }
    if (field.Length != 0)
    {
        ar.Add(field.ToString());
    }
    return ar;
}
A: If you can guarantee that there are no commas in the data, then the simplest way would probably be to use String.Split.
For example:
String[] values = myString.Split(',');
myObject.StringField = values[0];
myObject.IntField = Int32.Parse(values[1]);
There may be libraries you could use to help, but that's probably as simple as you can get. Just make sure you can't have commas in the data, otherwise you will need to parse it better.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "108"
} |
Q: How to map a latitude/longitude to a distorted map? I have a bunch of latitude/longitude pairs that map to known x/y coordinates on a (geographically distorted) map.
Then I have one more latitude/longitude pair. I want to plot it on the map as best is possible. How do I go about doing this?
At first I decided to create a system of linear equations for the three nearest lat/long points and compute a transformation from these, but this doesn't work well at all. Since that's a linear system, I can't use more nearby points either.
You can't assume North is up: all you have is the existing lat/long->x/y mappings.
EDIT: it's not a Mercator projection, or anything like that. It's arbitrarily distorted for readability (think subway map). I want to use only the nearest 5 to 10 mappings so that distortion on other parts of the map doesn't affect the mapping I'm trying to compute.
Further, the entire map is in a very small geographical area so there's no need to worry about the globe--flat-earth assumptions are good enough.
A: Are there any more specific details on the kind of distortion? If, for example, your latitudes and longitudes are "distorted" onto your 2D map using a Mercator projection, the conversion math is readily available.
If the map is distorted truly arbitrarily, there are lots of things you could try, but the simplest would probably be to compute a weighted average from your existing point mappings. Your weights could be the squared inverse of the x/y distance from your new point to each of your existing points.
Some pseudocode (written here as runnable Python):
def estimate_latitude_longitude(x, y, points):
    # points: the existing mappings, each with .x, .y, .latitude, .longitude
    numerator_latitude = 0.0
    numerator_longitude = 0.0
    denominator = 0.0
    for point in points:
        delta_x = x - point.x
        delta_y = y - point.y
        dist_sq = delta_x * delta_x + delta_y * delta_y
        weight = 1.0 / dist_sq
        numerator_latitude += weight * point.latitude
        numerator_longitude += weight * point.longitude
        denominator += weight
    return (numerator_latitude / denominator, numerator_longitude / denominator)
This code will give a relatively simple approximation. If you can be more precise about the way the projection distorts the geographical coordinates, you can probably do much better.
A: Alright. From a theoretical point of view, given that the distortion is "arbitrary", and any solution requires you to model this arbitrary distortion, you obviously can't get an "answer". However, any solution is going to involve imposing (usually implicitly) some model of the distortion that may or may not reflect the reality of the situation.
Since you seem to be most interested in models that presume some sort of local continuity of the distortion mapping, the most obvious choice is the one you've already tried: linear interpolation between the nearest points. Going beyond that is going to require more sophisticated mathematical and numerical analysis knowledge.
You are incorrect, however, in presuming you cannot expand this to more points. You can by using a least-squared error approach. Find the linear answer that minimizes the error of the other points. This is probably the most straight-forward extension. In other words, take the 5 nearest points and try to come up with a linear approximation that minimizes the error of those points. And use that. I would try this next.
If that doesn't work, then the assumption of linearity over the area of N points is broken. At that point you'll need to upgrade to either a quadratic or cubic model. The math is going to get hectic at that point.
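To illustrate the least-squares idea (this is my sketch, not the poster's code): with numpy you can fit an affine map from (lat, lon) to (x, y) through the N nearest known points and then apply it to the new point. The coordinates below are made up.
import numpy as np

def fit_affine(latlon_pts, xy_pts):
    """Least-squares affine map (lat, lon) -> (x, y) from matched point lists."""
    A = np.array([[lat, lon, 1.0] for lat, lon in latlon_pts])  # design matrix, shape (N, 3)
    X = np.array(xy_pts, dtype=float)                           # targets, shape (N, 2)
    coeffs, _, _, _ = np.linalg.lstsq(A, X, rcond=None)         # shape (3, 2)
    return coeffs

def apply_affine(coeffs, lat, lon):
    x, y = np.array([lat, lon, 1.0]) @ coeffs
    return x, y

# Hypothetical data: the 5 nearest known mappings, then the new point to place.
known_latlon = [(51.50, -0.12), (51.51, -0.10), (51.49, -0.13), (51.52, -0.11), (51.48, -0.09)]
known_xy = [(120, 340), (160, 310), (110, 365), (180, 290), (140, 390)]
coeffs = fit_affine(known_latlon, known_xy)
print(apply_affine(coeffs, 51.505, -0.115))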
A: The problem is that the sphere can be distorted in a number of ways, and having all those points known on the equator, let's say, won't help you map points further away.
You need better 'close' points; then you can assume these three points are on a plane with the fourth and do the interpolation, knowing that the distance between longitudes is a function, not a constant.
A: Ummm. Maybe I am missing something about the question here, but if you have long/lat info, you also have the direction of north?
It seems you need to map geodesic coordinates to a projected coordinate system, for example OSGB to WGS84.
The maths involved is non-trivial, but the code comes out at only a few lines. If I had more time I'd post more, but I need a shower, so I will be boring and link to the wikipedia entry, which is pretty good.
Note: Post shower edited.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: How to RedirectToAction in ASP.NET MVC without losing request data Using ASP.NET MVC there are situations (such as form submission) that may require a RedirectToAction.
One such situation is when you encounter validation errors after a form submission and need to redirect back to the form, but would like the URL to reflect the URL of the form, not the action page it submits to.
As I require the form to contain the originally POSTed data, for user convenience, as well as validation purposes, how can I pass the data through the RedirectToAction()? If I use the viewData parameter, my POST parameters will be changed to GET parameters.
A: The solution is to use the TempData property to store the desired Request components.
For instance:
public ActionResult Send()
{
    TempData["form"] = Request.Form;
    return this.RedirectToAction(a => a.Form());
}
Then in your "Form" action you can go:
public ActionResult Form()
{
    /* Declare viewData etc. */
    if (TempData["form"] != null)
    {
        /* Cast TempData["form"] to
           System.Collections.Specialized.NameValueCollection
           and use it */
    }
    return View("Form", viewData);
}
A: There is another way which avoids tempdata. The pattern I like involves creating 1 action for both the original render and re-render of the invalid form. It goes something like this:
var form = new FooForm();

if (request.UrlReferrer == request.Url)
{
    // Fill form with previous request's data
}

if (Request.IsPost())
{
    if (!form.IsValid)
    {
        ViewData["ValidationErrors"] = ...
    }
    else
    {
        // update model
        model.something = foo.something;
        // handoff to post update action
        return RedirectToAction("ModelUpdated", ... etc);
    }
}

// By default render 1 view until form is a valid post
ViewData["Form"] = form;
return View();
That's the pattern, more or less, in slightly pseudo form. With this you can create one view that handles rendering the form, re-displaying the values (since the form will be filled with the previous values), and showing error messages.
When posting to this action, if the form is valid, it transfers control over to another action.
I'm trying to make this pattern easy in the .net validation framework as we build out support for MVC.
A: Keep in mind that TempData stores the form collection in session. If you don't like that behavior, you can implement the new ITempDataProvider interface and use some other mechanism for storing temp data. I wouldn't do that unless you know for a fact (via measurement and profiling) that the use of Session state is hurting you.
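For illustration only, a rough sketch of what a custom provider can look like; SimpleTempDataProvider is a hypothetical name, and a real implementation would need proper per-user storage (cookie, cache, database) rather than a single static dictionary:
using System.Collections.Generic;
using System.Web.Mvc;

public class SimpleTempDataProvider : ITempDataProvider
{
    private static readonly Dictionary<string, object> _store = new Dictionary<string, object>();

    public IDictionary<string, object> LoadTempData(ControllerContext controllerContext)
    {
        var data = new Dictionary<string, object>(_store);
        _store.Clear();   // TempData is meant to survive exactly one redirect
        return data;
    }

    public void SaveTempData(ControllerContext controllerContext, IDictionary<string, object> values)
    {
        _store.Clear();
        if (values == null) return;
        foreach (var pair in values)
            _store[pair.Key] = pair.Value;
    }
}

// In a controller, switch providers by setting the TempDataProvider property,
// for example in the constructor:
//     TempDataProvider = new SimpleTempDataProvider();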
A: If you want to pass data to the redirected action, the method you could use is:
return RedirectToAction("ModelUpdated", new {id = 1});
// The definition of the action method like public ActionResult ModelUpdated(int id);
A: Take a look at MVCContrib, you can do this:
using MvcContrib.Filters;
[ModelStateToTempData]
public class MyController : Controller {
//
...
}
A: TempData is the solution which keeps the data from action to action.
Employee employee = new Employee
{
    EmpID = "121",
    EmpFirstName = "Imran",
    EmpLastName = "Ghani"
};
TempData["Employee"] = employee;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "123"
} |