Ask HN: Help me see why everyone seems to love TDD?
You don't sell unit tests to management. You don't even *tell* management. The same way you don't sell using reasonable variable names. You do it because (if!) it makes you and your team more productive. Coverage metrics are internal to the development team - they are not management metrics. Shipped features, open support tickets, velocity, story points, etc. are management metrics; cyclomatic complexity, test coverage, and length of variable names are developer metrics.

That said, there is a fairly large difference in the experience and skill required to learn how to write variable names that make you more productive and how to write unit tests that make you more productive.

It all comes down to the TCO of the system to its users. In many cases, high test coverage does reduce TCO, but this is difficult to see during the first year of the project. And note too that the risk of *increasing* TCO due to poorly written unit test code is real. You can do more harm than good by creating flaky ad-hoc unit tests that are slow to run, difficult to change, and make refactoring of production code a PITA.

The reason we do TDD is that it has some unique benefits, especially if you aim to use "full TDD" in the sense that *no line of production code is written before there is a failing test for that line*.

What would your life be like if, every time you interrupted a programmer on your team, everything she was working on compiled and passed all its tests less than two minutes ago? And if you had an always up-to-date spec so precisely written that it executes? And deployment to production were a business decision instead of a technical one, because you'd be ready to deploy practically whenever you feel like it? And if you decided to swap MSSQL for PostgreSQL in your production system, you could do it without breaking a sweat?

What would your life be like if you did not have to fear breaking anything when cleaning up code? If you were able to keep your system maintainable and you knew it?

If TDD is so great, why, then, is it not used more often? I can only speculate, but I think this comes back to the question of developer skills. I have mentioned some of this before here on HN, but I'll reiterate.

I have noticed that I have to make large refactorings to move things around and arrange the whole system so that each part can be tested without too much effort. To do this I have to view most things in terms of the interfaces they provide. On the test side, I have to write the test code so that *what the test does* is strictly separated from *how the test does it*. This way, changing the system causes only minor changes to ripple into the majority of the test code. This is by far the most common pain point of a novice TDD'er.

Based on this, it seems that programming with TDD is a distinct skill set that requires significant effort to get reasonably good at, i.e., to be more productive than without TDD.
I have given it a try on medium-sized projects and it does pay off in terms of simplicity of the design (I have to manage dependencies and decouple external systems and components quite heavily), low defect rates in production/QA, way less time spent in the debugger, and relatively high velocity (this is notoriously hard to measure across projects/teams, but at least it was true based on customer and product owner feedback).

However, the problem with TDD is that all of the above (tests decoupled from the interface, interface decoupled from the implementation, system decoupled from external systems, components decoupled from each other, the design skills to recognize this, and the refactoring skills to do this fast enough to remain productive) need to be done well enough at the same time. Otherwise the approach falls to pieces at some point.

To paraphrase Uncle Bob from some years ago: "I have only been doing TDD for five years, so I am fairly new to it..." Half of all programmers have that much experience in programming in general, so the amount of time required to hone TDD and refactoring skills may not be there yet.

TDD'ing requires months or years of practice to get really productive with, and has a fairly large set of prerequisites that one has to know in order to remain sane. It took me several years of experimenting (especially with different techniques of writing unit tests) before I found a way to be productive with TDD. I also drew the connection between testability and program architecture (aggressive decoupling) fairly recently (some four years ago), and that was one of the last pieces of the puzzle that made everything work. The system structure is especially crucial for writing fast unit tests. You really want the dependencies on external systems (DB, UI, network, MQ, OS, library, framework, or the like) injected and abstracted behind an interface.

If your system's design results in your unit tests depending on volatile components, your system becomes unnecessarily complex. This is because volatile components change often, and these changes ripple into your unit tests, effectively rendering them volatile too. Avoiding this problem has been captured, among others, by the Stable Dependencies Principle (http://c2.com/cgi/wiki?StableDependenciesPrinciple), which states that dependencies should point in the direction of stability. A related one is the Stable Abstractions Principle (http://c2.com/cgi/wiki?StableAbstractionsPrinciple), which states that components should be as abstract as they are stable.

When I started with TDD, my productivity plummeted initially, but the benefits were too good, and I slowly found the techniques needed to keep up with my old self in terms of features produced. I dread to think of the pieces of code that sent me deep down into debugging sessions due to non-existent test coverage.

Part of the problem is that there are not that many TDD codebases or TDD'ers around. Also, this is probably not something you can pick up while doing toy projects or school assignments.
The benefits start to show at the 100 kloc and above magnitudes, and as there are so many ways to paint yourself into a corner with bad overall design, coupling, and unmaintainable (or, my pet peeve, slow) tests, chances are you won't figure out all the necessary things yourself. On top of that, there is no time to learn this much in most dev jobs, so you are left to learn with hobby projects (which do not usually grow big enough).

Most TDD experiments result in failure for the reasons listed. This is why you read so many comments about TDD being useless, wrong, a religion, or an Uncle Bob cult. However, it seems that people who keep practicing TDD have been programming for more years than people who have not tried it, or who have abandoned the practice. I have yet to meet a TDD practitioner who started programming that way and has not considered any alternatives. The ideas were born out of really bad - serious - experiences with existing approaches.
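To make the decoupling described above concrete, here is a minimal sketch in C++ (the names are hypothetical, not from any particular codebase): the code under test depends only on a small interface, so the test can inject an in-memory fake and never touch a real database, network, or framework. This is what keeps the test fast and keeps changes in volatile components from rippling into it.

```cpp
// Minimal sketch (hypothetical names) of injecting an external dependency
// behind an interface so the unit test never touches a real database.
#include <cassert>
#include <string>
#include <unordered_map>

// Stable abstraction that the production code depends on.
struct UserStore {
    virtual ~UserStore() = default;
    virtual std::string emailFor(int userId) const = 0;
};

// Production code under test: it only knows about the interface.
class WelcomeMailer {
public:
    explicit WelcomeMailer(const UserStore& store) : store_(store) {}
    std::string composeGreeting(int userId) const {
        return "Welcome, " + store_.emailFor(userId) + "!";
    }
private:
    const UserStore& store_;
};

// Test double: an in-memory fake standing in for the volatile DB component.
struct FakeUserStore : UserStore {
    std::unordered_map<int, std::string> rows;
    std::string emailFor(int userId) const override { return rows.at(userId); }
};

int main() {
    // "What the test does": given a known user, a greeting is composed.
    FakeUserStore fake;
    fake.rows[42] = "ada@example.org";
    WelcomeMailer mailer(fake);
    assert(mailer.composeGreeting(42) == "Welcome, ada@example.org!");
    return 0;  // Runs in milliseconds, with no DB, network, or framework involved.
}
```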
Ask HN: What are your favorite board games?
In my group of friends we have more than 50 games, most of them involving complex strategy (as we do like that). Here is a list of my personal top 10:

1) Shogun. Archetypical Risk-like game of moving soldiers and conquering provinces, but with a unique twist that makes it outstanding in my view: instead of dice, it uses a cube tower to generate randomness. The outcome of fights is based on the number of cubes from each player that come out of the tower. If you get bad luck in a battle (because your cubes stay inside), then the tower will be loaded in your favor for the next battles (those hidden cubes can come out at any moment). I love this because, although I think some randomness is good in strategic battle games to spice things up and to keep the game from turning into a chess-like prediction game, I don't like the winner being decided by luck. The tower introduces randomness, but guarantees that no one will be too lucky or unlucky, which is great. Combine that with a setting in feudal Japan, complete with rice farming and a starving populace revolting against you, and you get an amazing game.

2) Imperial 2030. Another typical Risk-like game of moving soldiers and taking countries... except that it's not. You don't control the empires themselves; instead you are a banker that buys each empire's bonds. At a given point in the game, the banker holding the most bonds for a particular empire is the one controlling its politics. So maybe right now I control China, but I know that you have a lot of cash and are looking at Chinese bonds with greedy eyes, so I send the Chinese army on an unnecessarily painful military campaign to wither down its power in case you are going to control it in the next turn. This makes for awesome mechanics in a really strategic game. By the way, it doesn't have any random elements at all, so it's a good game if you are against that.

3) Galaxy Trucker. This game is great due to its sheer concept... first you use pieces from a scrapyard (competing for the pieces with the other players) to build a spaceship with its cannons, shields, cargo holds, etc., and then all of you have to fly them on a journey littered with space pirates, meteorites that can tear off pieces of your ship, merchant planets, smugglers, and more. The feeling when one of your rival ships is torn in two by a meteorite is unbeatable.

4) Star Wars: Imperial Assault. When a friend of mine got this game, I thought "they have the Star Wars franchise so it will probably be a crappy game - they will sell anyway". But no. It's actually a very good tactics game with lots of choices, characters with very different styles, special abilities, and a set of rules that (albeit underspecified at times) go very well together.

5) Robo Rally. A classic from Richard Garfield, the guy that brought you MtG. OK, maybe this doesn't fit that much into "complex strategy", but it's also a game that hackers should like because it's about programming, after all! You have to program your robot with randomly-dealt cards to try and survive pits, traps, and the other robots' lasers. A huge strong point of this game is that the maps and missions are hugely customizable, supporting different sets of rules like races, capture the flag, deathmatch, and others that you can come up with.
It supports up to 8 players, you can build different maps by putting map boards together, and there are editors online to print your own map boards, so it's the ultimate customizable game. I think it's out of print, but a new edition has been announced, although it only supports 6 players, sadly.

6) Carcassonne. One of the best-known modern board games, together with Catan. But while Catan is IMHO too shallow and too random, featuring few meaningful decisions, in Carcassonne every tile you place is a meaningful decision. The experience is very different in 2-player games (much more offensive) than in games with more players. Some expansions (the builder, the granary and pig, the mayor, the resources, etc.) really enhance the game, although others are dispensable.

7) Discworld: Ankh-Morpork. A deception game: you have to work towards your goal while the other characters don't know what it is. You don't have to know Discworld to like it (one of my friends hasn't read any of the books and loves it). Drawback: it's unbalanced; it's easier to win with some characters than others. If you care much about that, it's probably not your game.

8) Goblins Inc. Similar mechanic to Galaxy Trucker (probably inspired by it), but with goblins that build robots of doom instead of spaceships, and with direct combat. Contrary to Galaxy Trucker, it's team-based (2v2), but it also gives you the possibility of being a traitor to your partner. Less flexible than Galaxy Trucker (this one only really works with 4 players) but loads of fun!

9) Power Grid. A classic game where you have to build a power network. Lots of strategy and decisions, although the beginning depends too much on player location and the endgame turns a bit too much into an arithmetic-fest counting to the last nickel, IMO.

10) Escape from the Aliens in Outer Space. A quite original board game in that it doesn't have a board; the board is in each player's head. Some players are humans and others are aliens, but they can't see where each other is, except through certain clues (people making noise) and items. The humans must escape the aliens. It's a lot of fun and it involves both abstract thought and psychology/bluffing/etc. The drawback is that some maps and situations can be unbalanced, especially if you play with the stock rules (a door to exit the ship can randomly work or not), so a player can lose very unfairly. It should be pretty easy to customize the rules, though.

Also, go is awesome, but I don't think it's the kind of game you were looking for advice about (and it's difficult to compare to the others as it's in an entirely different category).
Product Development Cycle Fundamentals
I liked several recommendations from this article, but it seems to be missing some important practices that influence success/failure in many product dev efforts as well. What I especially liked from the article:

* The Product Lead seems a good reco, and is similar to a Product Owner in other parlance.

* Engendering buy-in by letting everyone suggest ideas and feel heard is definitely a great technique for org management in product dev. This works doubly well if everyone has confidence that product decisions are made in a sensible, clear, fair/unbiased way after everyone's ideas are out there.

* Clear measurements of success are hugely helpful as well.

Things that seem to be missing:

* What is the *purpose* of the product? What is the true north / guiding light problem you're solving? This sounds squishy, and it's easy to say something ambiguous and high-level like "we're gonna create a social video app!" or "be Instagram for video". But this should sound more like a problem to be solved - a "why" more than a "what". I.e., "We haven't found a social video product we love yet, and we also think it's a problem that social media is always persistent and not private or safe enough." Or "We love our phone cameras and we want to make and share goofy videos. But we don't wanna post such random stuff to FB, Twitter, or Instagram where it'd clog the feed and live forever." Or "We wanna be able to make and share goofy videos without having them haunt us forever." Or even "There is no perfect dick pic app yet, and just texting lots of dick pics really sucks." These are shitty, off-the-cuff examples, but going through the process of clearly articulating this can really help you congeal focus and serve as a guiding light as you develop. You can expect to change this guiding light over time if you learn that, in actuality, not many users see the same problem or feel the same pain you do... but that is also very good to know clearly as early as possible! Stating your problem / what you're chasing down in clear terms helps you figure out if there is anything there worth solving sooner rather than later, and that is vital in the early days of product dev. I wonder why the beginning of this article talks about tactics like product dev cycle length rather than this higher-level purpose.

* Where does analysis of the market/strategic landscape fit in? This is another crucial element, and it can help inform your "plan of attack" in terms of what to prioritize day to day or week to week. I think that SocialCam may have done better if they'd taken this more strategic approach to the landscape, on mobile especially. For example, they may not have chosen to rely so heavily on Facebook early on. Or they may have decided to explicitly target younger users, realizing that FB, Twitter, and even Instagram left a lot of room there.

* Clear measures of success are discussed, but how do those relate to core product KPIs? In particular, it is vital to measure retention cleanly and effectively, and to measure engagement and viral/k-factor/word-of-mouth installs as best as possible. Zealously improving these metrics every week is critical in the early days, and improving these metrics should be a primary activity in feature experimentation (see below).
Moreover, you should be looking for step-change improvements early on, not little incremental gains, and you should keep hunting until you find step-changes. You're waiting for some feature or use flow in your product to catch wind and drive a cycle of engagement, more frequent revisiting, and word-of-mouth/viral recommendation. Gotta measure these, and these are really the only "measures of success" early on.

* Where are structured feature experiments? Especially when you're hunting product/market fit (as SocialCam was early on), it's essential that you have theses on what will "catch fire" with users and prepare the best experiments you can to test them. Here again, SocialCam may have iterated more quickly toward something like Snapchat if they'd had theses or testable ideas. In early product dev cycles, while hunting true product/market fit and strong engagement + retention + virality, fully ~80% of product dev resources can be allocated to feature experiments... and pretty much all features should be treated *as* experimental until fit is found. For a product like this (a game or a social product) the constant, daily refrain should be "Is it fun yet?" "Is it really, truly *fun* yet?" "Do you just enjoy screwing around with it?" "Do you feel compelled to use it when you go too long without it?" For a product like this, keep testing out functionality until it's *fun*. Make that the singular, maniacal focus early on until it *is* fun and you've caught fire with at least some demo/psychographic.

* Where is a frequent customer feedback process? The article mentions "trying" to do *monthly* in-person user feedback sessions... but for free consumer-facing apps, especially in their very early/conceptual phases, it's much better to pull in users every few days, if not every single day. That user feedback is your lifeblood early on, and a feature isn't worth fully testing and polishing if it doesn't seem like it's gonna catch fire or move you closer to fun.

* For an early product dev setup, I think continuous integration and daily or semi-daily functional builds you can test with customers/users are important (partially so that you can get rapid, regular feedback from users). The article says they iterated "extremely quickly", which I'm sure is true relative to their previous process... but a 2-week fixed cycle in the early days of a product's dev (especially a consumer-facing product vs. a b2b product) and exploration of p/m fit is very, very slow. That's only 26 turns at bat per year, which is too slow when it's early on. More importantly/starkly, it's only 2 times at bat per month early on... that really makes it hard to truly rapidly iterate.

* Curious whether the author ever tried dual-track development. There are the rapid-fire, ideally daily builds and experiments going on on one track (the discovery track, in this case of early consumer-facing product dev), and there can be a longer-cycle track for things like underlying infrastructure improvement, UX improvements, bug fixes, and other incremental improvements. (This is different than the way dual-track pd would be applied in other contexts... this is a way to dual-track in the pupal stage of a consumer-facing social app.)
Splitting effort in this way can be extremely helpful, especially after you have some initial traction.

In any case, this is a useful article and it contains some good advice. I think it could be supplemented with some additional practices that help a lot in this sort of context as well.
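To make the viral/retention metrics mentioned above concrete, here is a minimal sketch with entirely hypothetical numbers; the formulas are just the standard ones (k-factor = invites per user × invite-to-install conversion; cohort retention = fraction of a signup cohort still active N days later), not anything specific to the article.

```cpp
// Minimal sketch of two early-stage product KPIs discussed above.
// All numbers are hypothetical; the formulas are the usual ones.
#include <cstdio>
#include <vector>

int main() {
    // k-factor: average invites sent per user times invite->install conversion.
    double invites_per_user = 3.2;    // hypothetical
    double invite_conversion = 0.18;  // hypothetical
    double k_factor = invites_per_user * invite_conversion;
    std::printf("k-factor: %.2f (needs > 1.0 for self-sustaining viral growth)\n", k_factor);

    // Simple cohort retention: fraction of a signup cohort still active N days later.
    int cohort_size = 1000;                          // users who signed up on day 0
    std::vector<int> active = {1000, 420, 310, 260}; // active on day 0, 1, 7, 30 (hypothetical)
    const char* labels[] = {"day 0", "day 1", "day 7", "day 30"};
    for (size_t i = 0; i < active.size(); ++i)
        std::printf("retention %s: %.1f%%\n", labels[i], 100.0 * active[i] / cohort_size);
    return 0;
}
```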
Minds Turned to Ash: Burnout is more than working too hard
*To anyone suffering from ACTUAL long-term burnout* - in the sense that a "mind turned to ash" doesn't sound like hyperbole *at all* - and trying to recover:

Do NOT bother with this article. Save yourself the emotional drain (maybe go outside for a bit ;) ). It contains NO actionable advice, just a few anonymised stories and some academic and pop-psych ideas. Also conspicuously absent is any mention of these patients' recovery or future.

Maybe this article does serve as a nice warning for those heading towards burnout. I wrote the above to help others already suffering. As a warning it's way too long for the crowd that is too busy working too many hours. And trust me, you can get it from working as little as 16h/wk! If that sounds like you - best of luck.

Yes, the stories are very recognizable. Especially when he contrasted the first story with the second, because the first one hit most of the main factors and then got convoluted with a whole bunch of other psychodevelopmental factors that the second story demonstrated have nothing to do with burnout.

The article could have been about a third of its length, and I'm a bit bummed about that. Normally I wouldn't complain, but this guy is a psychiatrist and I truly wonder what he was trying to achieve with this article; if it were anything besides self-promotion, he would have written a disclaimer like the one above as a summary to help sufferers.

Myself: burnt out in 2007. Never got better. I don't actually want to read anonymized burnout stories. It's an emotional drain with no reward (I'll page through a few in this thread, because this burnt-out husk of my former self is compassionate). I read it on the off-chance that maybe it'd contain some info or idea that I haven't tried in the past 9 years. So, that was my mental energy for the day.

I don't have answers either, but maybe some, ehrm, burnout pro-tips. The web is full of stories to serve as warnings before burnout, but there's hardly anything for after, especially for sufferers where a few months' rest just does not cut it. So here are a few things; maybe they'll help someone:

- Meditation is *really* nice. Maybe even more important was finding a group to do it with[0]; it adds a dimension, mine are the kindest people I know in the world, and it helps motivate me to do it daily. For mindfulness meditation, I found I need at least 2-3 sessions/week for the exercise to carry over outside the actual moments of practising it. Apart from sitting, occasionally try a walking meditation (look it up); just like with physical exercise, changing up the practice helps if/when you get stuck. And don't forget there are more kinds besides mindfulness. One that is quite orthogonal to mindfulness is compassion or "metta" meditation[1].

- Physical exercise is also really nice. I like running. I only found out I'm actually good at sports during the first half year of acute symptoms, when I literally couldn't use a computer for more than 10 minutes. I used to do strength exercises too (various pullups mainly); I liked the results, but the exercises carried that scary burning feel with them. Maybe you're different - try it, because I hear that stuff is really healthy :) Figure out what's nice for you.

- Something creative that gets you (even just sometimes) into flow. I like drawing and singing. If you play a live instrument, that's probably good too.
Unfortunately for me, computer programming doesn't quite fit this category any more. Not even democoding :'-( I still do it anyway, because I really, really want to, and my creativity wants to get out, and some things you can only express in code.

- Keeping a positive attitude is probably the hardest, but it's also nice. I'm not sure if the start of this post is a good example, but it came from the heart and I hope the tips make up for it :)

These are nice things to do when you can't do anything else.

I also do volunteer work, teaching kids my otherwise-wasted years of studying computer science, but I don't know if that is for everyone. Other people I know really enjoy working with elderly people, providing company and hearing their stories, etc.; they love it but don't understand why I like working with kids, and vice versa. To each their own :)

[0] Try http://wkup.org or http://reddit.com/r/meditation (the sidebar) to find a meditation group in your area (sometimes called a "sangha").

[1] If you've always practiced both, you might think "what, no, they're two sides of the same coin", and you'd be right - for *you*. Because if you've only ever practiced mindfulness, you're in for a treat! You're one of today's lucky 10,000 ;-) (https://xkcd.com/1053/)

[2] I like to end on a positive note, so I'm hiding this bit in the unreferenced footnote. *Ahem*. Seriously, *fuck* this author. The link below the linked article, from the same author, "The way out of burnout", would hopefully get flagged for a misleading title on HN. I had to page through it because I can't read more of this drivel in detail, but *again* no actionable advice; "less severe cases of burnout" are advised to take action to avoid actual burnout (great), and actual cases of burnout are advised that this no longer works for them (whoa, my mind is blown, like the ashes in the Lebowski movie).
Ask HN: What book have you given as a gift?
Interesting question, and a quite difficult one for me to answer as I'm refactoring much of my thinking presently. I'll offer a list, some authors, and some guidelines, largely based on books which radically changed my thinking.

Madeleine L'Engle's *A Wrinkle in Time*. A quite profound children's book with lifelong impacts.

https://www.worldcat.org/title/wrinkle-in-time/oclc/22421788

Frank Herbert's *Dune* introduced true complexity into storytelling for me.

https://www.worldcat.org/title/dune-frank-herbert/oclc/52908888

James Burke's books *Connections* and *The Day the Universe Changed*, and their accompanying television series, were a profound introduction to the history of technology, science, ideas, and philosophy. Though 30+ years old, they remain highly current and relevant.

https://www.worldcat.org/title/connections/oclc/4494136

https://www.worldcat.org/title/day-the-universe-changed/oclc/12049817

Jeremy Campbell's *Grammatical Man* (1984) introduced the concepts of information theory and their deep, deep, deep interconnections to a tremendous number of interconnected systems, many not explored within his book. Darwin's *The Origin of Species*, James Gleick's *Chaos*, and many of the works of Santa Fe Institute members, including John C. Holland, J. Doyne Farmer, Geoffrey West, W. Brian Arthur, David Krakauer, and Sander van der Leeuw, continue these themes.

https://www.worldcat.org/title/grammatical-man-information-entropy-language-and-life/oclc/8306673

https://www.worldcat.org/title/chaos-making-a-new-science/oclc/15366709

William Ophuls' *Ecology and the Politics of Scarcity* (1977) is perhaps the best, most comprehensive, shortest, and most readable exposition of the fact, reality, dynamics, and interactions of limits on the present phase of fossil-fuel-fed economic growth I've found. This is a book I recommend not only for the message, but for the author's clarity of thought and exposition, his meticulous research, exquisite bibliographical notes, and, given the nearly 30 years elapsed, the testability of numerous of his predictions - some failed, yes, others uncannily accurate. Rather more the latter. In a similar vein, William R. Catton's *Overshoot* looks at the ecological dynamics in more depth, with much wisdom, and the writings of Richard Heinberg cover the ground of limits fairly accessibly and more recently. Vaclav Smil, in numerous books, addresses technical factors of the profound nature of the past 250 years, and implications for the future.
Meadows et al., in *Limits to Growth*, set off much of the post-1970 discussion (though they're hardly the first to raise the question - it dates to Seneca the Elder).

https://www.worldcat.org/title/ecology-and-the-politics-of-scarcity-prologue-to-a-political-theory-of-the-steady-state/oclc/2524932

https://www.worldcat.org/title/overshoot-the-ecological-basis-of-revolutionary-change/oclc/6195764

https://www.worldcat.org/search?q=au:heinberg,+richard&qt=owc_search

https://www.worldcat.org/search?q=au:smil,+vaclav&qt=results_page

Though hardly pessimistic, Daniel Yergin's book *The Prize* (and the TV series) impressed upon me more than any other just *how much* petroleum specifically changed and transformed the modern world. Though intended by the author largely as laudatory, championing the oil industry, my read of it was exceptionally cautionary. The impacts on business, everyday life, politics, wars, industry, and transport, and the rate at which they occurred, are simply staggering. You can continue this exploration in Vaclav Smil's *Energy in World History* (1994) (I've recommended Smil independently elsewhere), and a rare but profound two-volume set I'm currently reading, Manfred Weissenbacher's *Sources of Power: How Energy Forges Human History* (2009). The sheer physicality of this book speaks to its message - it's divided into five parts: 1) Foraging Age (6 pages), 2) Agricultural Age (156 pp), 3) Coal Age (160 pp), 4) Oil Age (296 pp), and 5) Beyond the Oil Age (142 pp). That is, the ~2 million years of pre-agricultural existence are little more than a footnote, the 8,000 years of agriculture roughly equal to the 150 years of coal, and the 100 years of petroleum use roughly twice either. The oil and post-oil ages comprise their own volume.
Yergin followed up with *The Quest*, continuing the search for oil, though I've been less impressed by it.

https://www.worldcat.org/title/prize-the-epic-quest-for-oil-money-and-power/oclc/22381448

https://www.worldcat.org/title/energy-in-world-history/oclc/30398523

https://www.worldcat.org/title/before-oil-the-ages-of-foraging-agriculture-and-coal/oclc/837625798

https://www.worldcat.org/title/oil-age-and-beyond/oclc/837625970

Adam Smith's *An Inquiry into the Nature and Causes of the Wealth of Nations* is among the most-cited (and most *incorrectly* cited), least-read books of high influence I'm aware of, outside religious texts (and perhaps it *is* a religious text to some…). The author's message has been exceptionally shaped and manipulated by a powerful set of forces, quite often utterly misrepresenting Smith's original intent. Reading him in his own words, yourself, is strongly recommended. I'd also recommend scholarship on Smith, particularly by Emma Rothschild and Gavin Kennedy, though also others. Contrast with the portrayal by the propaganda and disinformation front of the Mont Pelerin Society / Atlas Network / so-called Foundation for Economic Education, and much of the modern American Libertarian movement (von Mises, Hayek, Friedman, Hazlett, Rothbard, and more recently, Norberg).
Contrast *The Invisible Hand* (1964), a compilation of essays published by the Libertarian house Regnery Press in 1966, at the beginning of the rise in public use of Smith's metaphor to indicate *mechanism* rather than *an expression of the unknown*.

There are numerous editions of Smith; I believe the Glasgow edition is the one frequently cited by Smith scholars: https://www.worldcat.org/title/glasgow-edition-of-the-works-and-correspondence-of-adam-smith-2-an-inquiry-into-the-nature-and-causes-of-the-wealth-of-nations-vol-1/oclc/832488566

https://www.worldcat.org/title/economic-sentiments-adam-smith-condorcet-and-the-enlightenment/oclc/45282974

https://www.worldcat.org/title/adam-smith-and-the-invisible-hand/oclc/820387997

https://www.worldcat.org/title/adam-smiths-lost-legacy/oclc/56598640

https://www.worldcat.org/title/invisible-hand-a-collection-of-essays-on-the-economic-philosophy-of-free-enterprise/oclc/326622

I'd like to put in recommendations on technology specifically, but am still searching for a good general text. The material's covered somewhat in the chaos and complexity recommendations above (Campbell et al.), though I'd add Joseph Tainter's *The Collapse of Complex Societies*. Charles Perrow has several excellent books, including *Normal Accidents* and *Organizing America*. I'd like to reference something concerning Unix, Linux, and programming, perhaps Kernighan and Pike's *The Unix Programming Environment*, Linus Torvalds' *Just for Fun*, Richard Stallman's *The GNU Manifesto*, and Steve McConnell's *Code Complete*. The O'Reilly book *Unix Power Tools* also encapsulates much of the strength of the Unix toolset.
All these are somewhat dated.

https://www.worldcat.org/title/collapse-of-complex-societies/oclc/15083222

https://www.worldcat.org/title/normal-accidents-living-with-high-risk-technologies/oclc/10229932

https://www.worldcat.org/title/organizing-america-wealth-power-and-the-origins-of-corporate-capitalism/oclc/939707157

https://www.worldcat.org/title/unix-programming-environment/oclc/10269821

https://www.worldcat.org/title/unix-power-tools/oclc/52381684

https://www.worldcat.org/title/free-software-free-society-selected-essays-of-richard-m-stallman/oclc/51101440

https://www.worldcat.org/title/just-for-fun-the-story-of-an-accidental-revolutionary/oclc/45610395

https://www.worldcat.org/title/code-complete-a-practical-handbook-of-software-construction/oclc/27035508
Why I'm not a big fan of Scrum
Alright, I know this has already dropped off the front page, but I figured I had something to offer to the conversation...

First, as background, I'm a software engineer, not a "scrum coach" or anything, but I've been on a Scrum team for *nine years and nine months.* (I know, right? Our first Scrum project was Nov 2006. The set of people has fluctuated over the years but it's still a pretty tight ecosystem.) Just this morning we were requested to make videos about our team(s) to explain why we work so well, so this is pretty apropos.

Second, I really did read the article through and thought it was well thought-out. Notably, I felt the author came from a different mindset than mine and a different workplace, and I *am* a fan of Scrum, so here are my feedback points - I hope they're taken as positive, helpful, and constructive:

*TL;DR*: I feel like the author is not empowered in his workplace. Time to upgrade your scrum team mindset.

* Remember that a scrum team is self-organizing and self-managing. Specifically, I'm replying to: "The daily standup is in my opinion a manifestation of a significant but unspoken component of Scrum: Control". This implies you don't know or don't practice this, and it is likely the actual source of your problems with Scrum.

* You mention disliking pointing stories because the points never match up with the points on tasks. Points on stories are assigned because you haven't created tasks yet, so you can't use task hours to add up to a story, and you need to figure out relative complexity before you start.

* Getting points (re: "I don't get the exact response that was expected, so no points for you.") - Here, the team decides whether it gets points, not a user in a review. That sounds weird, but so does a user saying "No points for you! (soup nazi voice)" in a review. Stick with me here: if you get feedback that says you need to do significant rework, it's clear there was a misunderstanding between your product owner and the business. Make a new backlog item, point it, prioritize it, move on. (If this happens more than once, consider how far apart your vision is from your user's vision of the project!)

* Demoability as a requirement in Scrum (specifically responding to "How can you demo that your code base has become more habitable?") - Business folk understand the value of "plumbing" (pipes in your house aren't visible, but they sure are handy when you want to take a poo). They don't like showing up to meetings to talk about plumbing, though, so either skip that meeting entirely or tell them what will be possible when said plumbing is complete. Point is, don't die on the hill of demoability just to say "See! Scrum sucks! I can't demo all this plumbing!"

* Meetings every two weeks - If they're painful, you're doing something wrong. If it's not ready, it doesn't go in the demo. Don't kill yourself for a demo. It may be a "sprint" but you're still *actually* in a marathon, so just pace yourself well.

* You mentioned disconnected users. If you have bored users, get better about grouping the meeting. Split it into two meetings, if need be. We don't; we say "Hey, *finance folk*, you'll want to pay attention for the first 15 minutes of the meeting, then you can go. You're welcome to stay, but you'll see app features that won't affect you."

* Daily standup - For high-performing teams, the daily standup is training wheels. Anything you learn in a daily standup should make you mad. ("Why did you wait until the daily standup to tell me you [finished X | were blocked | needed this info | are ready for me to test]!") Do them if you need to. If you do need them, ask yourself why. E.g., who isn't a communicator? Who would've blown you off were they not in this required meeting? Those guys are your blockade for more than just a daily scrum meeting.

* Sprint durations - You make it sound like you don't have a choice. Your team chooses the duration of the sprint.

* "I find the idea that you should get somewhere by sprinting repeatedly [and the rigidness of items in a sprint] rather weird." The rigidness is there to protect you from outside forces, not to prevent your team from getting the job done. E.g., it's there to stop the VP of Marketing from showing up and saying "psst, hey, could you add a blue link on the homepage?" ... "Oh, that link is wrong, make it into a popout." ... "Oh, that popout should be a Flash video" ... and then suddenly you're missing your deadlines to the VP of Finance. It's there to give you a defense mechanism against folks above you trying to work around their peers for your valuable time.

* "The Scrum coach will find fifty ways of attacking each and every one of these topics, but all of them will be in the form of one more thing. One more meeting, one more document, one more backlog, one more item in the definition of done." I don't think so, man. It's all about taking the training wheels off, not adding brakes. Maybe you have people working against you - working against being in a team. In order to protect you from them, you're all being laden with extra BS. For us, every painful thing that slowed us down was cut. Blockades were fired. The world got good.

* "Every story in scrum has to end in customer value. [...] why even bother with refactoring?" The team can be a customer. "As a software engineer, I must refactor my [blah] to facilitate [blah]." No need to jump through artificial end-user-centric hoops. When/if you mention it outside the team, just hand-wave and call it "plumbing." Everyone understands plumbing (as in, pipes in your house you can't see but appreciate every time you flush). You still get points for work done, you still get value, and you're still being a responsible engineer keeping your house clean.

* "What is the job of a software developer? Writing code? I don't think so. I think it's inventing and customizing machine-executable abstractions." Not to be trite, but I'd say the job of a software engineer is to offer solutions to business problems. Usually this is by way of "I've got a hammer, so let me hit that hangnail for you", but really, that's all we are: problem solvers... but... *shrug*

Re: ideas for alternatives:

* "One way to achieve this might be putting work items through what I would call an algebra of complexity, i.e. an analysis of the sources of complexity in a work item and how they combine to create delays. The team could then study the backlog to locate the compositions that cause the most work and stress, and solve these knots to improve the codebase. The backlog would then resemble a network of equations, instead of a list of items, where solving one equation would simplify the others by replacing unknowns with more precise values." I'd love to see some practical examples of this (a rough sketch of one possible reading follows below). It sounds more complicated than the simple off-the-hip estimates we get with points, but the idea has promise.

* "The other proposal I would have is to get rid of the review, planning and stand-up meetings." You should be doing these judiciously anyway, not dogmatically. Free yourself of the chains of dogma and just do them when they have value. The thing is that they ARE good training wheels. If you're not in a high-performing team and you skip straight to "just do it when you need it", then you never get into practice, you never get used to them, you... never do them. They do have value, and you should do them when you can demonstrate value in them. So... ascend when you're ready to.
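One possible reading of that "network of equations" proposal, sketched with entirely hypothetical item names and numbers (this is my illustration of the quoted idea, not something from the original article): backlog items form a dependency graph, each carrying its own complexity plus complexity inherited from unresolved dependencies, and "solving" one item simplifies the items that depend on it.

```cpp
// One possible reading of the "backlog as a network of equations" idea:
// each item has intrinsic complexity plus complexity inherited from its
// unresolved dependencies; resolving an item simplifies its dependents.
// Names and numbers are entirely hypothetical.
#include <cstdio>
#include <string>
#include <vector>

struct Item {
    std::string name;
    double intrinsic;       // complexity of the item itself
    std::vector<int> deps;  // indices of items this one depends on
    bool resolved;
};

// Effective complexity = intrinsic + complexity still "unknown" in dependencies.
double effective(const std::vector<Item>& backlog, int i) {
    double total = backlog[i].intrinsic;
    for (int d : backlog[i].deps)
        if (!backlog[d].resolved) total += effective(backlog, d);
    return total;
}

int main() {
    std::vector<Item> backlog = {
        {"extract billing interface", 5, {}, false},  // item 0: a "knot" others depend on
        {"add invoice export",        3, {0}, false},  // item 1
        {"migrate payment provider",  8, {0}, false},  // item 2
    };
    for (int i = 0; i < 3; ++i)
        std::printf("%-28s effective complexity: %.0f\n",
                    backlog[i].name.c_str(), effective(backlog, i));

    backlog[0].resolved = true;  // "solve one equation"...
    std::printf("-- after resolving '%s' --\n", backlog[0].name.c_str());
    for (int i = 1; i < 3; ++i)  // ...and the dependent items get simpler.
        std::printf("%-28s effective complexity: %.0f\n",
                    backlog[i].name.c_str(), effective(backlog, i));
    return 0;
}
```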
Ask HN: Why did you stop learning to code?
TL;DR: Follow your dreams, follow your passions, and never give up. Keep having ideas and making them a reality! The moment you give up is the moment your projects will fail. Each day, you have a chance to seize your opportunity.

Longer story made shorter:

I never stopped learning, though I did stop learning to code for a long time. In fact, I had taught myself programming at 12 years old. I had tried and tried and I just didn't get it. It was Visual Basic 3.0. I kept opening it up and trying new things, but I was always unsuccessful. One night, I had a dream about programming and designing applications. I woke up and wrote my first program. It was just an eight ball that chose a random response when a question was asked, but it certainly led me to write other programs throughout my teenage years, mostly to interact with America Online.

At 18, I lost interest completely in programming and stopped. Almost a decade later - after going from college to living in another country - I came home, broke, with a college degree in psychology that wasn't getting me anywhere, and I searched Craigslist, applying across the board to every single listing that seemed suitable. I applied even in areas where I knew I had no chance, or thought I had no chance, and one of those was a programming job.

I never thought I'd get a response, but this software company called me and tested me on my knowledge. While I didn't get 100%, I got a pretty good score and they hired me on the spot. The job required knowledge of Visual Basic 6.0. I never thought that would come in handy, but it did. After a few months of training, I was back in the game.

Long story short... I worked for a tyrant boss who paid me far less than what a programmer should have been making, and I ended up looking for other jobs. Instead of searching for another programming job, I began my path into web design and development. I honestly thank that boss for teaching me everything I know. I suffered a lot with him because he was so arrogant and loved to talk down to me, but if there was any great lesson I learned from him that I still use today, it is that before you can code, you must understand what you are coding.

That means: WRITE OUT EVERYTHING BEFORE YOU DO IT. EVERY PROCESS, EVERY OPERATION, AND THE GOAL OF EVERY PROCESS. If you lose the vision of your program, then you have no program. Keep the focus and you will always be successful in whatever code you write.

I had already begun building websites, either for free or cheap, so I decided to apply for other jobs about a year and a half after getting that first programming job. I saw two jobs at the same time on Craigslist, and I ended up going to interviews for both. The latter one didn't hire me right away, but eventually called me back, stating that they had interviewed over a dozen people and I was the only one qualified enough for major consideration, so I ended up getting them both. Both were web design jobs, but one ran 8 AM to 5 PM while the other ran 6 PM to 2 AM. I was also freelancing on the side and had several paying clients. So I was pretty much working until I passed out. I slept very little and never took a day off. Having student loan debt, I ended up paying off my $40k debt in under 3 years.
But the experience I got in those 2-3 years was equivalent to 5+ years of experience as a web designer and developer.

One of the companies that hired me specialized in designing weather modules that displayed energy data on kiosks for solar panels in corporate buildings. The other was a media company that paid me to design custom news web pages for big corporations like Goldman Sachs, Walmart, Kelley Blue Book, Avon, TripAdvisor, and many well-known pharmaceutical companies.

After over a year and a half of working non-stop, the solar energy web design company laid me off. I was upset, and it took me a few weeks to get used to the fact that I would no longer be working at this job. Lucky for me, they had offered me a severance package: 3 weeks' paid vacation on the condition that I would not file for unemployment. Little did they know, I had the second job, so I couldn't file for unemployment anyway. So I ended up getting paid for 3 weeks on top of getting paid for my other job. That company ended up laying off everyone 6-9 months later and couldn't even afford to pay their employees - my former co-workers - anymore, so they all had to sue to get their paychecks. It was a blessing in disguise to be the first to get laid off.

Anyways, I still worked for the night company from 6 PM to 2 AM and had my freelancing business. They would eventually give me a choice: move across the country or get laid off. I ended up moving and currently still work for them.

As far as what I do now: I picked up some big clients as a freelancer in the area who are making good money, so they pay me well to maintain their websites. I also run a few of my own websites that managed to get popular, so ad revenue kicked in and at least helps pay for the server and a few equipment items, such as my laptops when they break.

My most popular website is http://www.confessionsoftheprofessions.com which primarily focuses on what people do for work: mainly understanding jobs, careers, and the workplace. This attracts people from all walks of life and helps me network with hundreds of people and companies every year, and I get to learn a lot of new information before it is ever released, as the website is sometimes a company's primary channel of information distribution.

I also learned PHP and MySQL databases, so aside from my day job, at night I build web apps. Although I'm just getting into it, I'm hoping that it will bring in some recurring revenue. I cannot reveal all the details of these web apps at this time, but let's just say: I look around the Internet, and if I see something that can be improved or offered at a better price, I build it and become the competition.

One of these web apps is https://mypost.io/ which allows you to create beautiful, simple web pages in minutes with just a few clicks. No registration, no account, no hassle. In addition, Google Analytics is not installed, to try and help users remain completely anonymous. This has led to an increase in visitors, particularly in Russia and China, with the United States just behind.

I am always in non-stop learning mode and certainly would love to find the time to learn Ruby.
For my job, aside from building custom news webpages, I also try to predict and develop new web templates for what the Internet will look like in the years to come, particularly how people might read their news. I love my job... but if you had asked me a decade ago whether I would be where I am today, I would have probably believed it was never possible.

I will just finish with this: code is poetry, and it is all about understanding what people want in today's fast-paced, ever-changing tech world. Sure, you have Facebook, Twitter, Google, and other large networks, and you might think to yourself: what can I possibly make that hasn't already been made? The advantage we coders have over the big companies is that we can specialize in the small things and give the user a much more personal experience than the big guys can. That is something they have lost over the years, and it becomes our advantage when developing new web apps. Never stop learning, and find your audience or your customers.

As a software engineer, coder, programmer, web designer, web developer, and all the other names we have... there is no excuse to be unemployed, out of work, or bored. You have work to do. Get started.
Ask HN: What keeps you from exercising?
I think most people do not exercise because it's truly not a very rational thing to do in most contexts. Exercise in the modern world, when not part of your core occupation, is a bit of a luxury - it means you have the time, the energy, the money, and the health to do it. Many people don't. People who have more balanced lives tend to exercise more, and I think the causality runs in the opposite direction from what most people assume.

You can't "make time". You can only prioritize something else lower and replace it with exercise. This doesn't mean you ever had time. This is literally what not having enough time means. Most people don't have a nicely allotted slot for exercise in their lives. Taking it up may mean losing something rather substantial, even if it doesn't "seem" substantial. If we wanted people to exercise more, the first thing we would do is give them more time.

When I was in college, I played lacrosse. It would seem that this was very important, but it was not. It resulted in falling grades for me, and heightened depression due to those falling grades. I dropped lacrosse because my grades were far more important. The reported mental benefits of exercise, however significant they may be, will not override the benefits of spending a lot of extra time on studying and then resting from studying. In the long run, improved grades got me to a position where I could then afford the luxury of exercising. No amount of exercise could have done the same. Exercise is always, always secondary to being able to make a proper living and staying sane (unless exercise IS your way of staying sane, but it is not for many), and those things may very well eat up all your time. Before exercise can improve things, there needs to be something to improve.

I think this is a choice people make subconsciously, repeatedly. Exercise sounds like this thing you should always do because people perceive it as not having a cost, but it absolutely has a cost, and it's not very cost-effective compared to many, many things. The people you see who are doing really badly in the physical department are probably hurting for time even more than you are, and the solution is not for them to start exercising but for them to have a more balanced schedule, so it's not so terribly difficult to fit exercise into it.

OK, there was my monologue. Now I'll move on to how and why I exercise.

I mostly exercise for a somewhat medical reason: I have a very fast resting heart rate if I don't do some exercise once in a while, combined with hereditary tachycardias. I would rather exercise than take questionable medication, and having a high resting heart rate is not very pleasant in general.

How I do it is pretty simple:

1. I've made peace with the fact that I'm not going to go very far in the exercise department and that I'm doing it primarily for the basic benefits. I'm OK with not being great at it, occasionally not doing it, not doing that much of it, not growing too much, etc., because it's still a lot better than not doing it at all, and it gets the job done with regards to heart rate. This took a lot of pressure off, because I used to have a lot of issues over not doing anywhere near as well as everyone around me was.

2. Given 1, I stopped worrying about developing some amazing regimen and decided to just look for a lazy, efficient way to fit exercise into my schedule.
I chose to join a kickboxing gym for a wide selection of reasons:<p>- I don&#x27;t have to research or manage it. I just show up to the class and the instructor takes care of things;<p>- it&#x27;s in a closed gym. This is important because having to worry about weather used to really mess up my running schedule, due to high heat or thunderstorms;<p>- it&#x27;s scheduled, so I either have to go do it or not do it. I can&#x27;t &quot;do it later&quot;. Combined with it being expensive, I don&#x27;t want to not go do this thing I spent a lot of money on;<p>- while scheduled, it&#x27;s still flexible enough that I can somewhat adjust to wonkiness in my schedule, but not too much;<p>- it&#x27;s very efficient for time spent compared to most other forms of exercise I&#x27;ve seen;<p>- it&#x27;s relatively well-rounded so I&#x27;m not doing either just cardio or just weights;<p>- it&#x27;s social so there&#x27;s the added effect of performing a bit better because other people are around, the coach is watching you, etc.;<p>- it scales off of everything so it&#x27;s trivial to branch out if I want to add more cardio, lifting, etc., to it;<p>- kickboxing gyms are not rare so if I move somewhere I&#x27;d probably still be able to find one.<p>Cons: it&#x27;s expensive. I can afford it, a lot of people can&#x27;t.
Why didn't Larrabee fail?
O.K. fucking major rant below. Disclaimer: I worked for a vendor selling large quantities of pretty much anything in the HPC space at the time KNC came to market, and I now work for a different (larger) HPC vendor.<p><pre><code> When I say &quot;Larrabee&quot; I mean all of Knights, all of MIC, all of Xeon Phi, all of the &quot;Isle&quot; cards - they&#x27;re all exactly the same chip and the same people and the same software effort. Marketing seemed to dream up a new codeword every week, but there was only ever three chips: Knights Ferry &#x2F; Aubrey Isle &#x2F; LRB1 - mostly a prototype, had some performance gotchas, but did work, and shipped to partners. Knights Corner &#x2F; Xeon Phi &#x2F; LRB2 - the thing we actually shipped in bulk. Knights Landing - the new version that is shipping any day now (mid 2016). </code></pre> I can kind of see what the author is getting at here but it&#x27;s worth reminding general readers that KNL is a wildly different beast from KNF or KNC.<p>1. It&#x27;s self hosting: it runs the OS itself; there is no &quot;host CPU&quot;.<p>- Yes KNC ran its own OS but it was in the form of a PCI-E card which still had to be put into a server of some kind.<p>- Yes KNL is slated to have PCI-E card style variants but the self hosting variants are what will be appearing first and I honestly don&#x27;t think the PCI-E variants will gain much if any market traction.<p>2. It features MCDRAM which provides a massive increase in total memory bandwidth within the chip. Arguably this is necessary with such large core counts.<p>3. Some KNL variants will have Intel&#x27;s Omni-Path interconnect directly integrated which should further drive down latency in HPC clusters.<p><pre><code> Behind all that marketing, the design of Larrabee was of a CPU with a very wide SIMD unit, designed above all to be a real grown-up CPU - coherent caches, well-ordered memory rules, good memory protection, true multitasking, real threads, runs Linux&#x2F;FreeBSD, etc. </code></pre> Here&#x27;s an interesting snippet: drop &quot;Larrabee&quot; from the above and replace it with &quot;Xeon&quot;. I&#x27;m just making the point here that the Xeon and Xeon Phi product lines are only going to look more similar over time. For a while now the regular Xeons have been getting wider, which is to say more cores and fatter vector units. The Xeon Phi line started out with more cores but is having to drive the performance of those cores up. Over time we&#x27;re going to see one line get features before the other, e.g. AVX512 is in KNL and will be in a &quot;future Xeon&quot; (Skylake). And it&#x27;s widely rumored that &quot;future Xeons&quot; will support some form of non-volatile main memory, something not on any Xeon Phi product roadmap today. The main differentiator will be that Xeon products need to continue to cater to the mass market whereas Xeon Phi can do more radical things to drive compute intensive workloads (HPC).<p><pre><code> Larrabee, in the form of KNC, went on to become the fastest supercomputer in the world for a couple of years, and it&#x27;s still making a ton of money for Intel in the HPC market that it was designed for, fighting very nicely against the GPUs and other custom architectures. </code></pre> Yes, at the time Tianhe 2 made its debut it was largely powered by KNC and it placed at number 1 on the Top 500.<p>Did KNC as a product make a lot of money for Intel? Probably not. Yes they shipped a lot of parts for Tianhe 2, but for other HPC customers, not so much.
At the time, putting a KNC card against a Sandy Bridge CPU, it wasn&#x27;t leaps and bounds faster. Yes, you&#x27;re always going to have to refactor when making such a change in architecture, but the gains just weren&#x27;t worth it in most cases. Not to say there haven&#x27;t been useful deployments of KNC that have contributed to &quot;real science&quot;, but it was not a wild success by any measure.<p>As for &quot;fighting very nicely against the GPUs and other custom architectures&quot;, I don&#x27;t think so. When KNC got to the market Nvidia GPUs already had a pretty well established base in the HPC communities. Many scientific domains already had GPU optimized libraries users could pick off the shelf and use. Whilst KNC was cheaper than the Kepler GPUs on the market at the time, it wasn&#x27;t much cheaper. Again, back to regular Xeons: for a cost&#x2F;performance benefit KNC wasn&#x27;t worth it for a lot of people.<p><pre><code> Its successor, KNL, is just being released right now (mid 2016) and should do very nicely in that space too. </code></pre> I do agree with this, KNL is going to do very well. There are whole systems being built with KNL rather than large clusters with some KNC or some GPUs.<p><pre><code> 1. Make the most powerful flops-per-watt machine. </code></pre> Not sure what is meant here, the most efficient machine in the top 10? Tianhe 2 is not an efficient machine by any stretch of the imagination. For reference here&#x27;s the top 10 of the Green 500 for when Tianhe 2 placed as number 1 on the Top 500.<p><a href="http:&#x2F;&#x2F;www.green500.org&#x2F;lists&#x2F;green201306" rel="nofollow">http:&#x2F;&#x2F;www.green500.org&#x2F;lists&#x2F;green201306</a><p><pre><code> SUCCESS! Fastest supercomputer in the world, and powers a whole bunch of the others in the top 10. Big win, covered a very vulnerable market for Intel, made a lot of money and good press. </code></pre> Yes, KNC provides the bulk of the computing power for Tianhe 2, but at the time of writing the above it is number 2, not number 1. Secondly, that is the ONLY machine in the top 10 that uses KNC AT ALL.
New Brazilian Banking Trojan Uses Windows PowerShell Utility
I&#x27;m having trouble understanding what the OP is describing:<p>It looks like the OP did not make clear just how receiving the malicious e-mail could lead to a malware infection (<i>computer virus</i>).<p>So, in more detail, the OP wrote<p>&gt; The banking Trojan is being delivered via a phishing campaign where emails are masquerading as a receipt from a mobile carrier. A malicious .PIF (Program Information File) attachment is used to attack the target’s PC. PIF files tell MS-DOS applications how to run in Windows environments and can contain hidden BAT, EXE or COM programs that automatically execute after the host file is run.<p>So, below I go step by step through how standard Internet e-mail works and, thereby, show that it&#x27;s not easy to get a computer virus infection just from receiving malicious e-mail. In particular, for the OP, I don&#x27;t see how there could have been a virus infection. So, it appears that the OP omitted one or more operations or steps taken by a user and necessary to get such an infection. For better understanding of system security, I want to know what those steps could be.<p>=== Getting Started<p>So, someone receives some e-mail with an e-mail attachment that is a .PIF file. Okay.<p>=== E-Mail 101<p>Early in my use of the Internet and e-mail, which was also early in the growth of the commercial Internet, I read several of the more important standards documents. So I was reading some of the Requests for Comments (RFCs) of the Internet Engineering Task Force (IETF). So I read some RFCs for Simple Mail Transfer Protocol (SMTP). For e-mail attachments, I read about the Multipurpose Internet Mail Extensions (MIME).<p>The e-mail software I had was awful, e.g., it put a separate icon on my screen for each e-mail message sent or received! Outrageous! So, long before 33,000 e-mails, say, to pick a random number, the screen would become relatively full! So, I just wrote my own e-mail software.<p>Writing such e-mail software is actually surprisingly easy to do: Basically, in such software you are just using the standards in the RFCs for Post Office Protocol 3 (POP3). So, to do this, it&#x27;s enough to write some relatively simple software that communicates over the Internet the usual way, that is, using the RFCs for Transmission Control Protocol&#x2F;Internet Protocol (TCP&#x2F;IP).<p>And as an alternative and without writing any software at all, it&#x27;s possible to be even simpler: Just use the standard, interactive, command line, TCP&#x2F;IP application program Telnet to send or receive a file, the file having the e-mail.<p>With such approaches to sending&#x2F;receiving e-mail, most possibilities for mystery about e-mail disappear. In a word, that approach, POP3, to e-mail is <i>simple</i>. In two words, it&#x27;s <i>dirt simple</i>. In the sense of computer security, it should be about as safe as anything could be; if there is a security problem, then it is not from the little TCP&#x2F;IP or Telnet usage but would have to be in other hard&#x2F;software.<p>=== E-Mail 102<p>So, in particular, with attachments or not, that is, with MIME <i>parts</i> or not, an e-mail message as sent or received is just a string of bytes, 8-bit bytes, just bytes. Computers do well handling strings of bytes, just any bytes at all, safely, that is, without getting infected with a computer virus.<p>=== My DIY E-Mail Software<p>For the e-mail software I wrote, I had a file on my hard disk for receiving e-mail.
As my e-mail software received an e-mail message, the software just wrote (appended) the bytes of the e-mail message to the end of that file. Then to read the e-mail, I could just use my favorite text editor to read the file.<p>Then, for security, the text editor was just reading a file of bytes, just bytes, just 8-bit bytes, and trying to display them.<p>Yes, when the file got too big, I started another one. Actually, I still have that old e-mail software and those old e-mail files. So, looking at that old work, I got 234 such e-mail files from about 4&#x2F;1997 to 8&#x2F;2005. The average file size was 1,137,251 bytes. The total number of e-mail messages was 29,978 (not quite the famous 33,000). So, I used my little e-mail software for 8 years. It worked quite well.<p>=== A Nice Feature<p>There was a nice feature: Each e-mail message sent or received could be identified by just the file and a time-date stamp within the file. So, elsewhere in my work, that pair served as a link or pointer to individual e-mail messages. Then, given such a link, anywhere on my system, in my favorite editor I could give just one keystroke and see the e-mail message. Nice.<p>=== My DIY E-Mail Security<p>For more on security, I very much doubt that there exists a stream of bytes that, when read by my favorite text editor, will result in a virus infection. So, so far, when receiving e-mail with the software I wrote, I see no way my computer could get a virus infection. That is, sure, maybe there is a lot of malicious software in attachments in some e-mail I received, but that software would just be written as bytes to that file my e-mail software appended to. There those malicious bytes would be and remain, harmlessly. The malicious bytes could be read by my favorite editor, all with full safety.<p>=== My DIY E-Mail and MIME<p>Okay, there is more we should understand: In the RFC for MIME, an <i>attachment</i> can be a file of some image, some sounds, some video, from a spreadsheet, an Adobe Portable Document Format (PDF) file, some quadruple encrypted, triple tricky, official Sky Captain National Security Protocol, total secrets of the universe file. But with the MIME standards, each such attachment, as sent or received, is just a very non threatening, essentially harmless, really innocuous, string of total gibberish characters in the long standard 7-bit American Standard Code for Information Interchange (ASCII) printable characters.<p>Exercise: Look up the exact list and see that exactly 65 such printable characters are used.<p>Answer: The characters are:<p>ABCDEFGHIJKLMNOPQRSTUVWXYZ<p>abcdefghijklmnopqrstuvwxyz<p>0123456789<p>+&#x2F;<p>So, there are exactly 64 characters. The character &#x27;=&#x27; not in that list is used at the end of an encoding to handle the situation that the number of bits in the input data is a multiple of 8 while the number of bits represented by the 64 characters is a multiple of 6.<p>Right, to send an e-mail attachment, take the string of bytes of the attachment and convert it to those 64 printable characters followed possibly by one or two &#x27;=&#x27; characters. Right, just how to do this is called <i>base 64 encoding</i>. 
And in the exercise, we saw where the 64 comes from.<p>So, in e-mail, as it is received, each attachment is just a gibberish string of these 64 printable ASCII characters with possibly some characters &#x27;=&#x27; at the end.<p>So, all those characters, being so simple, are easy to view with common text editors with, yes, lots of ASCII gibberish but no risks or dangers. And, for the users&#x27; viewing pleasure, the gibberish is formatted into separate lines with &lt;= 76 characters per line.<p>Then, sure, if you have received e-mail with a MIME attachment, you can do a base 64 <i>decode</i> to get back to the string of bytes for an image, audio recording, video clip, PDF file, etc.<p>To convert to&#x2F;from base 64 encoding is easy; writing the software is an easy, elementary programming exercise. Long ago I wrote such software in the old IBM interpretive language Rexx (a short modern sketch appears at the end of this comment).<p>=== My DIY E-Mail, MIME, and Security<p>So, so far in our handling of e-mail, we have done nothing dangerous or risky; really, there is no reasonable way that receiving e-mail could get our computer a virus infection.<p>Well, eventually attachments became more common in e-mail. So, right, for such attachments, with my home-brew e-mail software, I could receive e-mail, use my editor to put the base 64 gibberish of each attachment in its own file, on each such file run my little Rexx software for base 64 decoding, and do what I was willing to do, what risks I was willing to take, with the resulting decoded files.<p>So, for an example, if the e-mail claimed that some attachment was intended to have file extension JPG, then I could make the extension of the file JPG and give the file to, say, the standard Windows Picture and FAX Viewer. If the file was really a JPG file, then the viewer should so confirm and just display the image. Else the viewer should report that the file was not a legal JPG file. In either case, there should be no harm done, no matter what the heck was really in the file. Here, of course, we would have to trust the Windows Viewer -- hopefully we could do that.<p>=== E-Mail is Risky?<p>Yes, long ago, early in the commercial Internet and e-mail, there were claims that receiving a malicious e-mail could easily result in a virus infection. Considering my understanding of e-mail as above and hearing such virus claims, I was ready to scream HOW? How the heck could receiving e-mail be risky?<p>That is, according to the e-mail standards as above, e-mail received is just a string of bytes. The bytes are all printable. Attachments are also printable because they are in base 64 encoding as above.<p>So, such e-mail received is just harmless. Put that e-mail and&#x2F;or its attachments in some files, and they are still harmless.<p>=== The OP<p>So, from the e-mail standards reviewed above, it appears that the OP omitted one or more operations or steps taken by a user and necessary to get such an infection.<p>Any ideas what those operations or steps were?
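A minimal sketch of the same flow in modern Python, offered only as an illustration (the host name, credentials, and file names below are placeholders, not anything from the original post): pull messages with POP3, append the raw bytes to one mailbox file, and treat base 64 decoding of an attachment as a separate, deliberate step.<p><pre><code>
# Sketch only: POP3 retrieval plus base 64 decoding with the Python
# standard library.  Host, credentials, and file names are placeholders.
import base64
import poplib

HOST, USER, PASSWORD = "pop.example.com", "user", "secret"
MAILBOX_FILE = "mailbox.txt"

conn = poplib.POP3_SSL(HOST)
conn.user(USER)
conn.pass_(PASSWORD)

count, _size = conn.stat()
with open(MAILBOX_FILE, "ab") as out:
    for i in range(1, count + 1):
        _resp, lines, _octets = conn.retr(i)      # each message is just bytes
        out.write(b"\r\n".join(lines) + b"\r\n")  # append; nothing is executed
conn.quit()

# Decoding a MIME part is a separate choice made later.  The base 64
# alphabet is A-Z, a-z, 0-9, the plus sign and the slash, with the equals
# sign for padding: 64 data characters plus one padding character.
gibberish = "SGVsbG8sIHdvcmxkIQ=="        # an example encoded attachment
decoded = base64.b64decode(gibberish)     # decodes to b"Hello, world!"
</code></pre> As the comment argues, nothing in this pipeline executes the message contents; any risk appears only when a user later chooses to open the decoded file with software that trusts it.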
Kung Fu, Once Central to Hong Kong Life, Is Waning
There&#x27;s a lot of disinformation about martial arts in any online discussion, especially because people are obsessed with determining which martial art is &quot;the best&quot;. That&#x27;s a sort of reductionist argument that never takes into account the history of each martial art and why&#x2F;where it is effective or ineffective.<p>Let&#x27;s start with some history: Judo was never heavily practiced historically, because you had a sword. Judo techniques were used as a last resort in case you lost your weapon. In a society where the peacekeepers carry weapons, you are at a serious disadvantage with any kind of unarmed combat, be it BJJ (which I practice, for sport), Muay Thai (which I practiced briefly and which is effective in 1:1 situations in the stand-up and clinch), even Krav Maga (I tried this), or Chinese martial arts.<p>So here&#x27;s an argument for why certain martial arts are &quot;better&quot; than others, depending on the situation and the era you live in:<p>Kendo&#x2F;sword martial arts - you are likely going to war. The Chinese martial arts fall under this category, where you can quickly use certain forms to train thousands of troops at once to be &quot;more ready&quot; in general warfare than your enemy. Your troops will have slightly more conditioning, and practicing forms lets you have a 1:many trainer to student ratio. Even if the martial arts are not the best for close-quarter 1:1 combat, or ground fighting, it makes no sense to spend time training your troops for these situations if the odds are VERY stacked against them in a wartime situation.<p>Krav Maga is a great martial art to teach for modern, urban self defense because the movements are basic. If you might end up in a bar fight, for instance, Krav Maga teaches something called stacking: when you have to maximize your chances of getting away from multiple attackers, your goal is to quickly subdue one attacker and then, using head control, use that attacker&#x27;s body to shield your own while you create space so you can escape. There are more advanced techniques that teach knife or gun defense if you are caught off guard, but it should be noted that if someone with a weapon attacks you, you are probably at a disadvantage unless you are highly skilled! Training Krav Maga takes a bad probability of survival and turns it into a less bad probability.<p>Back to Judo: judo and karate took off as effective when peacekeepers could not carry weapons, and had to learn to effectively do their jobs without weapons. The key here is that society had to change so that weapons were only carried by the warrior caste.<p>Filipino martial arts - unarmed, sticks, knives. No swords or spears, because this isn&#x27;t what they went to war with. Surprisingly effective now, because you are likely to have a cane or umbrella, and you will have a slightly better chance defending yourself against a knife attack, which a modern attacker is more likely to have than a sword or a spear.<p>Now where does Brazilian Jiu Jitsu, which many people talk about, fit in? Brazilian Jiu Jitsu is an extremely effective martial art for one-on-one, unarmed confrontations. For children, if a bully throws you to the ground and gets on top of you and starts to hit you, you can escape and get away. If ONE PERSON attacks you in public without a weapon and you both go to the ground, you can probably subdue them or escape if you have practiced BJJ. If two people attack you, or if it&#x27;s one group on another group, BJJ is probably not so effective.
I saw a version of MMA on TV that was effectively a 5v5 team fight, and it quickly turned into who could tap or KO another member first; then it became a 5v4, then 5v3, and so on. The effectiveness of BJJ in group fights is greatly diminished - now we are back to martial arts like Krav Maga, where you might have learned to fight groups, or even the Chinese martial arts.<p>BJJ and Muay Thai have really taken off because of MMA&#x2F;UFC. The rules used to heavily favor BJJ practitioners, who, in that particular environment, thrive. Karate practitioners were not traditionally used to ground fighting, and the Jiu Jitsu fighters would stick to their strategy of taking someone to the ground and quickly submitting them. UFC is a 1:1, no-hitting-eyes-or-neck-or-groin fight. No stomping. In the early era of UFC, there weren&#x27;t any rounds. This GREATLY favored BJJ practitioners, who can simply hold someone indefinitely on the ground and tire them out. If you&#x27;ve never grappled before, even if you are in great shape - HOLY MOLY, prepare to get wiped out in minutes.<p>Many of you may disagree with some of the details of my interpretations, but the general spirit of my argument is this: different martial arts thrive in different eras because society changes. Claiming one martial art is better than another without adding context is disingenuous.
Amazon Vehicles
A number of people have commented about the quality (or its lack) of Amazon&#x27;s &quot;this fits&#x2F;doesn&#x27;t fit your car&quot; data. Let me explain a little about how that works:<p>In North America, this fitment data conforms to a popular (but not the only) schema called ACES (Aftermarket Catalog Exchange Standard). It&#x27;s a long and pretty well-defined XML schema that specifies things like year, make, model, and all kinds of attributes like engine configuration, fuel type, wheelbase, etc., as well as brand name, part number, quantity, and so on. Fitment data providers create an XML document according to this schema and populate it based on the products they say fit particular vehicles.<p>For example, say you are the manufacturer of a FRAM oil filter (FRAM is a common aftermarket filter brand in the US). It has a part number A123. You know it fits the 2000-2010 Honda Accord with engine XYZ. You add to your fitment XML document an entry for this filter that specifies brand=FRAM, partnumber=A123, parttype=oil filter, years=2000-2010, make=Honda, model=Accord, engine=XYZ. Now, you (or some 3rd party you designate that specializes in turning your catalog of products into fitment data), sends this fitment document on to companies that care about fitment information because they are selling your FRAM oil filter and want to be sure their customers can tell if it will fit their car or not (Amazon, Ebay, Rock Auto, etc.).<p>They take that fitment data and join it against their product database and out pops the yes&#x2F;no fitment data you see on their website.<p>Now scale it up: there is no requirement that only one company produce these fitment records. Anyone else can produce a fitment record that says all the above, but for the 2000-2010 Honda Civic. Maybe that&#x27;s a mistake, but as a receiver of the fitment data, Amazon or Ebay can&#x27;t know it&#x27;s a mistake--they can only presume the fitment data they are given is valid.<p>Now, complicate it further by adding the human element: e.g. some fitment data providers have fitment data in Excel spreadsheets. Some poor human fat-fingered that data from the spreadsheet into XML and maybe they left off the leading 0 on all the part numbers (because that&#x27;s Excel&#x27;s default for number cells). Oops. Now none of those fitment records will match to any parts in the database of the companies that sell them. Or they&#x27;re entering this data off a piece of paper and can&#x27;t tell if that&#x27;s a 0 (zero) or an O (oh). Oops.<p>Or, worse: the fitment provider gets the wrong vehicle ID (because the schema is all based on IDs, not human-readable names) and submits fitment data that says that FRAM filter fits a 2000-2010 Tesla Model S. Well, that&#x27;s extremely unlikely, but the receiver of the fitment data is a machine and the machine doesn&#x27;t know that is completely ridiculous.<p>Or equally bad: the fitment provider says it fits a 2000-2010 Honda Accord, but doesn&#x27;t specify the engine type at all. Now, Amazon&#x27;s machines see that and think &quot;the customer only needs to tell me their year, make, and model&quot;. A smart human knows it also needs the engine configuration, but the machine can&#x27;t easily know that because none of its data specifies an engine configuration is needed for fitment. 
So Amazon sends out the filter because its data says &quot;it fits!&quot; and the customer is unhappy.<p>So, in the end, the customer is displeased because Amazon shipped them an oil filter that can&#x27;t possibly fit their car, even though Amazon told them it would; Amazon can only go on what the data tells it.<p>A closely related problem is that sometimes a seller will have a product that they know should require fitment data (like an oil filter), but there has been no fitment data submitted for it. In that case, the company can neither say it fits, nor that it doesn&#x27;t fit--it doesn&#x27;t have enough data to make the determination. This is common as the new model-years start to hit the marketplace: the car exists; you can buy it; you can buy parts for it; however, the aftermarket fitment data hasn&#x27;t caught up with the car&#x27;s attributes yet. It also happens when Amazon has fitment data for some models but not for others, even if the part will fit those other models--without data saying so, there&#x27;s no way to say &quot;yes that will fit your new 2017&quot;.<p>To complicate it even further, if you&#x27;re talking about this stuff outside of North America, there are other schemas and data providers with very little overlap with the NA offerings. This is visible in the very different fitment experience you see in most EU countries on Amazon (for example, try <a href="https:&#x2F;&#x2F;www.amazon.co.uk&#x2F;auto" rel="nofollow">https:&#x2F;&#x2F;www.amazon.co.uk&#x2F;auto</a> and select a car).<p>The takeaway? Unhappy customers result when you have complex data quality problems, and sometimes it&#x27;s no fault of the implementation at all.
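To make the three possible outcomes concrete, here is a toy sketch of the &quot;join fitment data against the product database&quot; step described above. It is purely illustrative: the field names, parts, and vehicles are made up, and it is in no way Amazon&#x27;s code or the real ACES schema. It does show why &quot;fits&quot;, &quot;does not fit&quot;, and &quot;unknown&quot; are genuinely different answers, and how an omitted engine attribute silently widens what the machine will claim fits.<p><pre><code>
# A toy model of fitment matching.  Field names, parts, and vehicles are
# invented for illustration; the real ACES schema is far richer.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Fitment:
    brand: str
    part_number: str
    make: str
    model: str
    year_start: int
    year_end: int
    engine: Optional[str]    # None means the provider omitted the engine

FITMENTS = [
    Fitment("FRAM", "A123", "Honda", "Accord", 2000, 2010, "XYZ"),
]

def check_fit(brand, part_number, make, model, year, engine):
    """Return 'fits', 'does not fit', or 'unknown' for one product and vehicle."""
    records = [f for f in FITMENTS
               if f.brand == brand and f.part_number == part_number]
    if not records:
        return "unknown"     # no fitment data submitted: nothing can be said
    for f in records:
        right_vehicle = (f.make, f.model) == (make, model)
        right_year = year in range(f.year_start, f.year_end + 1)
        # If the provider omitted the engine, any engine "matches" --
        # which is exactly how wrong parts end up getting shipped.
        right_engine = f.engine is None or f.engine == engine
        if right_vehicle and right_year and right_engine:
            return "fits"
    return "does not fit"

print(check_fit("FRAM", "A123", "Honda", "Accord", 2005, "XYZ"))   # fits
print(check_fit("FRAM", "A123", "Honda", "Civic", 2005, "XYZ"))    # does not fit
print(check_fit("FRAM", "B999", "Honda", "Accord", 2005, "XYZ"))   # unknown
</code></pre> The interesting failure modes in the comment above are all data problems that this logic cannot see: a fat-fingered part number simply produces &quot;unknown&quot;, while a record with the wrong vehicle ID or a missing engine produces a confident, wrong &quot;fits&quot;.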
Ask HN: How do you handle DDoS attacks?
Post was too long <a href="http:&#x2F;&#x2F;pastebin.com&#x2F;48J9Ufdd" rel="nofollow">http:&#x2F;&#x2F;pastebin.com&#x2F;48J9Ufdd</a> :&lt;<p>Random &quot;wisdom&quot;, in no particular order, more like do&#x27;s and don&#x27;ts that I picked up dealing with (and executing) DoS&#x2F;DDoS attacks.<p>Testing, testing, testing: regardless of what mitigation you choose and how you implement it, test it and test it well, because there are a lot of things you need to know.<p>Know and understand the exact effect that the DDoS&#x2F;DoS mitigation has, the leakage rate, what attacks can still bring you down, and the cost of mitigation.<p>Make sure you do the testing at different hours of the day; if not, you had better know your application and every business process very well, because I&#x27;ve seen cases where a 50GB&#x2F;s DDoS would do absolutely nothing except on Tuesday and Sunday at 4AM, when some business batch process would start and the leakage from the DoS attack + the backend process would be enough to kill the system. Common processes that can screw you over are backups, site-to-site or co-location syncs&#x2F;transfers, and various database-wide batches; pretty common times for this are anything in the early morning, end of week, end of month, end of quarter etc.<p>If you are using load or stress testing tools on your website make sure to turn off compression: it&#x27;s nice that you can handle 50,000 users that all use GZIP, but the attackers can choose not to.<p>Understand what services your website&#x2F;service relies on for operation; common ones are DNS, SMTP etc. If I can kill your DNS server people can&#x27;t access your website, and if I can kill services that are needed for the business aspect of your service to function, like SMTP, I&#x27;m effectively shutting you down also.<p>If you are hosting your service on Pay As You Go hosting plans make sure to implement a billing cap and a lot of loud warnings; your site going down might not be fun, but it&#x27;s less fun to wake up to a 150K bill in the morning. If you are a small business a DoS&#x2F;DDoS can result in very big financial damages that can put you out of business.<p>Understand exactly how many resources each &quot;operation&quot; on your website or API costs in terms of memory, disk access&#x2F;IOPs, networking, DB calls etc; this is critical to know where to implement throttling and by how much.<p>If you implement throttling always do it on the &quot;dumber&quot; layer and the layer that issues the request; for example, if you want to limit the number of DB queries you execute per minute to 1000, do it on the application server, not on the DB server (see the sketch at the end of this comment). This is both because you always want to use &quot;graceful&quot; throttling, which means the requester chooses not to make a request rather than the responder having to refuse to respond, and because it also allows you to implement selective throttling; for example you might want to give higher priority to retrieving data of existing users than to allowing new users to sign up, or vice versa.<p>Do not leak IP addresses; this applies both to load balancing and to using scrubbing services like Cloudflare. When you use services like Cloudflare make sure that the services you protect are not accessible directly; make sure someone can&#x27;t figure out the IP address of your website&#x2F;API endpoint by simply looking at the DNS records. Common pitfalls are www.mysite.com -&gt; Cloudflare IP while mysite.com&#x2F;www1.mysite.com&#x2F;somerandomstuff.mysite.com reveal the actual IP address.
Another common source is having your IP address revealed via hard-coded URLs on your site or within the SDK&#x2F;documentation for your API. If you have moved to Cloudflare &quot;recently&quot; make sure that the IP address of your services is not recorded somewhere; there are many sites that show historic values for DNS records. If you can, it is recommended to rotate your IP addresses once you sign up for a service like Cloudflare, and in any case make sure you block all requests that do not come through Cloudflare.<p>When you do load balancing do it properly: do not rely on DNS for LB&#x2F;round robin. If you have 3 front end servers do not return 3 IP addresses when someone looks up www.mysite.com; put a load balancer in front of them and return only 1 IP address. Relying on DNS for round robin isn&#x27;t smart, it never works that well, and you are allowing the attacker to focus on each target individually and bring your servers down one by one.<p>Do not rely on IP blacklisting, and for whatever reason do not ever ever ever use &quot;automated blacklisting&quot; regardless of what your DDoS mitigation provider is trying to tell you. If you only service a single geographical region e.g. NA, Europe, or &quot;Spain&quot; you can do some basic geographical restrictions, e.g. limit access from say India or China; this might not be possible if you are say a bank or an insurance provider and one of your customers has to access it from abroad. Ironically this impacts the sites and services that are the easiest to optimize for regional blocking: for example, if you only operate in France you might say ha! I&#x27;ll block all non-French IP addresses, but this means that all an attacker needs to do is use IP spoofing and go over the entire range of French ISPs, and your automated blacklisting blocks all of France; this only takes a few minutes to achieve! If you are blacklisting commercial service provider IPs make sure you understand what impact it can have on your site; blacklisting DigitalOcean or AWS might be easy, but then don&#x27;t be surprised when your mass mail services or digital contract services stop working. If you do use some blacklisting&#x2F;geoblocking use a single list that you maintain; do not just select &quot;China&quot; in your scrubbing service, firewall, router, and WAF, as all of them can have different Chinas, which causes inconsistent responses. Use a custom list and know what is in it.<p>Do not whitelist IPs! I&#x27;ve seen way too many organizations that whitelist IPs so those IPs would not go, for example, through their CDN&#x2F;scrubbing service, or would be whitelisted on whatever &quot;Super Anti DDoS Appliance&quot; the CISO decided to buy into this month. IP spoofing is easy! Drive-by attacks are easy! And since common IPs to whitelist are things like your corporate internet connection, nothing is easier for an attacker than to figure those out. They simply need to google for the network blocks assigned to your organization if you are big enough and&#x2F;or were incorporated prior to 2005, or send a couple of thousand phishing emails and do some sniffing from the inside.<p>Understand collateral damage and drive-by attacks. Know who (if anyone) you share your IP addresses with and figure out how likely they are to be attacked; yes, everyone will piss off someone with keyboard access these days, but there are plenty of types of businesses that are more common as targets, and if you are hosting in a datacenter that also provides hosting for a lot of DDoS targets you might suffer also.
For drive-by attacks you need to have a good understanding of the syndication of your service and, if you are a B2B service provider, of your customers. If you provide some embedded widget to other sites and they are being DDoSed, you might get hit also if it is a layer 7 attack. If you are providing a service for businesses, for example an address validation API, you might get hammered if one of your clients is being DDoSed and the attacker is hitting their sign up pages.<p>Optimize your website; remove or transfer large files. Things like documents and videos can be moved to various hosting providers (e.g. YouTube) or CDNs. If you are hosting large files on CDNs make sure they are only accessible via the CDN; in fact, for the most part it&#x27;s best if you make sure that whatever is hosted on the CDN is only accessible via the CDN, as this prevents attackers from accessing the resources on your own servers by targeting your IP instead of the CDN. A common pitfall would be that some large file is linked on your website as cdn1.mysite.com&#x2F;largefile but it&#x27;s also accessible directly from your servers via www.mysite.com&#x2F;largefile.<p>Implement anti-scripting techniques on your website: captcha, DOM rendering (it makes it very expensive for the attacker to execute layer 7 attacks if they need to render the DOM to do so), and make sure that every &quot;expensive&quot; operation is protected with some sort of anti-scripting mechanism. Test this! Captchas that are poorly implemented are no good, and I don&#x27;t just mean captchas that are somehow predictable or easy to read with CV. If you have a service that looks like LB &gt; Web Frontend &gt; Application Server &gt; DB, make sure that the captcha field is the 1st thing that is validated, and make sure it&#x27;s validated in the web frontend or even in the LB&#x2F;reverse proxy. If instead you hit the application server, validate all the fields, do the work, and only validate the captcha just before sending it to the DB, it won&#x27;t help to protect you against DoS&#x2F;DDoS much, if at all.<p>When you implement any mitigation design it well and understand leakage and &quot;graceful failure&quot;; it&#x27;s better for the dumb parts of your service to die and restart than it is for the more complicated parts. For example, if after all of your mitigation you still have 10% leakage from your anti-DDoS&#x2F;scrubbing service to your web frontend and from it there is a 5% leakage to your DB, do not scale the web frontend to compensate for the leakage from your scrubbing service to the point of putting your DB at risk. A web server going down is mostly a trivial thing, as it would usually bring itself back up on its own without any major issues; if your DB gets hammered, well, it&#x27;s a completely different game: you do not want to run out of memory or disk and have to deal with cache or transaction log corruption or consistency issues on the DB. Just get used to the fact that no matter what you do and implement, if someone wants to bring you down they will; do what you can and what is economical for you to mitigate against certain attacks, and for the rest design your service with predicted points of failure that would recover on their own in the most graceful manner and in the shortest period.
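To illustrate the earlier point about throttling at the &quot;dumber&quot; layer (the layer that issues the requests), here is a minimal token-bucket sketch. It is not from the post; the limits, the split between existing users and sign-ups, and all the names are made up. The idea is simply that the application server declines to issue a query (graceful throttling) rather than the DB having to refuse it.<p><pre><code>
# Sketch of graceful, selective throttling on the layer that issues
# the requests.  Numbers and categories are illustrative only.
import time

class TokenBucket:
    def __init__(self, rate_per_minute, burst):
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.per_second = rate_per_minute / 60.0     # refill rate
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.per_second)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False          # the caller simply does not make the request

# Selective throttling: existing users get a bigger DB budget than sign-ups.
db_budget = {
    "existing_user": TokenBucket(rate_per_minute=1000, burst=50),
    "signup":        TokenBucket(rate_per_minute=100, burst=10),
}

def run_query(kind, query):
    if not db_budget[kind].allow():
        raise RuntimeError("throttled before reaching the DB")
    # ... actually send the query to the database here ...
</code></pre> The same shape works one layer up: the web frontend can hold a budget for calls into the application server, so that a flood dies at the cheapest layer instead of rippling down into the DB.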
How I Built a Custom Camper Van (2015)
I&#x27;ve spent 11 of the past 20 years &quot;homeless&quot; by choice, following various practices from living on a boat, to living in a truck camper, to traveling the world living in AirBnBs, to occasionally renting apartments but never really living there. But I&#x27;ll come back to that.<p>I want to address several people&#x27;s concerns about this guy&#x27;s lifestyle and the presumed limitations:<p>0. First off, I loved that he was using Soylent. That solves a big problem of needing dried food but not liking freeze dried food. If I were to go back to vehicle living I would use a combo of Soylent and Sous Vide. Sous Vide cookers like the Anova are very small, and you can do it just with boiled water, zip lock bags and a thermometer if you want. The results are really fantastic. 30 seconds searing steaks on the grill then 40 minutes in the bath and you have better steaks than you can get at any restaurant for less than $50-- and you can do that on top of a mountain if you wanted! So the food situation is much better than the days of crates of ramen.<p>1. Sex. Sex is totally possible, and it&#x27;s not creepy at all. When you get on the road and you&#x27;re traveling you will run into people who are going the same route multiple times. In this way there&#x27;s a virtual community. This varies regionally of course; travel by train in Europe or in Alaska for the summer and it becomes pretty tight-knit. The women and men you meet there are not exactly going to turn their noses up at your van because that&#x27;s how they are traveling too. There&#x27;s a whole vagabond subculture in the USA that ranges from kids hopping trains to techies in vans like this guy to oldsters in RVs. And there&#x27;s nothing sexier than a guy who will break with convention and go do interesting things. FTR, my partner and I picked up a woman in the UK who then travelled with us and lived with us for a couple years in a poly triad. It only lasted three years, but I don&#x27;t think the definition of a successful relationship should only be ones that end in death!<p>2. Cost - you really can save a lot of money. It&#x27;s amazing that you can live around the world traveling full time for less than the cost of living in a major west coast city. If you&#x27;re doing a startup, that&#x27;s really nice - be in Berlin, then go to London, etc. We ran a three person startup (the triad above) going from England to Romania to Chile. While we didn&#x27;t live as cheaply as we should have or could have (it&#x27;s a skill) we didn&#x27;t live more expensively than we would have if we stayed in Seattle (and we never would have met the woman in the UK). When it costs less or doesn&#x27;t cost more but you have a better experience, isn&#x27;t that a much better value?<p>3. The major factor is movement. When you&#x27;re still - say at a campground or an AirBnB, or anchored at a dock - you save your movement energy, and thus cost, and you spend time working and enjoying. When you&#x27;re underway, sailing requires attention as does driving, taking trains and planes costs money, and boats and cars take gas. The ideal situation is one where you can stay places for a period of time (we used to stay in a country 90 days - the visa limit) to maximize your productivity on the road. This is a lifestyle, not a vacation from life. You earn money when you go, but you earn less money on travel days.<p>4. Settling in - another part of the cost of travel is the settling in time.
I need to have a good work chair and in each country we would spend the first week or so getting our spot set up to be productive on our startup.<p>5. The best thing about traveling is meeting the locals - especially outside the USA. This is the reason for the 90 day visa too. You can build real relationships. 4 countries in a year is much better than 9 countries in 4 days! And it&#x27;s cheaper per day, because you can be working during the day, and thus it&#x27;s sustainable.<p>6. There are many ways to do it. I like the boat the best - it was only 30 feet but it was center cockpit and huge. If I had the balls of a blue water sailor I never would have left and would be traveling around the world in it. But it takes a rare breed to cross an ocean in a 30 foot cruiser!<p>This van is very much like my experience in the truck camper. The truck camper cost me $5,500 all in - an old Toyota Pickup and a $3,500 SKAMPER. You have to crank it to raise the roof. I travelled all the way to Prudhoe Bay in that truck, spending a couple weeks north of the Arctic Circle.<p>You can never forget an experience like that!<p>7. Eventually I vowed to never stop. I decided this was a philosophy, and whatever the methodology, it doesn&#x27;t really matter. Am I still traveling full time? I&#x27;m on a lease, so many of you would say no, but I think I am. You could be too.<p>What&#x27;s the difference in lifestyle between crashing in a French student&#x27;s flat in Romania for 3 months and being on a lease in the USA for 6? In Romania 90 days is the max visa and maximizing productive time was ideal. A 6 month lease in the USA isn&#x27;t that different from the 6 months we lived in the UK (they have a longer visa for US residents).<p>I now think in terms of the GPWR - Gross Personal Weight Rating. That is the total weight of me and all my possessions. When I was on the boat it was around 13,000 pounds - most of it boat. For the truck it was about 7,000 pounds, most of it truck.<p>When we were backpacking it was all in the pack - about 60 pounds. Now I am staying in apartments but restrict myself to only what can fit in my car (so I can move across the country at a moment&#x27;s notice if I want).<p>But I&#x27;m still mobile. I don&#x27;t have a bed frame, for instance: I bought a bunch of Akro-Mils plastic crates. Turn them upside down and they make a really damn solid bed frame (best one I&#x27;ve ever had, actually). The mattress fits in the back of my car with the seats folded down. I have a mid sized SUV and camping is easy - just put the mattress in the car. Better than a tent (stays warmer). But when I need to move, I can turn the crates right side up and all my possessions go into them.<p>So, where should I live next? Once my lease is up, I&#x27;m going. (And knowing that also puts the kibosh on silly buying.)<p>Start thinking of every possession as weight added to your GPWR. Do you want to live out of a backpack? Pare down. Do you want to live in a van? You don&#x27;t have to be as careful but you should think about how many TVs you buy.
Victory for Net Neutrality in Europe
For your convenience, here&#x27;s just the text &quot;in the boxes&quot; (the Recitals), from <a href="http:&#x2F;&#x2F;berec.europa.eu&#x2F;eng&#x2F;document_register&#x2F;subject_matter&#x2F;berec&#x2F;download&#x2F;0&#x2F;6160-berec-guidelines-on-the-implementation-b_0.pdf" rel="nofollow">http:&#x2F;&#x2F;berec.europa.eu&#x2F;eng&#x2F;document_register&#x2F;subject_matter&#x2F;...</a><p>These are the first 9, the other 10 are here: <a href="https:&#x2F;&#x2F;gist.github.com&#x2F;daveloyall&#x2F;a1112bb70412d77bebc8090906769498" rel="nofollow">https:&#x2F;&#x2F;gist.github.com&#x2F;daveloyall&#x2F;a1112bb70412d77bebc809090...</a><p>Recital 1 =========<p>This Regulation aims to establish common rules to safeguard equal and non-discriminatory treatment of traffic in the provision of internet access services and related end-users’ rights. It aims to protect end-users and simultaneously to guarantee the continued functioning of the internet ecosystem as an engine of innovation.<p>Recital 2 =========<p>The measures provided for in this Regulation respect the principle of technological neutrality, that is to say they neither impose nor discriminate in favour of the use of a particular type of technology.<p>Recital 3 =========<p>The internet has developed over the past decades as an open platform for innovation with low access barriers for end-users, providers of content, applications and services and providers of internet access services. The existing regulatory framework aims to promote the ability of end-users to access and distribute information or run applications and services of their choice. However, a significant number of end-users are affected by traffic management practices which block or slow down specific applications or services. Those tendencies require common rules at the Union level to ensure the openness of the internet and to avoid fragmentation of the internal market resulting from measures adopted by individual Member States.<p>Recital 4 =========<p>An internet access service provides access to the internet, and in principle to all the end-points thereof, irrespective of the network technology and terminal equipment used by end-users. However, for reasons outside the control of providers of internet access services, certain end points of the internet may not always be accessible. Therefore, such providers should be deemed to have complied with their obligations related to the provision of an internet access service within the meaning of this Regulation when that service provides connectivity to virtually all end points of the internet. Providers of internet access services should therefore not restrict connectivity to any accessible end-points of the internet.<p>Recital 5 =========<p>When accessing the internet, end-users should be free to choose between various types of terminal equipment as defined in Commission Directive 2008&#x2F;63&#x2F;EC (1). Providers of internet access services should not impose restrictions on the use of terminal equipment connecting to the network in addition to those imposed by manufacturers or distributors of terminal equipment in accordance with Union law.<p>Recital 6 =========<p>End-users should have the right to access and distribute information and content, and to use and provide applications and services without discrimination, via their internet access service. The exercise of this right should be without prejudice to Union law, or national law that complies with Union law, regarding the lawfulness of content, applications or services. 
This Regulation does not seek to regulate the lawfulness of the content, applications or services, nor does it seek to regulate the procedures, requirements and safeguards related thereto. Those matters therefore remain subject to Union law, or national law that complies with Union law.<p>Recital 7 =========<p>In order to exercise their rights to access and distribute information and content and to use and provide applications and services of their choice, end-users should be free to agree with providers of internet access services on tariffs for specific data volumes and speeds of the internet access service. Such agreements, as well as any commercial practices of providers of internet access services, should not limit the exercise of those rights and thus circumvent provisions of this Regulation safeguarding open internet access. National regulatory and other competent authorities should be empowered to intervene against agreements or commercial practices which, by reason of their scale, lead to situations where end-users’ choice is materially reduced in practice. To this end, the assessment of agreements and commercial practices should, inter alia, take into account the respective market positions of those providers of internet access services, and of the providers of content, applications and services, that are involved. National regulatory and other competent authorities should be required, as part of their monitoring and enforcement function, to intervene when agreements or commercial practices would result in the undermining of the essence of the end-users’ rights.<p>Recital 8 =========<p>When providing internet access services, providers of those services should treat all traffic equally, without discrimination, restriction or interference, independently of its sender or receiver, content, application or service, or terminal equipment. According to general principles of Union law and settled case-law, comparable situations should not be treated differently and different situations should not be treated in the same way unless such treatment is objectively justified.<p>Recital 9 =========<p>The objective of reasonable traffic management is to contribute to an efficient use of network resources and to an optimisation of overall transmission quality responding to the objectively different technical quality of service requirements of specific categories of traffic, and thus of the content, applications and services transmitted. Reasonable traffic management measures applied by providers of internet access services should be transparent, non-discriminatory and proportionate, and should not be based on commercial considerations. The requirement for traffic management measures to be non-discriminatory does not preclude providers of internet access services from implementing, in order to optimise the overall transmission quality, traffic management measures which differentiate between objectively different categories of traffic. Any such differentiation should, in order to optimise overall quality and user experience, be permitted only on the basis of objectively different technical quality of service requirements (for example, in terms of latency, jitter, packet loss, and bandwidth) of the specific categories of traffic, and not on the basis of commercial considerations. Such differentiating measures should be proportionate in relation to the purpose of overall quality optimisation and should treat equivalent traffic equally. Such measures should not be maintained for longer than necessary.
I want to motivate you to keep trying to switch to Linux
TL;DR - people want to get things done, so support them in doing that while waving your freedom flag, instead of being stuck in the past and appearing primitive. Also, support FLOSS (free&#x2F;libre open source software) at least monetarily, if not with time and effort.<p>~~~~~<p>This article didn&#x27;t really motivate me to try to switch to Linux. In my reading, it was just a bit of fluff about looking at &#x2F;var&#x2F;log&#x2F;syslog and searching for &#x27;proper and relevant&#x27; error messages. That&#x27;s definitely not enough for a non-tech-savvy common person.<p>There was a time when I used to think that software would improve in stability and quality over time, but I&#x27;ve realized that things are so complex, so diverse and driven by so many people that we&#x27;ve actually gone far away from &quot;it just works&quot; on _every_ _single_ _platform_ (this is about common people, not tech savvy people who can edit config files, look at system logs, debug applications, fix bugs, build from source, etc.). It doesn&#x27;t matter if you like Windows more or OS X (macOS) more or Linux more - none of them are really great enough for common people to use without being annoyed or downright disrupted in accomplishing what they want to. It&#x27;s only an ever-changing ranking of who&#x27;s doing worse at any point in time.<p>Coming to Linux, I find the following deficiencies in the distributions and the applications that prevent people from adopting it on their computers more (this list is a generalization across most distributions, and I know some distributions may be vastly better than others):<p>1. Hardware drivers (video and network being important ones that people rely on) - this has improved tremendously in the last decade or so, but it still is kind of a coin toss when it comes to saying &quot;ok, I have this laptop and I&#x27;m going to install Linux on it&quot; and being confident that it would work.<p>2. Not including proprietary codecs and stuff that people actually want. Nobody wants to hunt down a codec pack or spend a lot of time just to watch a video they&#x27;ve downloaded. I&#x27;m not belittling the fight for free software and freedom, but we have to accept what people are tending toward even if we know they&#x27;re probably on a destructive or unproductive path.<p>3. The UI - this is a huge deal, really. Most Linux distributions and applications still use some really ugly fonts and ugly looking GUI elements. It&#x27;s as if we&#x27;re still stuck in 1995 admiring Windows &#x27;95 as the best thing that&#x27;s happened to GUIs. I see a similar issue with LibreOffice (which I use), where the UI is primitive looking and difficult to use (especially for those who come from using MS Office - standard menus or ribbon interface doesn&#x27;t matter). Most people around the world who now own computers are starting to have nicer screens (at least close to HD or full HD). Things start looking even uglier on those systems.<p>4. If you want 2016 to be &quot;the year of desktop Linux&quot; [1], you have to start by admitting that this cause was lost long ago and adapt to the better presentation, practices and behaviors seen in other OSes, however primitive, deficient or restrictive those may look to you from different perspectives.
Copying ideas, polish and UI paradigms is not a bad thing, because most people want to get their work done and not have to learn something new as a stepping stone to getting things done (recall the Windows 8.0 fiasco over the Start button not being there and people not knowing how to shut down their systems?). There has been a lot of progress here as well over the years on the Linux side, but the polish is definitely not there (in a comparative sense) or comes at a cost (not monetary) that end users don&#x27;t want to deal with.<p>5. The update coin toss - this is becoming quite common and a well-noticed problem on Windows and OS X as well, but you never know what an update will do to your system and if you&#x27;d even be able to boot it up (this could be related to drivers too, but why would a common user even care about who&#x27;s responsible for what and where the blame should lie if something no longer works?).<p>With all these complaints, what can we collectively do? Beyond corporate sponsorships, I believe tech people should donate more money and ideas to the FLOSS (free&#x2F;libre open source software) ecosystem. Many people who have been working on these systems for decades or years are doing it from an ideological standpoint, and in my opinion, getting more people and resources to improve things does need more money to start with. Think of it as a recurring contribution to humanity, because we all know that even if Linux doesn&#x27;t rule the desktop, it does rule mobile and servers - things that we all use in some form or another every day.<p>[1]: <a href="https:&#x2F;&#x2F;duckduckgo.com&#x2F;?q=the+year+of+desktop+linux" rel="nofollow">https:&#x2F;&#x2F;duckduckgo.com&#x2F;?q=the+year+of+desktop+linux</a>
An account of a serious medical emergency on a transoceanic flight
I think I&#x27;m able to give some perspective and tips here. I&#x27;m a commercial pilot flying long haul and I&#x27;ve had some medical incidents during my flights, including a recent suicidal lady cutting her wrists while arriving at JFK airport in NY, and a possible heart attack while in the middle of the Sahara. Also my wife is a doctor who has had to help on 3 flights already.<p>If you are a physician:<p>-The cabin crew MUST help you with all the things you require, that is:<p><pre><code> ·Providing food, liquids, blankets (for free of course). ·Providing the mandatory medical kit (which can only be opened by qualified persons, never by the crew on their own). ·Moving the passenger wherever you find appropriate (galley, the aisle, lying across several seats, etc...). ·Don&#x27;t accept any excuse regarding the medical kit; some pursers are willing to avoid the paperwork involved after opening it (this happened to my wife on an EasyJet flight; unfortunately I was in another row taking care of the kids and didn&#x27;t know about it till the end of the flight). It must be fully stocked when opened (usually they are closed with a lock); if it&#x27;s not, the company was breaking the regulations. The medical kit is a no-go item (it must be present and in perfect condition for a flight to begin). ·Request the cabin crew to keep other passengers away. People love a good show, and are capable of disgusting behaviour (like taking photos of a semi-nude patient to &quot;share&quot;, looking over the doctor&#x27;s shoulder, etc..) ·Most cabin crew are super professional and will help to the best of their capabilities, but you can always find an idiot. Don&#x27;t let them intimidate you. </code></pre> -The pilots are waiting for the instructions of the experts. From the first moment we know there is a medical emergency, we are planning for a diversion to the nearest airport, and usually we&#x27;ll listen to their opinion regarding the need for an immediate hospitalization of the passenger. Although the captain has the last word, no pilot I know is willing to risk ignoring the recommendations of a doctor and facing police charges for letting a passenger die by not following instructions.<p>-What I mean is, if it&#x27;s clear to you that it&#x27;s a heart attack for example, and the patient needs a hospital, tell the pilot ASAP. We are flying at 8 nautical miles per minute, and 10-20 minutes flying away from an airport can mean up to an hour more than necessary till you are on the ground. We take the decision based on the instructions of the doctors and nurses onboard.<p>-That said, be careful to ask what city the captain is willing to land at, and what kind of medical facilities it has. If you are flying over the sea or desert, just expect up to 3-4 hours till you are able to land in a city with a good enough hospital. I had a discussion with a captain because he wanted to land in Tamanrasset, a small city in the middle of the Algerian Sahara. We had a passenger with a possible heart attack, and he wanted to land there. I told him that we needed 45 minutes to land, and then to wait at 3am till we were able to disembark, an ambulance to arrive and the patient to be carried to the local hospital, which as you may imagine is less than stellar. The purser just confirmed my suspicions, as he had just had the exact same case. The patient took more than 3 hours to arrive at the Tamanrasset hospital, and there was nothing there to treat his heart attack. So a private flight was called from Italy to evacuate him.
It was much simpler and safer to wait till Malaga in Spain, just two and a half hours of flight away, with a medicalized ambulance waiting at the parking stand.<p>-The FAA list of the mandatory medical kit onboard: <a href="http:&#x2F;&#x2F;www.faa.gov&#x2F;documentLibrary&#x2F;media&#x2F;Advisory_Circular&#x2F;AC121-33B.pdf" rel="nofollow">http:&#x2F;&#x2F;www.faa.gov&#x2F;documentLibrary&#x2F;media&#x2F;Advisory_Circular&#x2F;A...</a><p>-You may also find that the passenger has no need of immediate hospitalization, but needs medical help once landed. The crew can call emergency teams (EMTs and police) to be ready once the doors open.<p>-Some companies have a remote medical service available by radio or satellite phone; they are there to help with the diagnosis and treatment if necessary. But they are not infallible, and they could recommend that you land at an airport that has an unsuitable hospital (it has happened). Right now I&#x27;m not aware of any international list of the medical facilities available close to big airports.<p>-Just a recommendation, IANAL, but if unfortunately a passenger dies in flight, I would not declare the death (we are talking strictly medical causes, no aggressions, killings, etc.); keep trying to resuscitate, and let the EMTs take care of the patient once you&#x27;ve landed and they come onboard. Depending on the country, a death declared onboard means a judicial investigation, police reports, etc. that will surely take all day once you land (or more).<p>-The most common medical emergencies onboard are faints, suffered by people with previous medical conditions. Also, people drink too much or take some kind of drugs to endure the fear of flying. Some kinds of digestive problems and heart attacks happen too, but they are less common than faints (based on my personal and my friends&#x27; anecdotes).
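A rough back-of-the-envelope sketch of that timing arithmetic, with the 8 NM per minute cruise figure as the only number taken from above and everything else assumed purely for illustration (winds, descent planning, and approach time are ignored):<p><pre><code>
# Rough illustration of why telling the pilot early matters.
# Only the 8 NM/minute cruise speed comes from the comment above;
# the doubling rule and the delays below are assumptions for illustration.

CRUISE_NM_PER_MIN = 8  # roughly 480 kt ground speed

def diversion_penalty(minutes_past_airport):
    """If the decision to divert comes this many minutes after overflying a
    suitable airport, the aircraft must fly that distance back again, so the
    time penalty is roughly twice the delay."""
    overflown_nm = minutes_past_airport * CRUISE_NM_PER_MIN
    extra_minutes = 2 * minutes_past_airport
    return overflown_nm, extra_minutes

for delay in (5, 10, 20):
    nm, extra = diversion_penalty(delay)
    print(delay, "min late:", nm, "NM overflown, about", extra, "extra minutes in the air")
</code></pre>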
Bootable NASA 'SPOC' Software on Your PC
Works on the bare metal, reading from the bare metal oxide.<p>Very quick &amp; easy.<p>Here&#x27;s a tutorial; most of this is ordinary 21st Century floppy handling, geared toward this SPoC floppy image-<p>to make the 3.5inch floppy-<p>-Make sure all necessary PC mainboard bios settings are adjusted to fully enable Floppy, Legacy, FDD, Read&#x2F;Write, USB FDD, etc.<p>optional quality checks- Quick format a 3.5inch floppy using any version of MS-DOS or Windows. Using your favorite disk editor, write F6 hex consecutively on every byte of the floppy over sectors 1 through 2879, leaving only sector 0 untouched in its formatted condition. Quick format the floppy again using any version of MS-DOS or Windows. Select a 3.5inch floppy which passes a thorough sector check fully without any defects. -using the WindowsME DOS version of SCANDISK.EXE while booted to the Windows98SE version of DOS is ideal for this diagnostic test. Other Win9x SCANDISK versions while booted to their Win9x DOS&#x27;s are OK too.<p>Using your favorite bit copying routine, or disk editor, copy his default image to the floppy bitwise (a scripted example of this bitwise copy is sketched at the end of this write-up).<p>to boot the floppy-<p>Any PC made within the last 3 decades or so should work, as long as it supports 3.5inch 1.44MB floppies correctly. The PC does not need a hard drive, and may not need more than 128K to 512K of memory.<p>On modern UEFI systems you will probably need to disable Secure Boot and might need to be in Legacy, Bios Compatibility mode. --you will likely revert these settings afterward in BIOS when you are completely finished using 20th Century SPoC<p>Make sure your PC bios has its setup boot order giving the appropriate floppy hardware (mainboard or USB) the highest priority. -Make sure all other necessary additional bios settings are adjusted to fully enable Floppy, Legacy, FDD, Read&#x2F;Write, USB FDD, etc. --you may or may not want to revert these settings afterward in BIOS when you are completely finished using 20th Century SPoC<p>As a performance check, there should be no problem booting to a Windows98 startup floppy. -a proper 64-bit mainboard handles this perfectly<p>Leave the SPoC floppy inserted in the drive.<p>Restart the PC.<p>It boots to the MS-DOS 3.31 on the SPoC floppy and automatically runs the batch file AUTOEXEC.BAT, which launches GRID.EXE \programs\spoc.run, then you eventually see the Space Shuttle logo<p>to operate SPoC-<p>after the floppy boots, progress stops at the screen &quot;GMT OF LAUNCH&quot; where you set the GMT time of your launch, his defaults are- YEAR 1989 DAY 352 HOUR 23 MINUTE 46 SECOND 0 (you can change data on this screen if you want to by backspacing &amp; typing) leave his defaults -you will have to accept this by typing ALT-RETURN where SPoC asks for CODE-RETURN<p>the next screen &quot;Set time in GMT&quot; is where your actual system time is displayed, and you set your actual system time on your mainboard hardware back to the 20th Century. -this GMT System time will need to be somewhat later than the above GMT Launch time, you will be using your system to plot the course of the Shuttle starting from your point in time after the launch had taken place- DAY 354 HOUR 15 MINUTE 41 YEAR 1989 -accept this by typing ALT-RETURN where SPoC asks for CODE-RETURN *if &quot;CAUTION: Vector more than 2 days old&quot; message appears, the system date is more than two days later than the most recent vector data (shown on the following screen). 
--After you are completely finished using 20th Century SPoC, you will have to correct your system time back to the 21st Century before your PC will be normal again.<p>the next screen &quot;M50 STATE VECTOR (KFT) WITH GMT TIME TAG&quot; has your &quot;recent&quot; Shuttle coordinates at a certain GMT time later than the launch, there is no year entry, it is assumed to be less than 2 days earlier than your newly-set 1989 system time. his defaults are- Day (GMT) 353 Hour (GMT) 0 Minute (GMT) 59 Second (GMT) 35 leave his defaults -accept this by typing ALT-RETURN where SPoC asks for CODE-RETURN &quot;Inclination&quot; &amp; &quot;Approx. orbit&quot; info appears -accept this by typing ALT-RETURN where SPoC asks for CODE-RETURN *if &quot;CAUTION: Vector more than 2 days old&quot; message appears, the vector date is more than two days earlier than your 1989 system time.<p>the next screen gives you the option to display the World Map, select &quot;DISPLAY WORLD MAP&quot; with the up&#x2F;down arrow keys, then ALT-RETURN where SPoC asks for CODE-RETURN - or - accept this by typing ALT-M where SPoC asks for the CODE-M shortcut<p>the next screen gives you the option to select the communication and observation features. select &quot;INVOKE DEFAULT COMMUNICATION SITES&quot; with the up&#x2F;down arrow keys, then ALT-RETURN where SPoC asks for CODE-RETURN - or - accept this by typing ALT-D where SPoC asks for the CODE-D shortcut<p>now you get the &quot;live&quot; map as the Shuttle orbits in &quot;real&quot; 20th Century time.<p>to stop the map and return to its menu- type ALT-ESC where SPoC would normally want CODE-ESC -you return to the previous menu<p>to restart the map or exit to DOS 3.31- select using the up&#x2F;down arrow keys, then ALT-RETURN where SPoC asks for CODE-RETURN - or - restart map by typing ALT-M or exit to DOS by typing ALT-ESC<p>in DOS while booted to the floppy, at the A&gt; prompt-<p>restart SPoC by typing- grid \programs\spoc.run - or - a:\grid \programs\spoc.run<p>to start only the GRID emulator itself, at the A&gt; prompt type- grid - or - a:\grid<p>when within GRID- -use ALT-Q followed by ALT-RETURN to QUIT -use ALT-ESC followed by ALT-RETURN to CANCEL<p>reboot to the floppy by leaving it inserted and simultaneously holding CTRL-ALT-DEL - or - shut down from DOS using the power button on the PC<p>--After you are finished using the SPoC floppy, you will have to revert your system time back to the correct 21st Century time in BIOS before you fully boot back to your regular modern operating system.<p>--On modern UEFI systems this is also the time to return the Legacy, Bios Compatibility mode and Secure Boot settings to your preferred options for modern operating system usage.<p>hope this helps
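For the bit-copying step above, here is a minimal sketch for a modern Linux machine. The image filename and device node in it are hypothetical placeholders, not anything from the write-up; check which node your USB floppy drive actually gets before running it, since it overwrites that device.<p><pre><code>
# Minimal sketch: raw, sector-by-sector copy of a 1.44MB floppy image to a
# USB floppy drive on Linux. "spoc.img" and "/dev/sdb" are placeholder names;
# double-check the device node first, because this overwrites it. Run as root.
import os, sys

IMAGE = "spoc.img"
DEVICE = "/dev/sdb"
SECTOR = 512
TOTAL_SECTORS = 2880          # 1.44 MB = 2880 sectors of 512 bytes

data = open(IMAGE, "rb").read()
if len(data) != SECTOR * TOTAL_SECTORS:
    sys.exit("unexpected image size: %d bytes" % len(data))

with open(DEVICE, "wb", buffering=0) as dev:
    for i in range(TOTAL_SECTORS):
        dev.write(data[i * SECTOR:(i + 1) * SECTOR])
    os.fsync(dev.fileno())

print("image written; read the device back and compare to verify")
</code></pre><p>The optional quality check described earlier (filling sectors 1 through 2879 with F6 hex) can be done the same way, by writing bytes of value 0xF6 to those sector offsets instead of the image data.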
Employee #1: Amazon
His review of &quot;The Everything Store: Jeff Bezos and the Age of Amazon&quot;<p><a href="https:&#x2F;&#x2F;www.amazon.com&#x2F;review&#x2F;R3J863C5ZP53BA" rel="nofollow">https:&#x2F;&#x2F;www.amazon.com&#x2F;review&#x2F;R3J863C5ZP53BA</a><p>&quot;I wasn&#x27;t really planning on reviewing this book, because I was mentioned in it several times and it didn&#x27;t seem appropriate. But several other people who were also mentioned in the book have already posted reviews, and in fact, MacKenzie Bezos, in her well known 1-star review, suggested that other &quot;characters&quot; might &quot;step out of books&quot; and &quot;speak for themselves&quot;.<p>I was at Amazon for the first 5 years of its existence, so I also have firsthand experience of those times at the company, and I have been a fairly close observer since I left. By and large I found Mr. Stone&#x27;s treatment of that which I know firsthand to be accurate -- at least as accurate as it is possible to be at this great a remove, and with no contemporaneous documentation of the early chaotic days or access to certain of the principals. Relying on people&#x27;s memories of nearly twenty-year-old events is of necessity somewhat perilous. Of course there are a few minor errors here and there, but I don&#x27;t have firsthand knowledge of important mistakes much less anything that appears to be intentionally misleading. But there are a few minor glitches. In my case, I can testify that I did not, in fact, have a bushy beard at age 17 when I worked at the Whole Earth Truck Store &amp; Catalog in Menlo Park. It was a publisher and seller of books and other things, not a lending library. It was in a storefront and was no longer a mobile service operating out of a truck by the time I worked there (p. 32). But I do not think this is a reason to disregard the entire book; it&#x27;s just some not terribly relevant detail the author got a bit wrong in a way that doesn&#x27;t change the story materially. MacKenzie listed one error, which didn&#x27;t seem especially awful or material to me, and then referred only vaguely to &quot;way too many inaccuracies&quot;. Without a more explicit list of mistakes it is hard to know what to make of that. Breaking news: a new 372 page book has some errors!<p>Since Mr. Stone did not have access to Jeff Bezos for this book, but had to rely on previous interviews and the accounts of others, it would be surprising if there weren&#x27;t a few mistakes regarding his thought processes. As part of my agreement to be interviewed for this book, I was allowed to read a draft of the chapter which covered the time I was there, and I offered a number of corrections, some of which Mr. Stone was able to verify and incorporate. To the extent I am quoted, my quotes are, while not complete, fair and in context. I don&#x27;t love or agree with everything that Mr. Stone wrote about me -- especially his broader conclusions regarding the circumstances of my departure from the company -- but I do think it was fair and reasonable. I am aware of at least one other interviewee who was also given a chance to check over the chapter in which his story was discussed. I obviously can&#x27;t know this, but I suspect that if Mr. Stone had been granted access to Jeff Bezos, that he would have extended a similar courtesy. I have a pretty high degree of confidence that Mr. 
Stone made a significant effort, and did what was in his power, to make the book accurate.<p>The irony is, of course, that by reviewing the book as MacKenzie Bezos did, she has brought an immense amount more attention to it -- there are dozens of articles referring to her review via Google News this morning -- and its sales rank has shot up considerably. The book is not a fawning hagiography, but it is also hardly a completely negative account either. It describes not only Amazon&#x27;s ultra-hardball business practices, but the better aspects of their services and products as well. To the extent of my knowledge it is a pretty realistic account, though necessarily incomplete. Of course Mr. Stone has his own point of view, and of course he does what nearly all biographers do, which is to impute thoughts and emotions to the people he writes about. It would be mighty dull reading without that, but I think readers are generally smart enough to understand that when they read biographies, especially unauthorized biographies, the author has to recreate some kind of persona to make the subject appear life-like. That doesn&#x27;t make it fiction. This was written as a business book for a popular audience anyway, not as an academic treatise, so expecting every &quot;Bezos thought...&quot; to be footnoted, or couched in hypothetical language, is not realistic.<p>Especially in comparison to the sad collection of awful books that have been written on this subject, this one is much more detailed, more interesting, and a lot more deeply reported. Sure, there is plenty more that could be written about, and maybe someday somebody will. If and when that happens, I can only hope it is also &quot;unauthorized&quot; and not sanitized by a corporate PR department, and that some real investigative journalism is done, like Mr. Stone did here.&quot;
What If Evolution Bred Reality Out of Us?
I&#x27;ve only read the linked article, not the more in-depth source mentioned there. But based on just that information, I&#x27;m deeply unimpressed by Hoffman&#x27;s work. From where I stand, his idea is both poorly informed (as in, it seems like he&#x27;s not made a credible effort to examine his premises) and, ironically, proven wrong by reality. That&#x27;s a pretty contentious statement for a non-expert to make about a presumed expert, so I&#x27;ll try to explain myself.<p>First, like some other commenters, I&#x27;m happy to concede that at a trivial level H. is quite correct. The world around us as perceived by the unaided human is mapped inside his brain to a vague, sometimes exaggerated, sometimes obscured and very often distorted image of reality. There are entire books on optical illusions; the trade of stage magic and various kinds of crime rely on systematic human misperceptions.<p>There&#x27;s an obvious, perfectly good reason for this: the human system of vision is simply not a pixel-perfect 3D camera connected to petabytes of fast digital storage, and the same applies to our other senses. Given a perfect recording of the world at least in our vicinities, abundant energy and sufficient time, we could come up with highly effective survival strategies. But the real world doesn&#x27;t afford us these luxuries, so evolution crafted us into organisms tuned for a reasonable approximation to an optimal compromise of this ideal. Thus, our mental model of the world is a crude abstraction, with survival-relevant information emphasized and other details brushed over. This is not a survival-optimized transformation of reality but a constraint-enforced one. A highly sophisticated system built on the shoestring budget that nature affords us. It&#x27;s proven to be superior at survival to many competing models but I don&#x27;t agree it works better for misrepresenting reality. Rather, it works _at all_ by necessarily sacrificing detail and accuracy in representing reality.<p>But none of this supports H.&#x27;s contention that we are blithely unaware of reality, or unable to apprehend it. There is a reality out there, and the depth and accuracy of our model is a function of how much time and energy we&#x27;re willing to expend on mapping it. A given beach, sharply defined, has a finite and very countable set of grains of sand. If we really, really cared to know, we could build machines to count them for us. Similarly, we can or could know the shape of every coastline of every continent. Some day, humanity may have high-quality reality mappings of every planet within X light-years of our solar system. We can in principle understand the function of every gene in our genomes. We don&#x27;t have to talk about how we perceive colors, because spectroscopes can tell us the exact, reproducible wavelength of every beam of light emitted by a given object. We could exchange this information with aliens having completely different bodies and brains, should we discover them, and if their science is as advanced as ours and we&#x27;re careful to define our terms and measurements on observable nature, we&#x27;d have a common understanding of that reality.<p>But how do we know that our reality is real? How do we ascertain truth? I say we can base a pretty solid epistemology on a confluence of observed phenomena. 
If we encounter an obstacle we can&#x27;t see through, if it&#x27;s grey in color, weighs about 6 tons, stands on 4 legs, has a long nose, occasionally moves around and eats bananas by the bushel, then we can safely assume we&#x27;ve found an elephant. If it&#x27;s a chunk of some yellow shiny solid that displaces 18 grams of water per cc, and samples drilled from arbitrary locations in it uniformly have atomic weights of X (?), melting points of Y degrees, fail to react with sulphuric acid and show a chromatographic signature consistent with that of gold, then by golly, it&#x27;s a chunk of gold!<p>As humanity, not as individual naked humans, we&#x27;ve amassed a large and ever growing body of knowledge about the world around us, and (fortunately for our sanity and our ability to make sense of reality) the properties of objects and phenomena in our environment are consistent and convergent. There are no 1 gram elephants, there is no sodium that doesn&#x27;t react violently with water, there are no snowflakes whose basic structure isn&#x27;t hexagonal. We know that our image of reality is good because we&#x27;re able to extrapolate from what we know and observe to what we haven&#x27;t observed yet, to make predictions about what we&#x27;ll observe and have those predictions prove mostly true.<p>The author and his (perhaps coincidental and unintended) idol Plantinga fail to acknowledge humanity&#x27;s ability to create models of reality of whose accuracy (within limits) we can be confident because they&#x27;re part of a huge network of mutually supporting sub-models with excellent predictive power. And, more importantly, that our ability to create such mappings is a human ability that we have evolved to have. A goodly part of this evolution is cultural rather than biological, and a goodly part of our senses are mechanical and external rather than built into our wetware, but our evolution and that of our apparatus is quite natural insofar as everything that we humans, natural beings in a natural world, are natural too and a part of nature.<p>Hoffman&#x27;s conjecture is completely, utterly wrong: Evolution has in fact &quot;bred&quot; in humans the ability to discover reality, and this ability has incidentally given us dominion, at least in the short term and for whatever that&#x27;s worth, over all other species on the planet, including our own ancestors and close cousins. Our ability to apprehend reality has made us so fit that, barring various possible disasters, we could survive the death of the Sun and Earth.<p>If Hoffman wants to support his claim that a creature who views too much and too little water as similar instances of &quot;bad amounts of water&quot; would display a higher degree of evolutionary fitness than us, I feel he has his work cut out for him.
The Absolute Insanity of Not Buying a Home When You’re Young
Summing this up: be careful and do your research.<p>This is a sore topic for me. I bought a home in 2013 at 25. I did it because the rent for the attached duplex was 1600, and my mortgage was 900 at the time. Now my mortgage is 1300 and their rent is 900. I was constantly told I could sell or rent it within five years. So I bit.<p>Five months after I bought my home a medical condition came out of remission. I lost my ability to drive, and the area was extremely difficult to live in without a car. Jobs were limited without a car, etc. I wanted to move, so since 2013 I&#x27;ve had it on the market for rent or sale.<p>&quot;Nonsense! If you want to travel and be free, then rent your house out and have someone else pay the mortgage while you’re away.&quot;<p>Renting it out has got to be the worst experience of my life. I had my house listed for two years, priced so that I was losing 300 a month with property managers. I moved 5 hours away by transit (a 20 minute drive) and was living off couches, so I could be within 20 minutes of work. I owned a home and lived out of a backpack for 1.5 years. It was a massive time sink showing the property. I blew so many weekends traveling for one or two showings.<p>A majority of the people said I was priced too high. I was just 100 above market in the area, and taking a loss each month. I finally placed tenants last December, and evicted them in April.<p>The tenants paid a single month of rent. They destroyed all my appliances, windows, floors, etc. In short, they rendered me completely bankrupt, flushed my savings, and shackled me with debt. I did reference checks, a budget review, background checks, etc. But you can&#x27;t account for the fact that they were laid off several weeks after moving in, and people react badly to that situation. They fell into drugs, using and dealing. I can&#x27;t even express the stress of receiving calls from the state police about my property while I was at work. I practically had panic attacks when my property managers called me, wondering what was broken now, and seeing money just fall away from my savings. December of last year I had 35k in the bank, now I&#x27;m 21k in debt. Having worked for six years, it should be more. But there were a number of costs each time I tried to sell the home, sprucing and cleaning it up.<p>It doesn&#x27;t end: the other day I finally got the water company to send me a bill, as they hadn&#x27;t sent one since December of 2015. The tenants had racked up an $800 water bill in one quarter, all on me now.<p>This is just one case. I have a friend who owns and rents 8 properties in the same city. All of his tenants have been there for pretty much their whole lives. They pay on time, no damages, some light maintenance, etc. All in, it&#x27;s a positive income with just the effort of mowing the lawn and occasional repairs. He bought his homes at foreclosure and repaired them. He was in his 50s and had been a contractor for a majority of his life.<p>This brings up another anecdote. In my state you have to be licensed to do pretty much any house repairs. Electrical, for example, last I reviewed, has a clause where if you&#x27;re not a licensed electrician doing repairs you&#x27;re liable for your entire mortgage up front. The biggest benefit of owning a home is buying cheap and repairing. But with all the regulation, that&#x27;s pretty much out the door. How feasible is it now to buy land and build your own house, unless it&#x27;s also your career? You can redo the electrical and leave the walls open. 
Then pay to have it inspected by an electrician. Then a contractor to approve the drywall patch. But that&#x27;s close to just having the electrician do it all.<p>I have another friend who bought a home 15 miles from me. He is unable to rent it, and unable to sell as it needs to be modernized but isn&#x27;t worth the money. He just accepts that 1800 a month is gone into the ether. He changed jobs and now lives about 1.5 hours from the house, so it&#x27;s empty. If it&#x27;s marked as vacant there are some tax benefits, but insurance increases.<p>The only way I can even see renting out as viable is buying a home rather under market, then repairing it and renting it out. Usually you&#x27;re looking at a 10% cut each month for property managers. This usually entails buying a slight fixer-upper, which, back to regulations, adds up quickly, because you&#x27;ll be lucky if you can do much of anything in your own home yourself. You can try an FHA 203(k) renovation loan, but those usually use pricier contractors and take a while.<p>I&#x27;m not saying buying is an all-out bad idea. Just approach it very, very, very carefully. Renting it out is not a guarantee, selling is not a guarantee. You&#x27;re rolling the dice. I am now looking at bankruptcy or foreclosure. I have pretty much burned my bridges working in Philadelphia, turning down so many job offers because the salary didn&#x27;t allow for an apartment + mortgage. I know this is long-winded, and sorry for the diatribe, but I want to provide insight from a negative vantage point, and show what the worst case scenario looks like.
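To put some rough numbers on the rental math above, here is a small sketch. Every figure in it is hypothetical except the 10% property-management cut mentioned in the comment; treat it purely as an illustration of how quickly a thin rent-over-mortgage margin disappears.<p><pre><code>
# Rough rental cash-flow sketch. Every number is hypothetical except the
# 10% property-management cut mentioned above.
def monthly_cash_flow(rent, mortgage, mgmt_cut=0.10, taxes_insurance=250,
                      maintenance=100, vacancy_rate=0.08):
    """Net monthly cash flow for a single rental, before income tax."""
    effective_rent = rent * (1 - vacancy_rate)   # expect some empty months
    management_fee = rent * mgmt_cut             # managers take a cut of gross rent
    expenses = mortgage + management_fee + taxes_insurance + maintenance
    return effective_rent - expenses

# Example: renting at 1500 against a 1300 mortgage is already under water
# once management, vacancy, taxes, and maintenance are counted.
print(round(monthly_cash_flow(rent=1500, mortgage=1300), 2))   # -> -420.0
</code></pre>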
It’s Tough Being Over 40 in Silicon Valley
I have a tough time with these stories. Although I don’t dispute that this bias exists, I feel like the subject matter is also damaging.<p>First of all, these articles are sensational and divisive. Folks over (or nearing) middle age are sensitive. They worry about being outmoded and removed&#x2F;downsized. My hunch is that they read these articles out of fear. For younger folks, I suspect it’s reassuring to know that you have something to offer that older folks might not. So, for the publisher, these produce clicks&#x2F;views.<p>However, when you get past the personal examples of exclusion, and some of the reductive arguments (e.g. “Younger people are just smarter.”) little of this is as simple as it first seems.<p>Fact is, for a long time, older workers were less technically competent than their younger counterparts. That said, for a generation that grew up with technology, this isn’t so much the case any longer. This became painfully obvious to me, while sitting with an Apple “Genius” one day. He was very hip; however, I needed to explain to him how to use the Find function in his browser. (Seriously.)<p>Young and older people both have something to contribute. Young ones often bring new ideas and perspectives because they’ve grown up differently. They lend enthusiasm and energy that older staff sometimes don’t. Frankly, older ones often don’t want to work marathon hours (this isn’t always the case, but tends to be). That said, older workers typically bring more knowledge and experience to the table.<p>I suspect that part of the bias in favor of younger workers comes from younger business owners (common in startups). I ran into this when we started our design studio. I was 26. At the time, it was scary to hire a 50-year-old to come in, because I didn’t feel comfortable directing someone that much older than me (I probably wouldn’t have admitted this at the time).<p>Additionally, those people typically wanted to earn more—and we didn’t think we could afford them. So, we hired younger folks who worked at a lower hourly rate, but often needed an inordinate amount of training and support.<p>Were I to start that company all over again, I’d do the opposite. I’d hire more skilled people and pay more than market rate. I’d then gauge their performance, and retain&#x2F;dismiss solely based on that. In my experience, a skilled person at a higher rate of pay was always more valuable&#x2F;profitable for our company than a less-skilled worker at a lower rate of pay.<p>My point is that the companies which use age as a barometer of value are approaching HR in a flawed way. The contribution of a staff member is more important than the date on his&#x2F;her birth certificate. Meanwhile, the garment choices and pop-culture references one uses shouldn’t have any bearing on the value of the individual (unless we’re talking about a company who traffics in such matter).<p>That said, I think the real problem is the employee mindset. So long as your livelihood depends on one single organization, you put yourself at risk.<p>This is doubly-so for those who remain loyal to a company for a decade or more. HR departments are notoriously short-sighted when it comes to assessing skills. They like seeing candidates who fulfill the specific requirements of a job. 
Meanwhile, they often don’t understand which skills are transferrable (because they typically don’t actually understand the work&#x2F;technology).<p>So, if you’ve worked in print publishing for the past 20 years, an HR person might not hire you to work in a digital content shop. However, web technologies aren’t that hard to master. Knowing a good story, understanding what attracts an audience, and having strong people skills are all much more valuable (and difficult to learn skills). But, still, those hiring often won’t see this—which puts such a person at a disadvantage.<p>There are many reasons why running your own startup, studio, consultancy are difficult. That said, all of these pursuits force you to be nimble. Most of them also allow you to distribute your income sources among multiple groups—which builds resilience.<p>And, after you’ve done any of these things, you tend to be more employable—because you have a stronger sense of what companies need. (Additionally, those who’ve “done it on their own” often exhibit characteristics that are attractive to management—especially those whose current staff is comprised primarily of box fillers.)<p>My point here (and I know I’ve carried on) is that the age discussion is a red herring. The real matter is how one remains relevant&#x2F;valuable—regardless of age. Continual learning is a part of that. Another is one’s ability to adapt to less familiar roles (e.g., planning, sales, management, guidance). More importantly, though, no one should treat their employer as the gatekeeper to their future.<p>We’re all free agents. Some of us are mostly independent. Others play for teams. Those who play for teams should always know—and build—their value, so they don’t end up marooned.
Why I finally ditched Jira
Sean from JIRA here. We are having our ShipIt hackathon today so lots of good stuff is going on in the office and there are some cool enhancements to JIRA being demo&#x27;d right now. I stepped out for a bit to address some of the comments in Ted&#x27;s blog.<p>New tools pop up all the time and I understand the excitement that comes with finding a new one. Often these discoveries come with headlines like this that get the attention of a large audience (or target market) and then they deliver some promotion of the next big thing. That&#x27;s fine, it is communities like this that propelled JIRA to where it is today and continue to power JIRA every day. Because the community of users powers JIRA, I wanted to make sure we shared some of the detail that is relevant to Ted&#x27;s post.<p>(apologies for the size of this but my hope is that it gives insight into some of the aspects of JIRA that you may not already use or know about)<p>Speed: We’ve got some major projects that will be coming online soon that will fundamentally accelerate the platform. This isn’t a small tweak, this is a core architectural improvement that will make a big difference across the board.<p>UI: Have you tried collapsible columns? This will let you minimize columns that you don’t care about so the columns you do care about are bigger. This allows you to see more of the ticket. Managing your backlog as the first column in a Kanban board is ok when there are only a few issues. But, as your backlog grows it’s hard to see many issues on the screen at one time. We’re going to let you split the areas of concern into two different screens with each one focused on the tasks at hand. The backlog is for backlog management and replenishment; the Kanban board for the engineering team to select and move the tasks through the workflow. This cuts down on noise and makes better use of your screen real estate. We are looking at a redesign for the issue view but more user research needs to be done.<p>Search: That’s good feedback…and something that we hear from customers who have larger and larger issue counts. I’d like to break your comment into two pieces, the speed piece and the effectiveness piece.<p>Regarding effective searches: We want to make sure everything in JIRA is searchable and JQL is super flexible. But, we also make sure it is easy to filter down to the specific information you need. In this instance, I’d advise doing the search the way that you have, and then using the status filter to separate out open issues (a small scripted JQL sketch appears just after this reply).<p>In basic search, you can quickly toggle things on and off, for instance, issues that are resolved. This should give you the sorting functionality that you are looking for. Regarding speed: we are working hard on performance. You will see big improvements here as we keep focusing on performance over the next 6 months.<p>Organization: Epics are currently displayed as cards in Kanban and this works well for certain use-cases. However, you are correct that it makes it harder to identify what issues are associated with the epic and if used as a grouping mechanism, displaying the card on the board isn’t as useful. 
That’s why we will provide an option in the future to display Epics on the board, or use them as a way to group stories the way we already do it for Scrum.<p>To answer your broader question, here are a few more ways that help you organise your work in JIRA:<p>Kanplan: A great way to keep this work organized in JIRA Software is through the plan mode (aka backlog) which can be found when using scrum or the Kanplan feature in JSW Cloud. Trying to run a backlog and team task board together is distracting. Managing your backlog as the first column in a Kanban works great when there are only a few issues. But as your backlog grows it’s hard to see many issues on the screen at one time, the cards are large and the ones you care about are confined to ¼ of the page width or less, there’s so much scrolling. Splitting backlogs and tasks into two different screens with Kanplan helps: the backlog is for backlog management and replenishment, and the Kanban board is for the team to select and move the tasks through the workflow. You can tab between Versions and Epics to see the progress of an Epic (e.g. number of issues, completed, unestimated, estimated, and even linked pages). Also, from this view you can mark the Epic as done and edit details. (<a href="http:&#x2F;&#x2F;blogs.atlassian.com&#x2F;2016&#x2F;03&#x2F;kanban-backlog-jira-software&#x2F;" rel="nofollow">http:&#x2F;&#x2F;blogs.atlassian.com&#x2F;2016&#x2F;03&#x2F;kanban-backlog-jira-softw...</a>)<p>Portfolio for JIRA: If you are interested in cross-team and cross-project visibility then a tool like Portfolio for JIRA would be a better solution. In about 2 minutes Portfolio automatically pulls together a view across all projects to show you the what-ifs for everything your teams are working on as you do resource planning in real time. With Portfolio for JIRA you can combine the work from multiple agile teams and roll them up into larger Initiatives. Think of Initiatives as higher-level business priorities or big projects potentially spanning multiple teams. When you look at Portfolio for JIRA you can see a visible roadmap, which can be viewed by Initiatives, Epics, and issues. And, if you want more levels, Portfolio for JIRA provides an unlimited hierarchy, so you can create levels above Initiatives and Epics to help bring organization to how your team works. Product-Level Planning: I’m not sure I fully understand what you are looking to do here. Happy to help if you want to explain more. (<a href="https:&#x2F;&#x2F;www.atlassian.com&#x2F;software&#x2F;jira&#x2F;portfolio" rel="nofollow">https:&#x2F;&#x2F;www.atlassian.com&#x2F;software&#x2F;jira&#x2F;portfolio</a>)<p>Config: JIRA Software was designed to be flexible and customizable and admins have set JIRA up to fit how their organization works. The issue here could be an administrative matter – schemes should be for global or project admins to change and it sounds like in this instance that too many people have access to these permissions. As teams grow it is important for them to dedicate a little time to thinking about how they want their own team to work. We could take the flexibility out but dev teams really appreciate being able to customize the way they work. In order to be a true agile tool, the setup needs to be flexible to meet the needs of many different teams. 
As a result, it is important to use permissions properly – this might mean that there is a council of stakeholders who go over customization asks, or there is an admin who is constantly working to improve and meet team needs.<p>Support: Based on the info we have from you the solution seems like it could be simple here: create a workflow transition from duplicate to resolved. In the current state, it looks like the workflow doesn’t have that. If that doesn’t do it, let’s work together with our support team to figure this out.<p>UX: Hell yes. We agree, killing UX friction is always important. We’ve shipped a few things recently that should help if you haven’t seen them.<p>Editable Fields: We’ve added more editable fields to tickets. Editing or updating details for an issue was not possible for most fields in the Agile detail view. Most users click through to the view issue page to edit an issue and then return to their board. This is slow, inefficient (several clicks and 2 full page loads) and context is lost during this roundtrip. (MEH!) Now you can edit right in the issue so that the user can stay in context and make quick, efficient edits. This saves time and eliminates loss of context. (Plus it means fewer tabs open in your browser.)<p>Collapsible Columns: Make your screen real estate easier to manage.<p>Redesigning Agile Cards: We’ve redesigned the Agile cards so that in situations when column widths are small the cards can better communicate their content. This might happen when: screen width is small, many columns exist, or the sidebar and&#x2F;or detail view is open.<p>To help fix that we worked on the following with the redesign:<p>-Improving the visual styling with a focus on ‘glanceable’ information -Creating a density option -Updated selection, hover &amp; flag states -‘Days in column’ indicator -Improve stacking at small widths -Introduce sub-tasks attached to their parent in Kanban -Explore plan mode sub-tasks<p>-Sprint Permissions- We’ve improved sprint permissions. These can now be assigned to individuals, groups and roles, and are decoupled and independent from the Administer Projects permission. This enables simpler administration for group or role management, e.g. allowing ScrumMasters (often contractors) to work with teams successfully, with minimized risk to the organization from unauthorized changes in the project. Repo Integration: On top of JIRA UX improvements, we’ve done a lot to help take UX friction out of the interaction between the repo and issue tracker.<p>-Create Bitbucket branches from within JIRA Software- JIRA Software will automatically populate information for your new branch in Bitbucket and even suggest a branch name based off of the issue key. <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=K78Nk9kFdb0" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=K78Nk9kFdb0</a><p>-Transition issues in JIRA without leaving Bitbucket- Tickets in JIRA automatically transition when the dev merges, commits etc. This means the devs can stay in code but the project manager can see the activity, and if there is a need to go back and look at the issue and the code, they are tied together automatically. <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=F_IIp4uenMw" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=F_IIp4uenMw</a><p>-Give your entire team end-to-end traceability- Track the health and status of your next release from day one of development in JIRA Software’s Release Hub. 
Release Hub talks to Bitbucket to ensure that done code is really done and there are no inconsistencies or launch risks prior to launch day. <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=SRxklyR6fGw" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=SRxklyR6fGw</a><p>There is always more we want to do and we take feedback seriously. Hopefully, this helps fill in some gaps and address some of the questions raised in this thread.<p>Cheers, Sean (Disclaimer: I work on JIRA)
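For readers following the search discussion above, here is a small sketch of the kind of JQL filter being described, run against the standard JIRA REST search endpoint from a script. The site URL, project key, and credentials are placeholders, not anything from this thread.<p><pre><code>
# Sketch: run a JQL query against JIRA's REST search API.
# Site URL, project key, e-mail, and API token below are placeholders.
import requests

JIRA_URL = "https://yourcompany.atlassian.net"
JQL = "project = MYPROJ AND resolution = Unresolved ORDER BY updated DESC"

resp = requests.get(
    JIRA_URL + "/rest/api/2/search",
    params={"jql": JQL, "maxResults": 50, "fields": "key,summary,status"},
    auth=("me@example.com", "api-token"),   # basic auth with an API token
)
resp.raise_for_status()
for issue in resp.json()["issues"]:
    fields = issue["fields"]
    print(issue["key"], fields["status"]["name"], "-", fields["summary"])
</code></pre>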
Ask HN: What good books have you read lately?
From a recent HN comment, a list, some authors, and some guidelines, largely based on books which radically changed my thinking.<p>Currently: Vaclav Smil, <i>Energy in World History</i> (1994), and Manfred Weissenbacher, <i>Sources of Power</i> (2009). Both detail the role and impact of energy through world history. The latter draws strongly on the first; both are exceedingly well documented. TL;DR: coal changed much, oil changed everything.<p><a href="http:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;energy-in-world-history&#x2F;oclc&#x2F;30398523&amp;referer=brief_results" rel="nofollow">http:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;energy-in-world-history&#x2F;oclc&#x2F;3...</a><p><a href="http:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;before-oil-the-ages-of-foraging-agriculture-and-coal&#x2F;oclc&#x2F;837625798&amp;referer=brief_results" rel="nofollow">http:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;before-oil-the-ages-of-foragin...</a><p><a href="http:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;oil-age-and-beyond&#x2F;oclc&#x2F;837625970&amp;referer=brief_results" rel="nofollow">http:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;oil-age-and-beyond&#x2F;oclc&#x2F;837625...</a><p>James Burke&#x27;s books <i>Connections</i> and <i>The Day the Universe Changed</i>, and their accompanying television series, were a profound introduction to the history of technology, science, ideas, and philosophy. Though 30+ years old, they remain highly current and relevant.<p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;connections&#x2F;oclc&#x2F;4494136" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;connections&#x2F;oclc&#x2F;4494136</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;day-the-universe-changed&#x2F;oclc&#x2F;12049817" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;day-the-universe-changed&#x2F;oclc...</a><p>Jeremy Campbell&#x27;s <i>Grammatical Man</i> (1984) introduced the concepts of information theory and their deep, deep, deep interconnections to a tremendous number of interconnected systems, many not explored within his book. Darwin&#x27;s <i>The Origin of Species</i>, James Gleick&#x27;s <i>Chaos</i>, and many of the works of Santa Fe Institute members, including John H. Holland, J. Doyne Farmer, Geoffrey West, W. Brian Arthur, David Krakauer, and Sander van der Leeuw, continue these themes.<p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;grammatical-man-information-entropy-language-and-life&#x2F;oclc&#x2F;8306673" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;grammatical-man-information-e...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;chaos-making-a-new-science&#x2F;oclc&#x2F;15366709" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;chaos-making-a-new-science&#x2F;oc...</a><p>William Ophuls&#x27; <i>Ecology and the Politics of Scarcity</i> (1977) is perhaps the best, most comprehensive, shortest, and most readable exposition of the fact, reality, dynamics, and interactions of limits on the present phase of fossil-fuel-fed economic growth I&#x27;ve found. This is a book I recommend not only for the message, but the author&#x27;s clarity of thought and exposition, his meticulous research, exquisite bibliographical notes, and, given the nearly 30 years elapsed, the testability of numerous of his predictions, some failed, yes, others uncannily accurate. Rather more the latter. In a similar vein, William R. 
Catton&#x27;s <i>Overshoot</i> looks at the ecological dynamics in more depth, with much wisdom; the writings of Richard Heinberg cover the ground of limits fairly accessibly and more recently. Vaclav Smil in numerous books addresses technical factors of the profound nature of the past 250 years, and implications for the future. Meadows et al., in <i>Limits to Growth</i>, set off much of the post-1970 discussion (though they&#x27;re hardly the first to raise the question -- it dates to Seneca the Elder).<p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;ecology-and-the-politics-of-scarcity-prologue-to-a-political-theory-of-the-steady-state&#x2F;oclc&#x2F;2524932" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;ecology-and-the-politics-of-s...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;overshoot-the-ecological-basis-of-revolutionary-change&#x2F;oclc&#x2F;6195764" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;overshoot-the-ecological-basi...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;search?q=au:heinberg,+richard&amp;amp;qt=owc_search" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;search?q=au:heinberg,+richard&amp;a...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;search?q=au:smil,+vaclav&amp;amp;qt=results_page" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;search?q=au:smil,+vaclav&amp;amp;qt...</a><p>Though hardly pessimistic, Daniel Yergin&#x27;s book <i>The Prize</i> (and TV series) impressed upon me more than any other just <i>how much</i> petroleum specifically changed and transformed the modern world. Though intended by the author largely as laudatory of and championing the oil industry, my read of it was exceptionally cautionary. The impacts on business, everyday life, politics, wars, industry, and transport, and the rate at which they occurred, are simply staggering. You can continue this exploration in Vaclav Smil&#x27;s <i>Energy in World History</i> (1994) (I&#x27;ve recommended Smil independently elsewhere), and a rare but profound two-volume set I&#x27;m currently reading, Manfred Weissenbacher&#x27;s <i>Sources of Power: How energy forges human history</i> (2009). The sheer physicality of this book speaks to the message -- it&#x27;s divided into five parts: 1) Foraging Age (6 pages), 2) Agricultural Age (156 pp), 3) Coal Age (160 pp), 4) Oil Age (296 pp), and 5) Beyond the Oil Age (142 pp). That is, the ~2 million years of pre-agricultural existence are little more than a footnote, the 8,000 years of agriculture roughly equal to the 150 years of coal, and the 100 years of petroleum use roughly twice either. The oil and post-oil ages comprise their own volume. 
Yergin followed up with <i>The Quest</i>, continuing the search for oil, though I&#x27;ve been less impressed by it.<p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;prize-the-epic-quest-for-oil-money-and-power&#x2F;oclc&#x2F;22381448" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;prize-the-epic-quest-for-oil-...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;energy-in-world-history&#x2F;oclc&#x2F;30398523" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;energy-in-world-history&#x2F;oclc&#x2F;...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;before-oil-the-ages-of-foraging-agriculture-and-coal&#x2F;oclc&#x2F;837625798" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;before-oil-the-ages-of-foragi...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;oil-age-and-beyond&#x2F;oclc&#x2F;837625970" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;oil-age-and-beyond&#x2F;oclc&#x2F;83762...</a><p>Adam Smith&#x27;s <i>An Inquiry into the Nature and Causes of the Wealth of Nations</i> is among the most-cited (and most <i>incorrectly</i> cited), least-read books of high influence I&#x27;m aware of, outside religious texts (and perhaps it <i>is</i> a religious text to some…). The author&#x27;s message has been exceptionally shaped and manipulated by a powerful set of forces, quite often utterly misrepresenting Smith&#x27;s original intent. Reading him in his own words, yourself, is strongly recommended. I&#x27;d also recommend scholarship particularly by Emma Rothschild and Gavin Kennedy, though also others, on Smith. Contrast with the portrayal by the propaganda disinformation front of the Mont Pelerin Society &#x2F; Atlas Network &#x2F; so-called Foundation for Economic Education, and much of the modern American Libertarian movement (von Mises, Hayek, Friedman, Hazlett, Rothbard, and more recently, Norberg). 
Contrast <i>The Invisible Hand</i> (1964), a compilation of essays published by Libertarian house Regnery Press in 1966, at the beginning of the rise in public use of Smith&#x27;s metaphor to indicate <i>mechanism</i> rather than <i>an expression of the unknown</i>.<p>There are numerous editions of Smith; I believe the Glasgow is the one frequently cited by Smith scholars: <a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;glasgow-edition-of-the-works-and-correspondence-of-adam-smith-2-an-inquiry-into-the-nature-and-causes-of-the-wealth-of-nations-vol-1&#x2F;oclc&#x2F;832488566" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;glasgow-edition-of-the-works-...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;economic-sentiments-adam-smith-condorcet-and-the-enlightenment&#x2F;oclc&#x2F;45282974" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;economic-sentiments-adam-smit...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;adam-smith-and-the-invisible-hand&#x2F;oclc&#x2F;820387997" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;adam-smith-and-the-invisible-...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;adam-smiths-lost-legacy&#x2F;oclc&#x2F;56598640" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;adam-smiths-lost-legacy&#x2F;oclc&#x2F;...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;invisible-hand-a-collection-of-essays-on-the-economic-philosophy-of-free-enterprise&#x2F;oclc&#x2F;326622" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;invisible-hand-a-collection-o...</a><p>I&#x27;d like to put in recommendations on technology specifically, but am still searching for a good general text. The material&#x27;s covered somewhat in the chaos and complexity recommendations above (Campbell et al.), though I&#x27;d add Joseph Tainter&#x27;s <i>The Collapse of Complex Societies</i>. Charles Perrow has several excellent books including <i>Normal Accidents</i> and <i>Organizing America</i>. I&#x27;d like to reference something concerning Unix, Linux, and programming, perhaps Kernighan and Pike&#x27;s <i>The Unix Programming Environment</i>, Linus Torvalds&#x27; <i>Just for Fun</i>, Richard Stallman&#x27;s <i>The GNU Manifesto</i>, and Steve McConnell&#x27;s <i>Code Complete</i>. The O&#x27;Reilly book <i>Unix Power Tools</i> also encapsulates much of the strength of the Unix toolset. 
All these are somewhat dated.<p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;collapse-of-complex-societies&#x2F;oclc&#x2F;15083222" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;collapse-of-complex-societies...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;normal-accidents-living-with-high-risk-technologies&#x2F;oclc&#x2F;10229932" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;normal-accidents-living-with-...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;organizing-america-wealth-power-and-the-origins-of-corporate-capitalism&#x2F;oclc&#x2F;939707157" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;organizing-america-wealth-pow...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;unix-programming-environment&#x2F;oclc&#x2F;10269821" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;unix-programming-environment&#x2F;...</a><p><a href="https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;unix-power-tools&#x2F;oclc&#x2F;52381684" rel="nofollow">https:&#x2F;&#x2F;www.worldcat.org&#x2F;title&#x2F;unix-power-tools&#x2F;oclc&#x2F;5238168...</a>
Ask HN: How can I become a proper Project Manager from a Programmer?
I&#x27;m in the process of making this transition. One of the most important things I&#x27;m trying to hold on to is a passion for programming that goes beyond writing code. I see my new role as leveraging the 13+ years of professional development I&#x27;ve invested in by providing my team with guidance and critical feedback, while ensuring the business takes on less risk by making sure the software my team produces follows industry state-of-the-art practices and meets our deadlines. I tend to look at software development as an ongoing process and I hack the process itself to get the results I want rather than relying on my own immediate intuitions and knowledge about programming. In essence, I set up the guidelines and processes that let my team be the best developers they can be and help them any way I can.<p>The other face of my role is being the intermediary between the stakeholders and the team. While I am not a licensed engineer I try to behave as though I were: if I let bad code slide into production I might lose my license. The stakeholders I collaborate with know this and they agree it&#x27;s an important position. However, the business wouldn&#x27;t move forward quickly if we were developing software like NASA does. Instead I imagine there is an actuary assessing my technical decisions who will increase my insurance rates if I make poor decisions or lower them if I make good ones. This creates a risk-vs-reward balance that I need to consider when making decisions on behalf of the business... it might be worthwhile to accept some risk in choosing a software platform my imaginary actuary would assess as risky for the sake of the team who are most familiar with it, while I might enforce certain practices that will lower my rates, like extensive fuzzing tests and formal specifications of critical components. It requires some balancing of correctness vs. agility. While I don&#x27;t see the two as mutually exclusive (in fact designing for correctness tends to make a team more agile in the long term) there is some give and take in terms of timelines and budgets that I need to be aware of. I act as the buffer between these concerns and the rest of the team, whose focus should be on making software that fulfills our requirements and intentions. I attend the meetings and negotiate the timelines and handle the interactions with the rest of the business.<p>I think it&#x27;s crucial for a manager of developers to have been a successful developer themselves with a wide range of experience. It&#x27;s equally important to gain the trust of the team in your technical acumen: mandatory code reviews have been an invaluable tool in my experience so far. If you can give a good review it demonstrates your knowledge and wisdom while giving someone the opportunity to learn something new. It also lets you become familiar with everyone&#x27;s skill level, which is invaluable when trying to give estimates and quotes to stakeholders. Get out there and get some public credibility by contributing to open source projects, speaking at conferences, and even trying your hand at writing. I&#x27;ve spoken at several conferences, have been a technical reviewer on a published book, and have contributed to the WebGL spec, Mozilla Firefox, Openstack, Python, and others. I&#x27;ll be giving a talk at a local JS conference later this year. It will help if you can develop a reputation as someone trustworthy and wise.<p>Most of all... and this is universal: be aware of what you don&#x27;t know. 
I&#x27;ve led teams in the past in the role of senior&#x2F;principal&#x2F;etc. developer but I&#x27;ve always been sheltered from budgets, timelines, product scope, etc. I&#x27;ve been asked to interview people for positions but I&#x27;ve never been in charge of setting the policy on how we hire. I&#x27;ve learned how to adapt to social situations but leading people, especially creative and talented people, is like herding cats (I&#x27;m also quite introverted so it takes extra effort on my part to keep up). When you don&#x27;t know how to deal with these things, be honest and develop a plan of action to fill those gaps.<p>For me that means contacting people who I know are great managers (and not necessarily managers of technical teams either: one friend is a genius at running her father&#x27;s restaurant and her advice has been invaluable). It also means I keep a log of terms and advice I&#x27;ve been given that I don&#x27;t understand. I use this log to do keyword searches and find books and blogs on the topics I&#x27;m missing out on.<p>Just keep at it and learn to look at the process of making software as one big software system itself. It can be quite entertaining and interesting.
Why are Adults so busy?
I read this and I think that it mostly sounds like a list of self-created problems, largely arising from the author&#x27;s vanity and poor impulse control, that need to be solved.<p>The crucial #1 thing that&#x27;s <i>entirely missing from this list</i>, though, is cultivating personal connections with other people. That&#x27;s the thing about your life that most strongly determines your happiness, as well as your professional success, your educational opportunities, and your love life (obviously). And it&#x27;s not just a matter of taking care of dependents (#13), because it involves people who aren&#x27;t your dependents, too. This list implicitly relegates that to last place, behind ironing your shirts, fixing dents in your car, and fucking off at the gym. Don&#x27;t do that. Spend time on connecting with other people <i>every day</i>. Help them. Listen to them. Work with them. Play with them. If you don&#x27;t, your life is going to be terrible, no matter how much money you have, how clean your toilet is, or how up-to-date your sound system is.<p>I&#x27;m not speaking theoretically here, although there&#x27;s lots of psychological theory backing up what I&#x27;m saying. I knew a number of people who killed themselves. Don&#x27;t be the next one. Connect. On your deathbed you aren&#x27;t going to wish you&#x27;d owned a more complete set of cutlery.<p>And this is, in my experience, the biggest difference from childhood — you have to <i>deliberately</i> connect with people, because it doesn&#x27;t happen by default. As a child, your parents will invite people to your birthday party. As an adult, if you don&#x27;t invite people to your birthday party, you won&#x27;t have a birthday party. This is more an opportunity than a burden, because it means <i>you choose who to create connections with</i>.<p>What about the rest of this stupid list?<p>I&#x27;m going to mention my life from time to time in what&#x27;s below, but I don&#x27;t want you to get the wrong idea. I&#x27;m far from a paragon, of self-discipline or anything else. I&#x27;m not holding my life up as a model to be emulated, although it does reflect my own values to some extent. I&#x27;m saying, if even I can do this shit, any fucking idiot can do it. Probably you can do better.<p>1. &quot;Have money&quot;. Okay, I like to have money. I like to work, too. I&#x27;ve paid my own way (and sometimes some other people&#x27;s ways) since I was 18; I&#x27;ve been lucky in that this has been a lot easier for me than for most people. But honestly, I haven&#x27;t been shallow enough to equate financial independence with adulthood since I was about 12. Housewives aren&#x27;t adults? But I was an adult when I was paying my rent by working at Taco Bell? Please.<p>How much you have to work to make a living has a lot to do with how much you spend and how much you can bill. Financially independent people&#x27;s spending, even in the US, varies from a few thousand dollars a year up to hundreds of thousands, without even getting into the super-wealthy. Workers get paid anywhere from US$10 per hour (or even less for prisoners and illegal aliens) up to US$1000 per hour or more. Outside the US, the variation is even greater.<p>Don&#x27;t tell me that across this entire range of expenses and earnings you have to work &quot;40–60 hours a week&quot; in order to have money. You work that number of hours because it&#x27;s the norm, not because your living expenses magically adjust to match your earnings within 15%. 
Then, you spend however much you make, instead of what you need, because you&#x27;re a fucking idiot. I&#x27;ve been there too, man. It sucks. But you can stop doing that, unless you&#x27;re at the bottom of the income distribution or have extra expenses.<p>2. &quot;Cooking.&quot; You can totally cook if you want to. Cooking is an enjoyable activity, and feeding people even more so. But you definitely don&#x27;t have to spend half an hour cooking breakfast every day if you&#x27;re short on time. Boil half a dozen eggs on Sunday night, have an egg and a banana for breakfast each morning. Make a casserole on Saturday, eat slices for dinner all week. Have peanut-butter sandwiches and salad for lunch. Cook dinner for yourself, a partner, and two friends; then you only cook dinner one out of every four times, unless you somehow get slotted into a housewife role. Boil eggs, chopped onion, and cheese in a Ziploc bag to make a Ziploc omelet. Chop vegetables on Tuesday night and use the vegetables in food for the rest of the week. Use dried onion and garlic in bottles. Make three liters of cooked rice and eat from it the rest of the week, or use a rice cooker. Keep a seasoned salt mix in a shaker. Keep oils for cooking in squirt bottles with conical nozzles next to the stove. Make a big batch of curry, pack it into a dozen big Ziploc bags, and freeze them all with separators in between.<p>My breakfast this morning was canned mackerel, wheat crackers, and a peanut bar; lunch was instant noodles. (I&#x27;m trying not eating after midday this week, although I&#x27;m buying the food rather than begging for it in the street in the traditional way.)<p>3. &quot;Laundry.&quot; Don&#x27;t dry-clean your own clothes, as the article bizarrely suggests; that&#x27;s dangerous enough to outsource to a specialized company. Don&#x27;t fold your clothes except on special occasions; wear knits instead. Wear your clothes twice before washing. Wear flip-flops instead of socks. Synthetics dry faster, but I can&#x27;t wear them more than once; I can wear silk, wool, or cotton twice. You can get laundry down to 20 minutes a week (per person) if you have a washing machine.<p>4. &quot;Cleaning.&quot; This is largely a matter of how much living space you have, although yeah, I probably spend a few hours a week washing dishes. When my then wife and I lived in a van, we sure as hell didn&#x27;t spend 5 hours a week each cleaning it. We probably didn&#x27;t spend two hours a week between the two of us. (We did have to spend a lot of hours fixing it, though...)<p>5. &quot;Buying stuff.&quot; This is the only one on the list that actually saves you time — buying bookshelves is a hell of a lot faster than making them, not to mention toothpaste. But I&#x27;ve still wasted a terrible fraction of my life on it. You can reduce the time you waste on buying things by buying in bulk, often by buying online, by buying things that last, and by possessing less. (If you are poor, you may find yourself obliged to possess a great many things, just in case; but if you are not poor, you can take advantage of the opportunity afforded by your money to only buy the things you do need.)<p>6. &quot;Bills.&quot; My roommate and I have six bills: rent, internet+telephone, gas, electricity, water, and property tax. Most of these come once every two months. (The property tax we can pay yearly.) It definitely doesn&#x27;t take us 16 hours every two months between the two of us to track and pay them. 
Don&#x27;t live alone; that&#x27;s a stupid waste of time. Of course, if you pick the wrong roommate, you could waste a lot more than 16 hours arguing... but that hasn&#x27;t been a problem.<p>7. &quot;Small errands.&quot; Yes, these can take an unbounded amount of time. Avoid them as much as possible. A lot of these come from your possessions and bills.<p>8. &quot;Transport.&quot; Yes, it&#x27;s easy to waste hours a day on transport (to say nothing of the time you work to pay off a car loan). I live 20 minutes away from my office by public transit or bicycle. Usually I read or write on the bus; sometimes it&#x27;s too crowded. About once a month, I have to top up the transit pass. This is a 45-second cash transaction at the ticket counter in the subway station. I should probably spend a couple of hours getting my bicycle back in working order, because it&#x27;s often more convenient.<p>It&#x27;s easy to get into a position where you&#x27;re spending two or four or six hours a day on transport, and worse, transport that consumes your entire attention. If you&#x27;re in that position, recognize that it&#x27;s an urgent problem and you need to get out of it. Only isolation from other human beings is more psychologically damaging than long commutes, and only serious illness wastes more time.<p>9. Exercise. (The article says &quot;staying healthy&quot;, but ⓐ that&#x27;s a lost cause, you&#x27;re going to fucking die just like everybody else, and ⓑ eating is already point #2.) What the fuck is wrong with you that you drive a car to a job five days a week and then lack exercise? How about walking a little bit? I walk about a kilometer a day to get to the bus or subway, two or three kilometers most days. If I bike to work and back that&#x27;s five kilometers. Also you could work (in the traditional sense of the word). I&#x27;m sure there&#x27;s <i>something</i> in your house that could use some elbow grease. A few weeks ago I was also taking the stairs the fifty meters up to the office — this takes four minutes. I think I&#x27;ll start again tomorrow.
Ask HN: What's your favorite HN post?
This may well get buried and I had to hunt for about 10 min to find it. This post from 3,127 days by fiaz is my all-time favorite HN comment. And I&#x27;ve read hundreds of thousands of comments.<p><a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=121413" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=121413</a><p>&gt; fiaz 3127 days ago | parent | favorite | on: Ask News.YC: How to re-motivate yourself?<p>APOLOGIES for making this post so annoyingly long, but I really hope you find value in the words below. -----------------------------------------------------------------------<p>I&#x27;m going to first share a personal experience from my early trading days to illustrate where I&#x27;m coming from. I used to wake up at 4:30 am everyday in the Chicago suburbs to beat rush hour traffic and make it into downtown Chicago at 6:30 am. In order to wake up so early, I fell into a habit of sleeping at 9:00 pm and like a robot waking up at 4:30 am. This simple routine was indirectly helpful when things seemed darkest.<p>For the first six months, I lost money and was ridiculed constantly by other traders who were more successful than me (which was about 20 other guys CONSTANTLY using me as a punching&#x2F;whipping bag). The only thing that kept me going was the fact that some of the very same traders that would be making wise cracks at me for losing money were some of the most successful people I knew at the time. For better or worse, if I needed a trader to model myself after, it was the same people that were telling me how bad a trader I was - and although I was not open to really hear what they were saying, they were right about my skills in every way (but their feedback was always packaged in some sort of insult).<p>After racking up some rather hefty losses, I was determined to quit at one point during month four, but because I had a habit of waking up at 4:30 am I simply &quot;forgot&quot; that the night before I told myself I would quit and spare myself further humiliation...by then I was warned that I was now on the red list of traders ready to be cut. Also, my personal savings were starting to approach zero (the base &quot;draw&quot; for house traders was enough to pay for food; you usually make your money on a percentage of your profits, and I was deep in the red at the time).<p>To say the least, there were many excellent reasons to be &quot;reasonable&quot;, forget about my dreams, and quit. After 4 consecutive &quot;failures to quit&quot;, I realized that I didn&#x27;t quit because somewhere deep down I was hanging on to a dream, however remote at that point: that I could somehow be as successful as the other traders that I knew. At the same time I realized that I had hit rock bottom in that I couldn&#x27;t even succeed in failing! Very tough times indeed...<p>An interesting point to note here is that although my losses were starting to get very large, the people who were funding me as a trader kept me because I had one redeeming quality: EFFORT, and this helped build tenacity. Other traders who barely traded but had a fraction of my losses were cut much faster because they didn&#x27;t put forth much effort. 
They were not willing to take losses and be bold&#x2F;brave and fight it out; I was willing to take risks, and this saved me from getting cut faster than others.<p>Slowly I began to reinterpret the constant humiliation I was suffering: perhaps the other traders were right about their &quot;jokes&quot; and there might be something in what they are saying that will help me get out of the red. I also realized that since I had failed at quitting (which was now the ULTIMATE failure), there was no further failure for me and that if I took baby steps they were surely to succeed (this translated into taking smaller trades&#x2F;profits).<p>Only after improving upon my abilities as a trader and channeling my energies appropriately did I succeed and earn everybody&#x27;s respect as a trader (and you have no idea how this made me feel!). I quickly made enough in commissions to be trading my own account, and be successful as an independent trader onward. When I look back at those final months of 1999 (yeah that&#x27;s right, I was losing huge cash at the end of 1999 when the entire market was going crazy UP!), there was more good than bad even when I was getting my ass handed to me. It&#x27;s just that I was intentionally creating my own feedback (I&#x27;m right everybody else is wrong) instead of seeing the results I was getting (losses&#x2F;insults) as feedback and information that would help me be successful.<p>I kind of snicker every time I see somebody ask for feedback on their startup on YC.News only to end up justifying themselves by telling everybody why they did what they did when they get negative feedback, which is the feedback of greatest value. If somebody tells you how crappy your idea is, thank them that they even spent a few brain cycles considering your idea.<p>The lessons I learned from this that are perhaps relevant to your questions:<p>- Determine if you believe in yourself to succeed as an individual (I know this sounds odd, but for a moment just examine your thought patterns and your actions and see what message you are sending to yourself; do you listen to the voice that says you can&#x27;t or are you paying attention to the feedback from your efforts and the results you are getting?)<p>- Search deep down inside and see if the project you are working on is something you believe in or not. If you can&#x27;t sell yourself, then you shouldn&#x27;t bother trying any further...<p>- ANY attention you get for your efforts is good attention. If you get LOTS of negative feedback, then be grateful - you&#x27;ve jumped the first hurdle of getting people to give a damn about what you are doing! :)<p>- There is responsibility and accountability that goes with both success and failure. You need to be ready for both because they can be equally painful in equal ways. The amount of accountability that comes with success can be more unbearable than the accountability that accompanies failure. I personally know of some very talented people who enjoyed phenomenal initial success only to find just as fast that they were in over their heads.<p>- The more you resist the possibility of failure then you are less likely to recognize possibilities that will help you succeed. If you are afraid to fail, then most certainly you are afraid to succeed. 
This sounds counterintuitive but it&#x27;s based upon the fact that fear makes your mind less supple and less responsive to the changes that will push you out of the game - or conversely it will lessen the impulse to jump on the opportunities you need to succeed.<p>- The results you get have everything to do with your users&#x2F;market and less to do with you as an individual; it&#x27;s sometimes hard to separate these two. See the other side of the equation and what side you are on before trying to solve it. Don&#x27;t ever think you are above the feedback of your users...EVER!<p>- Don&#x27;t have expectations (this is just setting yourself up for failure). Because you are starting out you may not know what is best to help you succeed - ESPECIALLY if you&#x27;re lacking motivation. Keep in mind that whatever results you get from your efforts will lead to more possibilities (in the form of additional information).<p>- Have some behavioral &quot;context&quot; within which to exercise discipline and structure. Seek to grow your efforts within this context. My context was my sleep schedule. It was a routine that was so ingrained that my drive had a laser focus. This might not work for some, but it worked for me.<p>Finally, I will add that in my opinion failing hard and fast is MUCH better than failing slowly. The faster you know for certain something isn&#x27;t going to work out, the sooner you can cut your losses and move on to your next idea. When you eventually succeed, you will look back at all the times you were quick to cut your losses and get to where you are...<p>---------------------------------------------------------<p>Please do NOT contact me asking for advice in trading&#x2F;investing. This is a VERY personal thing, and it has everything to do with who you are, NOT with how much information you have, or which tools you use, or who you know.
Show HN: A minimalistic cloud provider
Not sure if this is helpful, but here&#x27;s some output from one of their 4 USD&#x2F;month servers:<p><pre><code> root@localhost:~# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                1
On-line CPU(s) list:   0
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 2
Model name:            QEMU Virtual CPU version 2.1.2
Stepping:              3
CPU MHz:               1899.998
BogoMIPS:              3799.99
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              4096K
NUMA node0 CPU(s):     0
Flags:                 fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni cx16 popcnt hypervisor lahf_lm abm

root@localhost:~# sudo lshw -short
H&#x2F;W path           Device       Class      Description
================================================
                                system     Bochs
&#x2F;0                              bus        Motherboard
&#x2F;0&#x2F;0                            memory     96KiB BIOS
&#x2F;0&#x2F;401                          processor  QEMU Virtual CPU version 2.1.2
&#x2F;0&#x2F;1000                         memory     512MiB System Memory
&#x2F;0&#x2F;1000&#x2F;0                       memory     512MiB DIMM RAM
&#x2F;0&#x2F;100                          bridge     440FX - 82441FX PMC [Natoma]
&#x2F;0&#x2F;100&#x2F;1                        bridge     82371SB PIIX3 ISA [Natoma&#x2F;Triton II]
&#x2F;0&#x2F;100&#x2F;1.1                      storage    82371SB PIIX3 IDE [Natoma&#x2F;Triton II]
&#x2F;0&#x2F;100&#x2F;1.2                      bus        82371SB PIIX3 USB [Natoma&#x2F;Triton II]
&#x2F;0&#x2F;100&#x2F;1.2&#x2F;1       usb1         bus        UHCI Host Controller
&#x2F;0&#x2F;100&#x2F;1.3                      bridge     82371AB&#x2F;EB&#x2F;MB PIIX4 ACPI
&#x2F;0&#x2F;100&#x2F;2                        display    GD 5446
&#x2F;0&#x2F;100&#x2F;3           ens3         network    Virtio network device
&#x2F;0&#x2F;100&#x2F;4                        storage    Virtio block device
&#x2F;0&#x2F;100&#x2F;5                        generic    Virtio memory balloon
&#x2F;0&#x2F;1               scsi1        storage
&#x2F;0&#x2F;1&#x2F;0.0.0         &#x2F;dev&#x2F;cdrom   disk       DVD reader
</code></pre> The RAM is ECC, according to the &quot;dmidecode -t memory&quot; command.<p><pre><code> root@localhost:~# dd if=&#x2F;dev&#x2F;zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.5635 s, 85.5 MB&#x2F;s</code></pre>
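For anyone who wants to repeat that sequential-write check on a few boxes without remembering the dd incantation, here&#x27;s a rough Python equivalent. It&#x27;s a sketch, not a proper benchmark (single run, no O_DIRECT, page cache involved until the final fsync); the 64 KiB block size and 1 GiB total mirror the dd command above, and the file name is just an example.<p><pre><code>import os
import time

PATH = "ddtest.bin"      # example path; needs about 1 GiB free
BLOCK = 64 * 1024        # 64 KiB blocks, as in the dd command above
COUNT = 16 * 1024        # 16k blocks -> 1 GiB total

buf = b"\0" * BLOCK
start = time.time()
with open(PATH, "wb") as f:
    for _ in range(COUNT):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())  # roughly what conv=fdatasync does
elapsed = time.time() - start
mb = BLOCK * COUNT / 1e6
print(f"wrote {mb:.0f} MB in {elapsed:.1f}s -> {mb / elapsed:.1f} MB/s")
os.remove(PATH)
</code></pre>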
Ask HN: Is it possible for someone to not be cut out for software engineering?
I&#x27;ll try to answer your question and provide advice on each of your points but to your broader question, <i>yes, absolutely</i> someone can be not cut out for software engineering. The people I know who were &quot;not cut out for it&quot; usually went to school for CS because &quot;that&#x27;s where the jobs are&quot; but it wasn&#x27;t necessarily where their passions were. Several of them ended up in tech, but often on the sales or management side, not in software development. Really, if you enjoy it and want to do it, you can <i>become</i> cut out for it. You&#x27;ll simply learn what you need to learn to become successful.<p><i>- Working harder than others in college.</i> I don&#x27;t find this to be a particularly useful guide for anything other than your work ethic. You had a respectable GPA and were willing to spend 60 to 80 hours per week on schoolwork. So what if someone was able to eke by more easily? Maybe they had a leg up on you and got into it earlier. Maybe they wouldn&#x27;t have put in that time if they needed to.<p><i>- Rejected interviews &#x2F; performance anxiety &#x2F; bad at algorithms</i> I can&#x27;t speak to working for &quot;the big 4&quot;. I haven&#x27;t had any interest in doing that, myself. There are plenty of great jobs out there at smaller companies (including startups). If you know you&#x27;re bad at algorithms, start studying. Grab the ones from interviews that you&#x27;ve been bad at and learn them up and down. Write example libraries of each in a few programming languages. <i>Put them in a GitHub&#x2F;GitLab&#x2F;BitBucket repository.</i> For the languages you&#x27;re targeting jobs in, find an open source project in the language that has a good community and participate. Find a library you wish you had and write it. <i>Put them in a repository.</i><p>Failing at interviewing at a top-tier company may be an indicator that you&#x27;re not cut out to work at one of those top-tier companies. This sounds worse than it is. Maybe you wouldn&#x27;t thrive in that environment as much as you&#x27;d like to thrive? Wouldn&#x27;t you be happier with a job at a smaller outfit where you can grow, or maybe just a <i>different</i> outfit? If Google&#x2F;Facebook&#x2F;whomever else you consider top tier is proving too difficult to get into, look elsewhere.<p>I mentioned the whole repo thing and this is advice you&#x27;ll find all over Hacker News and elsewhere. It&#x27;s not an industry fairy tale ... it works. In your <i>cover letter</i>, specifically mention your experiences with the languages they&#x27;re looking for and link to relevant projects. In your resume, provide a link to your GitHub ID or relevant ID on another site that has a list of your projects. If you&#x27;re lucky, they&#x27;ll have already looked over some of your code. But don&#x27;t count on it.<p>Having that available gives you the opportunity to creatively <i>deflect</i> those technical questions. Remember, they&#x27;re asking you to demonstrate your knowledge. Just because they gave you a whiteboard doesn&#x27;t mean you can&#x27;t demonstrate it differently. Three years ago I took an interview[0] and was asked something algorithmic around multi-threading. I had written a library in a private repo that handled message passing between two applications running on the same machine in a thread-safe manner and was directed to the whiteboard. 
I said &quot;I can do you one better&quot;[1] and mentioned a library I had written for thread-safe in-memory message passing between two applications running on the same machine where I not only had to solve that problem, but had to do so very performantly and had to address a number of other corner cases. The interviewer let me log into my BitBucket account, plugged his laptop into the meeting room&#x27;s TV and after a quick apology about the code quality (it was actually pretty good, but not perfect which was why it wasn&#x27;t public, yet), I showed him the solution. The upshot was that the <i>entire rest of the interview</i> was me walking through this code[2]. Why did you use a Mutex there? Why a ManualReset there but an AutoReset there? It was a <i>lot</i> of fun.<p><i>1)</i> Yes. When the benefit of getting into a top tier company outweighs the grief in trying to get there. Maybe you&#x27;re there, maybe not? Ask yourself <i>why</i> you&#x27;re focusing on these specific top-tier companies and find out if there&#x27;s a company not currently in your list that may fit those criteria would be my only advice here.<p><i>2)</i> You&#x27;re already doing this. Post mortems are a good idea. Another thing you can do is join some meet-up groups that have professionals in the parts of the industry you&#x27;re trying to get into. After you get to know some people, you&#x27;ll find folks who do regular interviews. Ask them to help you. I used to give a <i>lot</i> of interviews and I have volunteered for interview prep many times. A lot of anxiety around interviewing comes from social anxieties in general. Joining a meet-up group will give you practice at introducing yourself and making a good first impression. Walking up to random strangers at the super-market and striking up a conversation works, too (I&#x27;ve done this as practice, my self).<p><i>3)</i> My last two answers are my best advice. A mentor would be helpful, but it doesn&#x27;t have to be such a formal mentor&#x2F;mentee relationship. Get into some user groups&#x2F;meetups and meet others in software development who are where you want to be. Make friends and those friends will become your mentors by default if you&#x27;re willing to seek advice, ask for help, and accept hard observations you may not want to hear.<p>And, most of all, hang in there. It sounds like you really <i>like doing this stuff</i> and <i>want to do it</i>. You&#x27;re already ahead of most of the people I used to interview. Granted, it wasn&#x27;t at a top-tier company (though we were a huge internet company) and it wasn&#x27;t for the sexiest of development jobs (because that kind of attitude would have had you hired pretty easily if you were even close to qualified where I was at).<p>[0] This interview found me. I wasn&#x27;t looking at the time but my dad&#x27;s advice of &quot;never turn down an interesting interview&#x2F;opportunity&quot; stuck in my head, so I was <i>very</i> casual in this interview. That turned out to work in my favor for whatever reason and I ended up being offered the position at a salary figure I had never expected to get. I didn&#x27;t take the job because it required moving out of state and that wasn&#x27;t an option for me at that time.<p>[1] This sounds really arrogant and I&#x27;m embarrassed to say that those were <i>my exact words</i>. 
It could have <i>easily</i> been off-putting to the interviewer and I knew that, but because of the last footnote, I was overly casual and confident (if I didn&#x27;t get the job offer, who cares, I probably can&#x27;t take it anyway!). The funny thing was, this group of people went from extremely formal in the beginning to casual by the end. I felt like we were having a discussion like I&#x27;d have with other developers over beer, not like I was having my knowledge put to the test and when the &quot;thanks&#x2F;hand-shakes&quot; happened at the end, one of the guys said something along the lines of &quot;Thanks for your time, I really enjoyed this interview&quot; to me, which stood out since I can&#x27;t remember an interview experience that equalled it.<p>[2] I picked a perfect library and I ended up using this library in two other interviews as example code. It was a tricky bit of logic where you had two applications, each in different security domains with both responsible for processing some data and one responsible for requesting and writing the data. They used MemoryMappedFiles to share the data between them and had to manage situations where either side may not be in the position to be able to receive the data, so it covered a number of scenarios neatly in one library and made message passing with these odd requirements a simple matter of a few lines of code wrapped in whatever threading construct one wished to use.
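The library in footnote [2] was .NET (MemoryMappedFile plus Mutex&#x2F;AutoResetEvent), and that code isn&#x27;t public, but here is a deliberately tiny Python sketch of the same general idea: two processes sharing a length-prefixed message through a memory-mapped file, with an event for signalling. The file name and message are made up, and it skips all the corner cases (security domains, back-pressure, partially written messages) that made the real thing interview-worthy.<p><pre><code>import mmap
import os
import struct
import multiprocessing as mp

SIZE = 4096
PATH = "msgbuf.bin"  # hypothetical file both processes map

def reader(ready):
    ready.wait()  # block until the writer has published a message
    with open(PATH, "r+b") as f, mmap.mmap(f.fileno(), SIZE) as m:
        (length,) = struct.unpack_from("I", m, 0)  # 4-byte length prefix
        print("reader got:", m[4:4 + length].decode())

if __name__ == "__main__":
    with open(PATH, "wb") as f:  # pre-size the shared buffer
        f.write(b"\0" * SIZE)
    ready = mp.Event()
    p = mp.Process(target=reader, args=(ready,))
    p.start()
    msg = b"hello from the writer"
    with open(PATH, "r+b") as f, mmap.mmap(f.fileno(), SIZE) as m:
        struct.pack_into("I", m, 0, len(msg))  # length prefix, then payload
        m[4:4 + len(msg)] = msg
        m.flush()
    ready.set()  # signal the reader that the message is ready
    p.join()
    os.remove(PATH)
</code></pre> Being able to walk through even a toy like this, and explain what the real version had to handle on top of it, is the point of the &quot;put it in a repo&quot; advice above.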
Ask HN: Is it possible for someone to not be cut out for software engineering?
Disclaimer: I have never worked for a Big 4 company, and the only interviews I had with them were during college with Amazon, and I failed both of those, either in the 1st or the 2nd round. So I&#x27;m not a success story. That being said, if I were to set this as a goal, I know what I would do.<p>&gt; Is it possible for someone to not be cut out for software engineering?<p>The answer to such a question is always &#x27;yes&#x27;. But there&#x27;s a lot of danger in assuming that you are the someone. This is betting against yourself. And there&#x27;s only one of you in the current reality state. Don&#x27;t bet against yourself. That is not how you should think.<p>Definitely don&#x27;t bet against yourself in this instance if you already have experience successfully getting multiple jobs...<p>What you should investigate instead is what you want, and how much you want it. Consider the various pros and cons and how they make sense to you. How much do you really want to work for a Big 4 company? How do you feel about some of the potential tradeoffs (i.e., time spent on learning algorithms and interview questions)? Same with the software engineer question. How much do you want to be one? Why? What are the tradeoffs?<p>[Note: not all tradeoffs are &quot;true&quot; tradeoffs, i.e., that you&#x27;ll lose something. Learning algorithms may make your mind sharper and help you in other areas. But it also means you can&#x27;t spend that time on, say, relationships, entertainment&#x2F;hobby, or even something in the health department. There&#x27;s nothing wrong with tradeoffs, and don&#x27;t scrutinize them too much, but still be aware that nothing you do is free]<p>The problem with the question of &quot;am I not cut out to be a software engineer &#x2F; Big 4 employee&quot; is that no one can answer it, including you. You will, most likely, never ever know unless you reach some success point where you can definitively say yes. You can&#x27;t just base it on things like being rejected by many companies or struggling in college, because that already implies those are reliable proxies and that&#x27;s a really shaky assumption. I had trouble in college, too. I graduated with a 3.0. It doesn&#x27;t seem to mean a thing, other than what it literally means.<p>&gt; So far, I have interviewed for and been rejected by no less than 10 different roles. I was also rejected by approximately 20 companies during college. I always fail during tech portions.<p>This is neither here nor there. There are a lot of factors that could go into something like this, it could be way too many things. Not enough information. The only thing I&#x27;ll say is try to develop a model of what kind of companies you are not a good fit for, so that you don&#x27;t spend too much time on them, and avoid wasting too much time on unlikely pathways unless you really want to work for some specific companies. I would often apply to very few places, get offers from all of them, and then choose among those. Applying to lots of companies indiscriminately was both stressful and yielded nothing. Also, don&#x27;t be discouraged from applying to places that have requirements you don&#x27;t meet but are nonetheless interesting to you.<p>Also, field, location, frameworks, what the company lacks, how the company is doing, etc., all affect your chances.<p>&gt; performance anxiety<p>I got rid of my performance anxiety mostly through a major philosophical shift. I don&#x27;t know if this is a topic that one can give &quot;simple&quot; advice on... 
in the context of interviews, for any given interview, assume that you will pass it. Just assume this, without making anything depend on it being true. Any time something in the interview goes &quot;wrong&quot;, just assume it doesn&#x27;t matter. Don&#x27;t think about how you &quot;should&quot; know the answer to some question, just give your best answer or say you don&#x27;t know and move on and do not assume that this jeopardizes your interview.<p>Whether you did something &quot;wrong&quot; during the interview, you can figure that out after it is over. And, remember, they&#x27;re just interviews. You do not owe it to the world to pass them, they don&#x27;t say something insidious about you, you&#x27;re not a worse person for not passing one, nor are you a worse software developer for not passing them. You&#x27;re interviewing for your benefit, not theirs or anyone else&#x27;s.<p>&gt; If my goal isn&#x27;t an impossibility, how can I efficiently progress towards it? Would a mentor be helpful?<p>Assuming you do decide that getting into a Big 4 company is a fairly high priority goal for you (and, really, even if it&#x27;s not), the first thing I would recommend is making sure that you&#x27;re focusing on progress and results as opposed to time or work. You want the most productive results from the least amount of time and work. All work should be justified.<p>Essentially, you&#x27;re trying to learn how to solve algorithms quickly and under pressure. As with any learning task, this is a fairly big and complex topic that&#x27;s not well understood. This is where you want to apply your learning-how-to-learn skills and try to pool whatever intelligence, intuition, and knowledge you currently possess. I can write, well, a lot on this topic so I&#x27;ll try to keep it relatively short: try to figure out what is needed and what is missing in your head, and try to find a way to process your learning style and what kind of things give you trouble. Grinding on a problem over and over probably benefits brilliant people more than the slow ones among us, since the brilliant people can make their brain form all the connections, while we actually need to trace what goes where.<p>Maybe you have a poor memory and you need to organize the algorithms you&#x27;re learning. Maybe you&#x27;re not used to writing code a certain way and you need to do that. Maybe you should take a stab at some weird language to free up your brain from misconceptions. Maybe you should play a video game to see some pattern you&#x27;ve never paid attention to before. Maybe you should get some sleep and stop worrying about things for a week. It&#x27;s a bit of a strange process at times but it&#x27;s not entirely hit or miss and if you are very attentive to your brain and you do not waste time shaming and guilting yourself, you can discover a lot of interesting things about how you work, whether or not you&#x27;ll make it into that Big 4 company.<p>I can write more about this but it&#x27;s not going to fit in an HN post.<p>Just, don&#x27;t bet against yourself.
Why We Can Send to Gmail in China
EDIT: format<p>The post only tells part of the story. There are at least two places that the GFW can (and does!) block email traffic as long as the traffic goes through the GFW:<p>1. By DNS-poisoning your domain name. There&#x27;s nothing special here: when the GFW decides to poison your domain, all query types will be poisoned, including your MX record.<p>2. By TCP-resetting your SMTP connection to the MTA (or forging a reply from the other end) if the sender or recipient is something special.<p>For #1, this is happening to my domain name yegle.net. This can be demonstrated via a DNS query sent from outside of China to <i>any</i> server in China (even if it&#x27;s not a DNS server) (see examples at the end of this comment).<p>For #2, this is happening to my gmail account. Try connecting to an MTA in China from outside of China; as soon as you type &quot;MAIL FROM: MY_EMAIL_ADDRESS&quot;, you&#x27;ll get a TCP reset (see examples at the end of this comment). If you are sending email from China to my email address, you&#x27;ll get a forged reply saying the email address doesn&#x27;t exist (again, see the example at the end of this comment).<p>In order to make sure you can send&#x2F;receive email from&#x2F;to China, you need to make sure both the sender&#x27;s and receiver&#x27;s email services support [StartTLS](<a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Opportunistic_TLS" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Opportunistic_TLS</a>). (There&#x27;s a small STARTTLS checker sketch after the transcripts below.)<p><pre><code> &#x2F;&#x2F; TEST FROM OUTSIDE OF CHINA
$ dig MX yegle.net @54.222.60.218

; &lt;&lt;&gt;&gt; DiG 9.9.5-3ubuntu0.8-Ubuntu &lt;&lt;&gt;&gt; MX yegle.net @54.222.60.218
;; global options: +cmd
;; Got answer:
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: 9506
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;yegle.net.        IN    MX

;; ANSWER SECTION:
yegle.net.    2654    IN    A    8.7.198.45

;; Query time: 176 msec
;; SERVER: 54.222.60.218#53(54.222.60.218)
;; WHEN: Mon Sep 19 13:13:17 PDT 2016
;; MSG SIZE rcvd: 43

&#x2F;&#x2F; TEST FROM OUTSIDE OF CHINA
$ dig MX yegle.net @8.8.8.8

; &lt;&lt;&gt;&gt; DiG 9.9.5-3ubuntu0.8-Ubuntu &lt;&lt;&gt;&gt; MX yegle.net @8.8.8.8
;; global options: +cmd
;; Got answer:
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: 17621
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;yegle.net.        IN    MX

;; ANSWER SECTION:
yegle.net.    299    IN    MX    10 aspmx.l.google.com.
yegle.net.    299    IN    MX    20 alt1.aspmx.l.google.com.
yegle.net.    299    IN    MX    30 alt2.aspmx.l.google.com.
yegle.net.    299    IN    MX    40 aspmx2.googlemail.com.
yegle.net.    299    IN    MX    50 aspmx3.googlemail.com.

;; Query time: 24 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Mon Sep 19 13:13:27 PDT 2016
;; MSG SIZE rcvd: 171

&#x2F;&#x2F; TEST FROM OUTSIDE OF CHINA
$ telnet mail.kingsoft.com 25
Trying 219.141.176.248...
Connected to telecom.mail.kingsoft.com.
Escape character is &#x27;^]&#x27;.
220 mail.kingsoft.com ESMTP
EHLO gmail.com
250-mail.kingsoft.com
250-8BITMIME
250 SIZE 52428800
MAIL FROM: cnyegle-AT-gmail-com
Connection closed by foreign host.

&#x2F;&#x2F; TEST FROM INSIDE OF CHINA
$ telnet aspmx.l.google.com 25
Trying 209.85.225.27...
Connected to aspmx.l.google.com.
Escape character is &#x27;^]&#x27;.
220 mx.google.com ESMTP u6si11379881igw.58
EHLO yegle.net
250-mx.google.com at your service, [183.151.34.162]
250-SIZE 35882577
250-8BITMIME
250-STARTTLS
250 ENHANCEDSTATUSCODES
MAIL FROM:&lt;[email protected]&gt;
250 2.1.0 OK u6si11379881igw.58
RCPT TO:&lt;cnyegle-AT-gmail-com&gt;
551 User not local; please try &lt;forward-path&gt;
Connection closed by foreign host.</code></pre>
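If you&#x27;d rather check the StartTLS part programmatically than by hand-typing SMTP, here is a minimal sketch using Python&#x27;s smtplib. It only verifies that a given MTA advertises and accepts STARTTLS; run it from outside China against your own MX, since (as shown above) the connection may be reset before you even get this far when the traffic crosses the GFW.<p><pre><code>import smtplib

def supports_starttls(host, port=25, timeout=10):
    """Connect to an MTA and report whether it advertises STARTTLS."""
    with smtplib.SMTP(host, port, timeout=timeout) as s:
        s.ehlo("example.org")      # any HELO name will do for this check
        if not s.has_extn("starttls"):
            return False
        s.starttls()               # upgrade the connection; raises on failure
        s.ehlo("example.org")      # re-EHLO over TLS, per the RFC
        return True

if __name__ == "__main__":
    print(supports_starttls("aspmx.l.google.com"))
</code></pre>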
Ask HN: What are the must-read books about economics/finance?
Preface: For a bit of I suppose... uhh, qualification, I took nearly every single upper division Economics class my university offered (~25). I did so because I LOVE Econ. Also, sorry for the rambling nature of this.<p>First things first, finance is only sort of economics, it&#x27;s really just finance. I&#x27;d highly recommend taking an accounting class (or book) and grabbing an intro finance book. Accounting will really help with jargon, and just some really basic things (like balance sheets). Also, &quot;Security Analysis&quot; [0] is the &quot;only&quot; book you&#x27;ll ever need, Warren Buffett recommended it to Bill Gates, and now Bill Gates recommends it to everyone.<p>Back to Economics... There are two primary &quot;groups&quot; of thought... sort of like twins separated at birth who grow to hate each other.<p>---------------------------------- The First: Neoclassical Economics ----------------------------------<p>Focuses primarily on microeconomics and is largely mathematical. Its birth is largely due to Economists wanting to make econ a &quot;true science&quot; like we see the physical sciences (biology, chemistry, physics). It starts around the late 1800s and really picks up steam around the time of Einstein. Math was hot and being applied everywhere.<p>A really interesting period to research and study is right after Black Tuesday (and before the Great Depression) and what the central bank didn&#x27;t do (before central bank intervention in markets). While I really detest the bastard, Milton Friedman&#x27;s work on monetary policy is pretty solid science and generally good here. [1],[2].<p>I&#x27;m a Keynesian (I suppose-- Econ gets deep fast), and so you&#x27;d be nowhere without reading some of what Keynes did to get our asses out of the Great Depression (i.e. government spending). It&#x27;s also more or less the birth of Macroeconomics... You&#x27;ll know you&#x27;re good when you laugh at forgetting: Y = C + I + G + (X - M). Some good things to get started are looking at the IS-LM [3] model and AS-AD [4] model.<p>That gets you into the 60s - 70s. Tall Paul Volcker is the unsung hero of the 80s, read about him (he ran the Federal Reserve). After that microeconomics starts to fragment into things involving game theory and behavioral economics (Daniel Kahneman is the man).<p>Econometric analysis, mathematically speaking, is just multivariate regression analysis for time series or cross-sectional data (a minimal OLS sketch follows the references at the end of this comment). More &quot;modern&quot; analysis is probably using panel data [5] (combination of cross sectional and time series). Calculus, linear algebra, and differential equations should prepare one plenty for everything but panel data analysis. The real &quot;econ&quot; part is applying solid econ theory to the mathematics you&#x27;re using, a textbook will help [6]. For finance this is your bread and butter.<p>Game theory will apply a lot of different mathematical tools. You will need to love pure math. To really get into it requires pain or love. I like a healthy amount of both.<p>---------------------------------- The Second: Heterodox Economics ----------------------------------<p>So as it turns out, neoclassical economics is at most half of Economics. It&#x27;s really where the &quot;philosophy&quot; comes into play. You&#x27;re gonna need a quick history lesson to sort of see its subject matter. Economics really didn&#x27;t exist before... the 1500s. You can try to apply economics to earlier times but you could also just make shit up and post it to twitter. 
Both would be equally likely to contain truth.<p>Economics came into existence around the time the Dutch began developing trade routes (1550s). A byproduct of all this trade is tons of cash and goods-- currency (silver, metals, whatever) starts to actually be used in society (before that it was mostly just a status symbol). It pisses off a lot of _institutions_, most of all &quot;the church&quot; and monarchies, because money is allowing people to gain power. It&#x27;s usurping power from them. This is the rise of the &quot;merchant class&quot; and now thanks to money (trade really, but whatever, it&#x27;s complicated)-- people are liberating themselves from the social status they&#x27;re born into. Eventually modern republics appear, and governments form. Nations trading globally becomes more common (Dutch, English, Spanish) and we get to Adam Smith, David Ricardo [7], et al.<p>Now it&#x27;s the 1800s. People are seeing the birth and growth of capitalism, industry, corporations, and the tumultuous death of agrarian life. Now the way the &quot;common person&quot; lives their day is dramatically changing; for a few it was better, for most it was worse. Some economists begin to ask why we are replacing these now-defunct _institutions_ with equally shitty, or possibly shittier, ones. This more or less becomes the birth of heterodox economics, which largely studies the more abstract ideas like &quot;institutions&quot;; by its very nature the content tends to be philosophical.<p>By the 1920s heterodox economics is falling by the wayside. The content is less able to be tested like a physical science (i.e. no math&#x2F;stats); so, it&#x27;s treated like a misbegotten child... By the 1950s heterodox content was marginal at best-- the cold war and fear of communism made (makes) people insane. Economists pretty much had to be pro-capitalism or face being called &quot;commies&quot; and thrown in jail or, worse, being a narc in a witch hunt. This was more or less the nail in the coffin for mainstream heterodox economics (at least for research in the Occident). After the cold-war ended the nail got pulled out, but I wouldn&#x27;t say it&#x27;s really outta the coffin yet.<p>This book [8] isn&#x27;t great but it&#x27;s quickly digestible and will point you in the appropriate directions.<p>---------------------------------- ----------------------------------<p>Some Rambling to Finish<p>I&#x27;d highly recommend not just learning how to use the tools, but why we have them and where they came from. Economics is vastly deeper than the average person will ever know. That depth is greatly empowering and guiding when using its lenses to see and solve problems. One last thing, know there&#x27;s no going back, you will see the world differently.<p>[0] <a href="https:&#x2F;&#x2F;www.amazon.com&#x2F;Security-Analysis-Foreword-Buffett-Editions&#x2F;dp&#x2F;0071592539" rel="nofollow">https:&#x2F;&#x2F;www.amazon.com&#x2F;Security-Analysis-Foreword-Buffett-Ed...</a><p>[1] &quot;The Role of Monetary Policy.&quot; American Economic Review, Vol. 58, No. 1 (Mar., 1968), pp. 1–17 JSTOR presidential address to American Economics Association<p>[2] &quot;Inflation and Unemployment: Nobel lecture&quot;, 1977, Journal of Political Economy. Vol. 85, pp. 451–72. 
JSTOR<p>[3] <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;IS–LM_model" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;IS–LM_model</a><p>[4] <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;AD–AS_model" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;AD–AS_model</a><p>[5] The course I took on panel data, <a href="http:&#x2F;&#x2F;web.pdx.edu&#x2F;~crkl&#x2F;ec510&#x2F;ec510-PD.htm" rel="nofollow">http:&#x2F;&#x2F;web.pdx.edu&#x2F;~crkl&#x2F;ec510&#x2F;ec510-PD.htm</a><p>[6] <a href="https:&#x2F;&#x2F;www.amazon.com&#x2F;Using-Econometrics-Practical-Addison-Wesley-Economics&#x2F;dp&#x2F;0131367730" rel="nofollow">https:&#x2F;&#x2F;www.amazon.com&#x2F;Using-Econometrics-Practical-Addison-...</a><p>[7] He more or less invented trade theory (comparative advantage) <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;David_Ricardo" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;David_Ricardo</a><p>[8] <a href="https:&#x2F;&#x2F;www.amazon.com&#x2F;Age-Economist-9th-Daniel-Fusfeld&#x2F;dp&#x2F;0321088123" rel="nofollow">https:&#x2F;&#x2F;www.amazon.com&#x2F;Age-Economist-9th-Daniel-Fusfeld&#x2F;dp&#x2F;0...</a><p>Edit: for formatting.
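To make the &quot;econometrics is just multivariate regression&quot; point above concrete, here is a minimal OLS sketch on synthetic data (NumPy only; the coefficients are made up). Real econometrics work starts where this ends: standard errors, robustness checks, and panel structure.<p><pre><code>import numpy as np

# Synthetic data: y depends on two regressors plus noise.
rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=0.3, size=n)

# Design matrix with an intercept column; OLS solves min ||y - X b||^2.
X = np.column_stack([np.ones(n), x1, x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # roughly [1.0, 2.0, -0.5]
</code></pre>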
Ask HN: What are the best practises for using SSH keys?
&gt; Is it better to use a different passphrase on each key, or does using the same one not matter much?<p>Using a passphrase is <i>highly</i> recommended except for server-to-server accounts, which should be locked down (and specify the specific command that server can execute in the authorized_keys file - Userify[1] supports this).<p>You should <i>definitely</i> use a different passphrase for keys stored on separate computers, and it&#x27;s not a bad idea to use a different passphrase for separate keys stored on the same computer, especially if they have different servers they can access. However, practically speaking, if your computer was compromised (ie keylogger etc) then it&#x27;s game over anyway.<p>&gt; Does increasing the amount of bits in a key really have an effect on the security of the key, or does it not make much difference in a real-world use?<p>Yes, it does make a difference, depending on what you mean by &quot;real-world&quot;. Anyone less than a state-level actor will probably be unable to cost-effectively attack even a 1024 bit key, but that won&#x27;t be true for long. We suggest 2048 bit keys if you are using RSA, with 4096 if you prefer extra security and don&#x27;t mind slight latency during a connection, or ED25519 for keys on systems that support it. Generally the defaults are pretty good. We have a HOWTO for different OS&#x27;s here: <a href="https:&#x2F;&#x2F;userify.com&#x2F;docs&#x2F;generating-ssh-keys-on-ec2&#x2F;" rel="nofollow">https:&#x2F;&#x2F;userify.com&#x2F;docs&#x2F;generating-ssh-keys-on-ec2&#x2F;</a><p>&gt; How much less secure is it to not use a passphrase on a key?<p>From the server&#x27;s perspective, it&#x27;s EXACTLY the same, but from the client (your laptop&#x27;s) side, it&#x27;s completely different. While it&#x27;s possible that your laptop could still contain your decrypted key in its key manager&#x27;s RAM or suspended state (ie unencrypted swap file etc), the use of a passphrase even on (actually, ESPECIALLY on) a non-full-disk encrypted system will raise the level of effort to access your key to near-impossibility levels, especially from non-state actors, whereas a key that has NO passphrase is a piece of cake. Use a passphrase EVEN WITH full disk encryption (for example, the evil maid attack)<p>&gt; Should you use a different key per user account, per server, or per use-case (i.e. personal or work)?<p>If you&#x27;re using a different key and storing them on different computers, you should probably use a different passphrase on each key. The passphrase (or even if one exists) is not visible to remote servers (or Userify[1] - we provide a free-text field that becomes your authorized_keys on remote servers.)<p>You don&#x27;t need to use a different key per user account, although you can. You also should not use a different key per server.. that will turn into a management nightmare. It&#x27;s perfectly ok to use one key everywhere, but you should probably use a different key on your laptop and desktop, or if the keys have different levels of access (Userify[1] can automate that for you too).<p>&gt; How&#x2F;Where should private keys be stored on a device using them?<p>Ideally on a device using full-disk encryption, including swap and laptop suspend space, to prevent access to a decrypted key in RAM (you are using a passphrase, right?). 
However, FDE does not protect you from other compromises on your system (i.e., another user that gains escalation to root and installs a key logger), and does not protect against a compromise of your BIOS (i.e., Intel UEFI) or boot process (evil maid attack again).<p>&gt; What are some of the pros and cons from a security standpoint, and how may doing different things affect the usability of a key?<p>Keys are safer than certificates because there are less moving parts and no outside requirements for your internal CA or dependency on a CA that might go down. Keys can be a management nightmare at scale, but there is software to manage them (ie Userify[1], ManageEngine[2], BeyondTrust[3], ssh universal key manager[4], keybox[5] (free&#x2F;open source), etc). If you are doing a small project with few team members, you can also do management with Chef, Puppet, etc, or just by hand.<p>In terms of usability, a real key solution that manages keys across entire groups of servers with a few clicks can be really helpful... you can do all of the regular SSH things like tunneling (replace stun&#x2F;sslwrap, etc), proxying all of your other traffic (SOCKS5), keep SSH connection alive (autossh etc), smart ban based on failed attempts (fail2ban, deny hosts), forward encrypted X11 or VNC connections, forward SSH itself (tunnel SSH within itself), and so much more.<p>We&#x27;re going to start blogging about all the awesome things you can do with SSH soon, since it&#x27;s really an amazing and deep protocol.<p>1. Userify <a href="https:&#x2F;&#x2F;userify.com" rel="nofollow">https:&#x2F;&#x2F;userify.com</a> Free cloud and on-premises versions available; full disclosure: I work there<p>2. ManageEngine: <a href="https:&#x2F;&#x2F;www.manageengine.com&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.manageengine.com&#x2F;</a><p>3. BeyondTrust: <a href="https:&#x2F;&#x2F;www.beyondtrust.com&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.beyondtrust.com&#x2F;</a><p>4. SSH Universal Key Manager: <a href="http:&#x2F;&#x2F;www.ssh.com&#x2F;" rel="nofollow">http:&#x2F;&#x2F;www.ssh.com&#x2F;</a> (no TLS?)<p>5. Keybox <a href="http:&#x2F;&#x2F;sshkeybox.com&#x2F;" rel="nofollow">http:&#x2F;&#x2F;sshkeybox.com&#x2F;</a>
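To make the key-generation and lock-down advice above concrete, here is a small sketch. It shells out to ssh-keygen (the -t&#x2F;-a&#x2F;-f&#x2F;-C flags are standard, and ssh-keygen will prompt for the passphrase interactively) and then prints an example authorized_keys line with a forced command and forwarding disabled. The file name, comment, and backup.sh command are placeholders for illustration, not anything tied to a particular product.<p><pre><code>import subprocess
from pathlib import Path

# Example name; one key per machine/role, as discussed above.
key_path = Path.home() / ".ssh" / "id_ed25519_work"

# Generate an Ed25519 key; -a 100 raises the KDF rounds so a stolen,
# passphrase-protected private key is slower to brute-force.
subprocess.run(
    ["ssh-keygen", "-t", "ed25519", "-a", "100",
     "-f", str(key_path), "-C", "work-laptop-2016"],
    check=True,
)

# For an unattended server-to-server key, lock it down in authorized_keys
# with a forced command and no forwarding (pattern mentioned above; the
# command itself is just an example).
pubkey = key_path.with_suffix(".pub").read_text().strip()
line = ('command="/usr/local/bin/backup.sh",no-pty,no-agent-forwarding,'
        'no-port-forwarding,no-X11-forwarding ' + pubkey)
print(line)
</code></pre>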
Is Palantr discriminating against Asians?
Disclaimer: The plural of anecdote is anecdotes.<p>Context #1: I am Asian.<p>It was during intern recruiting season, at a campus recruiting fair, when I talked to Palantir. I felt the &#x27;interview&#x27; with the campus rep went well. In the ensuing weeks, I received &#x27;real&#x27; follow-up interviews from every other company present at the career fair that I made an active effort of applying for. By that, I mean researching the company&#x27;s financials, history, products, and culture, then writing catered personal statements and resumes. Every company except Palantir. I, being young and conspiratorial, just assumed Palantir, being an enigmatic blackbox analytics engine, ran my resume through a government-sponsored quantum computer and decided I wasn&#x27;t a fit. And that was okay.<p>Context #2: The CS department I attended is well-regarded. (If selectivity is any indicator, whether causally or correlational, the average underclassmen GPA to get admitted into the major is around a 3.8. US News ranks the department in the &#x27;top 10&#x27; CS programs, albeit that&#x27;s also a proxy since these rankings are based on graduate programs). Not to rest on these laurels, but my resume also includes things beyond school: student organizations, volunteering, personal projects, etc.<p>After college, I left my carefree internship days behind for a &#x27;real job&#x27; at a FinTech company doing BIG DATA and Data Science, wooooo. Again, I applied to Palantir.<p>Context #3: I&#x27;ve received job offers from big four tech companies, startups, as well as financial institutions in Chicago &#x2F; NY &#x2F; Hong Kong &#x2F; Singapore &#x2F; London. This is not intended to brag or show that I am exceptional. To the contrary, this reflects that I am the very antithesis of an aberration, insofar as I fit the job qualifications that employers seek.<p>At least I got an automated response from Palantir this time. Yay, I exist.<p><i>We regret to inform you that we do not have a position which currently matches your background and experience at this time</i><p>Let&#x27;s not presuppose Palantir uses a machine to select candidates. Assume Palantir has some statistical &#x2F; classification &#x2F; decision model in the abstract, which could be implemented in the form of human recruiters, company culture &#x2F; managerial, or an actual machine.<p>What would Palantir&#x27;s preference for the false-positive versus false-negative rate of this model be? I currently work at Google which is notorious for its high false negative rate (turning many suitable candidates away). Is Palantir&#x27;s false negative rate so high that I can land a job at Google and many other places, and not even get a phone screen at Palantir? Palantir makes software for counter-terrorism, surely they are cognizant of the implications of these Type I and Type II errors. Landing a job at company X and at company Y are not statistically independent events. So this outcome is just odd to me. Maybe I&#x27;m on a terrorist watch list?<p>Context #4: In my Software Engineering capstone during college, I was (assigned) to a group of 7 students, including myself. We were supposed to work on a quarter-long project. Everyone got along. Everyone contributed. Everyone was willing to compromise. Except for one person. This person contributed zero lines of code. They bossed everyone around. Where the rest of the team were vectors moving in relatively the same direction, this person was a drag force. 
This designated, errr, assigned &quot;leader&quot; constantly derailed the team&#x27;s direction. Halfway through the quarter, this person dropped the course and didn&#x27;t inform anyone on our team. I had to fill in for this person&#x27;s hand-wavy role (mostly bureaucratic requirements the course instructor set forth to make sure people were working: a student managing another student and writing reports about it....). I&#x27;ve worked with a hundred people over the years, but this was the first and only time I&#x27;ve met someone who added negative value. I was STUCK in a handful of courses with this person, as well. They would constantly ask selfish questions during lecture just to show off how smart they are. They cheated on the exams, too. This person went to work at Palantir right after college.<p>What features &#x2F; variables &#x2F; factors does Palantir select for in Software Engineers? CS Degree? A good school program? Relevant work experience? Last name? Treating me as a data point, there must be some other factor that far negatively outweighs the positive points on my resumes. Whatever this scarlet letter was, my irresponsible selfish classmate did not possess this. This classmate was not Asian (this isn&#x27;t too important of a detail to me, but I anticipate someone will ask a follow-up question on this). I&#x27;m not even that upset about the accusations of racial discrimination. I&#x27;m more personally irked by the fact that companies have this veil of stupidity in their hiring process where many qualified people do not make it through for whatever reason while dishonest, selfish, incompetent, irresponsible, non-team players slip through the HUGE cracks.
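For readers less familiar with the Type I&#x2F;Type II framing in the parent comment, here is a toy illustration with entirely made-up numbers of what a &quot;high false negative rate&quot; filter looks like: it almost never lets an unqualified candidate through, yet still rejects most of the qualified ones, which is perfectly consistent with strong candidates never getting a phone screen.<p><pre><code># Toy numbers only: out of 1,000 applicants, suppose 100 are genuinely
# qualified and the screen accepts 15 of them while wrongly accepting 5
# unqualified people.
qualified, unqualified = 100, 900
true_positives, false_positives = 15, 5

false_negatives = qualified - true_positives        # qualified people screened out
false_negative_rate = false_negatives / qualified   # Type II error rate
precision = true_positives / (true_positives + false_positives)

print(f"false negative rate: {false_negative_rate:.0%}")  # 85%
print(f"precision of the screen: {precision:.0%}")        # 75%
</code></pre>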
Ask HN: When are you considered a “senior” programmer?
You will never have enough time to learn everything and be all things to everyone. Get this idea out of your head right now or you will never find happiness.<p>Senior means keeping your eye on the big picture and helping to move your team forward in a timely manner to achieve the business objectives that drive the company forward. It&#x27;s the ability to step up and lead your team when called for. It&#x27;s the ability to make decisions balanced between what&#x27;s technically right in the short and longer term without losing sight of the end goal.<p>Never forget that you&#x27;re not paid to deliver software just for the sake of delivering amazing software. The software you deliver is a tool, a means to an end. That may be to cut costs, it may be to increase profits, it may be the lifeblood that your company&#x27;s stock price hangs on.<p>A junior developer may be amazing with the tools provided and may have some good architectural sense. They may need some, or a lot of, hand holding. A junior developer generally has their head in the code most of the time and may, but probably shouldn&#x27;t, be expected to understand or care about the objectives of the business as a whole. You give them a feature to develop and can largely expect that they will need all of the dependencies handed to them. They may have a good handle on debugging and unit, integration and functional testing, or this may be something they need to learn. This is OK.<p>An intermediate developer can be given objectives regarding code and architecture and left to their own devices and trusted to deliver on their objectives in a timely manner. By this time, you should expect them to at least understand the business objectives and be able to think critically about the code they&#x27;re providing in order to meet those objectives. I would expect an intermediate developer to have enough of a clue about architecture that, handed a feature requirement and some architectural direction for how to integrate it, they could architect it competently and integrate it, and know where to go to ensure any dependencies are satisfied. They will have a good handle on debugging and at least unit and integration testing. They may have a good handle on functional testing and debugging production code.<p>A senior developer is someone, in my mind, who can be trusted with the business objectives, can chase down architectural advice from an architect, UX input, or whatever else they need to get the job done; they can communicate effectively with stakeholders and the business; they can be expected to dig in and fill any gaps that would prevent delivery or cause problems in production. They can delegate pieces appropriately and deliver what is expected in the allotted time frame. They may be someone who can step up as team lead&#x2F;team manager, or lead from the back and be the glue that gives the team cohesion. They can be expected to have the discipline to take care of things properly when nobody is watching. They can be expected to help debug production issues and be among the first to muck in when the shit hits the fan to help resolve production issues.<p>So you see, the difference between junior, intermediate and senior doesn&#x27;t have an awful lot to do with code or tools. You will be expected to either be or become a master of your tools whether junior, intermediate or senior. You will be expected to do this on the fly, on the job, regardless of everything else that is going on around you. This is part of being in this industry. 
You will be expected to keep up with the codebase and dig in and understand it at whatever level you&#x27;re at. These are all prerequisites for your job as a developer; they are not prerequisites for your title. There&#x27;s a big difference.<p>If you want to make the jump from junior to senior quickly, here&#x27;s my advice: Find the gnarliest, most difficult problems your company is having and dig in and help solve them consistently. Put yourself through the wringer; suffer the late nights, the stress, the anguish about whether or not you&#x27;ve got what it takes to do this job. Do this until you get to a point where you think you&#x27;ve seen every last problem that could possibly occur, and despite that, something else hits you out of left field and knocks you clean off your feet. Do this until, when this happens, you just get back up and keep going. When you get knocked down and get back up when everyone else would say fuck it, when you can be trusted to make shit happen when everyone else would say fuck it - this is when you can call yourself a senior developer.<p>&quot;Out of the 39 000 men and women that make up the United States Coast Guard there are only 280 rescue swimmers. This is because we are the Coast Guard&#x27;s elite. We are the best of the best. When storms shut down entire ports, we go out. When hurricanes ground the United States Navy, we go out. And when the holy Lord himself reaches down from heaven and destroys his good work with winds that rip houses off the ground, We. Go. Out.&quot; - Ben Randall, The Guardian<p>Live by example.
The terrorist inside my husband's brain
I&#x27;m seeing any number of themes in this piece that call out for discussion, only a few of which are being picked up here.<p><i>Mental health, stigma, betrayal, and volition</i><p>The whole nature of mental health and stigmatisation runs deep in contemporary society and this article. Even with acknowledged issues, Williams likely hid the most troublesome symptom, hallucinations, from his wife and others.<p>Unlike physical disease or injury, which can be considered happening to us or our containers -- bodies -- disease of the mind <i>fundamentally affects our very ideas of identity and perception.</i> When a person&#x27;s responses to the world change, when their recollection of events turns unreliable, when their response to the present becomes chaotic, when they themselves cannot trust the messages of their own senses, you&#x27;re diving into some very deep, dark waters. Interacting with, caring for, and living with the mentally ill is exceptionally taxing. Norms of social behavior fail to exist, and the least interaction can become both a trial of comprehension and a battle of wills (though not necessarily). And patterns which were once firmly established change, by the week, sometimes by the day or hour.<p>This is a reason that the role of primary caregiver is such a tremendously challenging one.<p>The response of others, including medical professionals, is also taxing. Normal expectations of volition and will simply do not apply. When there&#x27;s an organic, chemical, or pathological underpinning to behavior, it&#x27;s not simply a matter of &quot;just try harder&quot; or &quot;you&#x27;re smart and capable&quot; -- to the point that comments suggesting this themselves become tremendously painful.<p><i>Celebrities and disease</i><p>For better or worse, a characteristic of fame and celebrity is that they focus attention. Susan Schneider Williams&#x27;s essay on her celebrity husband Robin Williams&#x27;s encounter with a rare, difficult to diagnose, and profoundly destructive disease is a case in point.<p>There&#x27;s a tension at HN over whether or not authors or personalities matter, are relevant, or should be disclosed. I feel rather strongly that they do. HN management disagree. There&#x27;s a recent discussion of that here:<p><a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=12573874" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=12573874</a><p>The fact that this story concerns Robin Williams, famous and beloved comedian and actor, is salient <i>if only because it means that he received care, diagnostic, and autopsy attention that few other patients would receive.</i> If <i>not</i> for his fame and affluence, this would be just another tragic death, likely by suicide and depression. Instead, we&#x27;ve a deeper understanding of the real mechanisms at play.<p>The story has similarities to the Irvine &quot;PTA mom&quot; story -- a drugs bust turned into a story of framing and false accusations. But for particulars of place and social status, <i>that</i> story could have had a very different ending.<p><a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=12616118" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=12616118</a><p><i>Disease as metaphor</i><p>The inability to rely on established norms, prior patterns, experiences, and personality is where I see the titular concept coming into play. 
<i>The condition here violates both the patient&#x27;s and the author&#x27;s fundamental trust in the Universe.</i> Robin Williams couldn&#x27;t trust his own senses, and was, literally, losing that which was most central to any of us: his mind. Susan was losing her friend, partner, and husband to something she couldn&#x27;t see, couldn&#x27;t name, didn&#x27;t understand, and couldn&#x27;t combat. I cannot think of a better description of terror than that: to be threatened by an omnipresent, invisible, awesomely powerful, and hugely destructive enemy, with no sense of when or how it would strike next, and no effective means to defend against it.<p><i>Systems, understanding, and response</i><p>There&#x27;s a thread here about the failure of modern medicine, and perhaps the US healthcare system specifically, to address sufficiently complex and systemic conditions. Again I&#x27;m disappointed in much of the HN follow-up, which incorrectly interprets @guelo&#x27;s comments as being specific to programming. They are not.<p>The problem is a general one: our perceptions -- both our &quot;five senses&quot;[1] and those extended through technically-mediated, extended, or created sensing capabilities -- only inform us of <i>very</i> topical conditions. It&#x27;s up to the diagnostician to draw deeper inferences.<p>As I commented on the linked thread, perversely, the deeper and more complex our understanding and knowledge, the greater the tendency toward <i>non-systemic</i> thinking, or at least of creating a loose flying swarm of individual specialists, none of whom has a big-picture view. The roots are numerous (taking a systemic view of non-systemic vision): education, specialisation, compensation, healthcare administration, research, drugs and therapy development, and more. The result is having to ride herd on providers to ensure that the full patient is being considered, not just some interesting subsystem behavior.<p><a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=12620044" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=12620044</a><p><i>Understanding vs. cure</i><p>There is, finally, the problem that understanding is a <i>possible route</i> to a cure, but is neither sufficient nor necessary. There are treatments which have worked without understanding (salicylic acid, from willow bark, against headache, and citrus, against scurvy, as two historical examples), and there are cases in which additional information remains stubbornly ineffective in producing effective treatment.<p>A good friend of mine died some 25 years ago from a condition which was then rare, poorly understood, difficult to diagnose, and stubbornly resistant to treatment. A quarter century of medical advance has rewritten that sentence only very slightly: the specific chromosomal nature of the condition is now understood, and a genetic test could identify the gene transposition triggering the condition (though not the triggers of that transposition, yet). So to that extent, the condition is better understood.<p>It remains only poorly treatable, with many cases having a prognosis of 50% to 90% mortality, and the specific therapies date to the 1970s, 1960s, and 1950s, or before, with little if any change. One&#x27;s views of medical advances can be somewhat coloured by such experiences, and what I&#x27;ve observed is that much of what&#x27;s proclaimed to be improvements in medicine can be broken down into two general mechanisms:<p>1. 
Improvements in baseline medical care available to all.<p>2. Specific and frequently very highly targeted advances. These can be tremendously beneficial, within those narrow areas, but as with complex keys, the locks fitted are frequently few in number.<p>There are exceptions and potential exceptions. Broad-spectrum antibiotics and development of vaccinations both provided tools to address a wide range of threats. Gene sequencing and synthesis, and stem cell treatments, offer some promise of broad new areas of therapeutic mechanism. In large part though, genetic medicine has been more diagnostic than therapeutic.<p>What understanding of mechanism <i>does</i> allow though is twofold.<p>First, having a known enemy, one who can be faced and seen, removes a significant element of the dread of the <i>unknown assailant</i>, which can offer some comfort.[2] Even if the result is no net curative medical therapy, the path becomes known, and perhaps mechanisms for symptomatic treatment or palliative care.<p>The hope, of course, is that knowing the <i>cause</i> one may focus on the <i>cure</i>, or at least, to borrow from the military metaphor, counterattack. That&#x27;s not certain, but it is a possibility.<p>Another element, tying in with the notion of systemic approach, above, is the thought, when faced with some set of phenomena, a complex of symptoms, of considering &quot;what is the possible common underlying element here?&quot; Again, treatment of independent symptoms by specialists tends to draw away from this, but a reasonable thought, not just in medical circumstances, is: supposing we <i>did</i> have a deeper understanding of this, or more complete diagnostics, what then could we do <i>or could we hope to achieve</i>?<p>________________________________<p>Notes:<p>1. There are actually significantly more than five, though the convention &quot;five senses&quot; of sight, hearing, smell, taste, and touch, persists. A good general text on perceptual psychology makes fascinating reading.<p>2. There&#x27;s a surprisingly relevant concept from Adam Smith&#x27;s <i>Wealth of Nations</i>. Looking up his use of the word &quot;invisible&quot;, I found <i>two</i> mentions. One is the greatly misrepresented &quot;invisible hand&quot;. The other though refers to the &quot;invisible death&quot; faced by combatants in modern (that is, gunpowder) warfare:<p>&quot;the noise of firearms, the smoke, and the invisible death to which every man feels himself every moment exposed as soon as he comes within cannon-shot, and frequently a long time before the battle can be well said to be engaged, must render it very difficult to maintain any considerable degree of this regularity, order, and prompt obedience, even in the beginning of a modern battle. In an ancient battle there was no noise but what arose from the human voice; there was no smoke, there was no invisible cause of wounds or death. Every man, till some mortal weapon actually did approach him, saw clearly that no such weapon was near him.... In these circumstances...it must have been a good deal less difficult to preserve some degree of regularity and order.&quot;<p>Which is to say, Smith here is addressing specifically the terror of facing an unseen, unpredictable, and deadly threat.<p><a href="https:&#x2F;&#x2F;en.m.wikisource.org&#x2F;wiki&#x2F;The_Wealth_of_Nations&#x2F;Book_V&#x2F;Chapter_1" rel="nofollow">https:&#x2F;&#x2F;en.m.wikisource.org&#x2F;wiki&#x2F;The_Wealth_of_Nations&#x2F;Book_...</a>
We're Sequencing Every Member of the Weirdest Bird Species on Earth
The kākāpō truly is an eccentric creature - in the 80s Douglas Adams and zoologist Mark Carwardine went out to find one for the BBC radio documentary series &quot;Last Chance to See&quot;. The chapter in its accompanying book is called &quot;Heartbeats in the night&quot; - even more fun to read than watching the documentary[1] Carwardine did with Stephen Fry some 20 years later:<p>.. when eventually European settlers arrived and brought cats and dogs and stoats and possums with them, a lot of New Zealand&#x27;s flightless birds were suddenly waddling for their lives. The kiwis, the takahes - and the old night parrots, the kakapos.<p>Of these the kakapo is the strangest. Well, I suppose the penguin is a pretty peculiar kind of creature when you think about it, but it&#x27;s quite a robust kind of peculiarness, and the bird is perfectly well adapted to the world in which it finds itself, in a way that the kakapo is not. The kakapo is a bird out of time. If you look one in its large, round, greeny-brown face, it has a look of serenely innocent incomprehension that makes you want to hug it and tell it that everything will be all right, though you know that it probably will not be.<p>It is an extremely fat bird. A good-sized adult will weigh about six or seven pounds, and its wings are just about good for waggling a bit if it thinks it&#x27;s about to trip over something - but flying is completely out of the question. Sadly, however, it seems that not only has the kakapo forgotten how to fly, but it has also forgotten that it has forgotten how to fly. Apparently a seriously worried kakapo will sometimes run up a tree and jump out of it, whereupon it flies like a brick and lands in a graceless heap on the ground.<p>By and large, though, the kakapo has never learnt to worry. It&#x27;s never had anything much to worry about.<p>Most birds, faced with a predator, will at least realise that something&#x27;s up and make a bolt for safety, even if it means abandoning any eggs or chicks in its nest - but not the kakapo. Its reaction when confronted with a predator is that it simply doesn&#x27;t know what the form is. It has no conception of the idea that anything could possibly want to hurt it, so it tends just to sit on its nest in a state of complete confusion and leaves the other animal to make the next move - which is usually a fairly swift and final one.<p>It&#x27;s frustrating to think of the difference that language would make. The millennia crawl by pretty bloody slowly while natural selection sifts its way obliviously through generation after generation, favouring the odd aberrant kakapo that&#x27;s a little twitchier than its contemporaries till the species as a whole finally gets the idea. It would all be cut short in a moment if one of them could say, &#x27;When you see one of those things with whiskers and little bitey teeth, run like hell.&#x27; On the other hand, human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.<p>The trouble is that this predator business has all happened rather suddenly in New Zealand, and by the time nature starts to select in favour of slightly more nervous and fleet-footed kakapos, there won&#x27;t be any left at all, unless deliberate human intervention can protect them from what they can&#x27;t deal with themselves. It would help if there were plenty of them being born, but this brings us on to more problems. 
The kakapo is a solitary creature: it doesn&#x27;t like other animals. It doesn&#x27;t even like the company of other kakapos. One conservation worker we met said he sometimes wondered if the mating call of the male didn&#x27;t actively repel the female, which is the sort of biological absurdity you otherwise only find in discotheques. The ways in which it goes about mating are wonderfully bizarre, extraordinarily long drawn out and almost totally ineffective.<p>Here&#x27;s what they do:<p>The male kakapo builds himself a track and bowl system, which is simply a roughly dug shallow depression in the earth, with one or two pathways leading through the undergrowth towards it. The only thing that distinguishes the tracks from those that would be made by any other animal blundering its way about is that the vegetation on either side of them is rather precisely clipped.<p>The kakapo is looking for good acoustics when he does this, so the track and bowl system will often be sited against a rock facing out across a valley, and when the mating season arrives he sits in his bowl and booms.<p>This is an extraordinary performance. He puffs out two enormous air sacs on either side of his chest, sinks his head down into them and starts to make what he feels are sexy grunting noises. These noises gradually descend in pitch, resonate in his two air sacs and reverberate through the night air, filling the valleys for miles around with the eerie sound of an immense heart beating in the night.<p>The booming noise is deep, very deep, just on the threshold of what you can actually hear and what you can feel. This means that it carries for very great distances, but that you can&#x27;t tell where it&#x27;s coming from. If you&#x27;re familiar with certain types of stereo set-up, you&#x27;ll know that you can get an additional speaker called a sub-woofer which carries only the bass frequencies and which you can, in theory, stick anywhere in the room, even behind the sofa. The principle is the same - you can&#x27;t tell where the bass sound is coming from.<p>The female kakapo can&#x27;t tell where the booming is coming from either, which is something of a shortcoming in a mating call. `Come and get me!&#x27; `Where are you?&#x27; &#x27;Come and get me!&#x27; &#x27;Where the hell are you?&#x27; `Come and get me!&#x27; `Look, do you want me to come or not?&#x27; `Come and get me!&#x27; &#x27;Oh, for heaven&#x27;s sake.&#x27; `Come and get me!&#x27; &#x27;Go and stuff yourself,&#x27; is roughly how it would go in human terms.<p>[1] <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=wrmi5UV6-wk" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=wrmi5UV6-wk</a>
A list of all Android permissions
At my company (MixRank), we do static analysis on apps for business intelligence. We&#x27;ve collected lots of data on permissions, SDKs, integrations, frameworks, etc. If anyone has any interesting queries they&#x27;d like me to run, I can give it a shot.<p>Here&#x27;s the most popular permissions we&#x27;ve identified:<p><pre><code> permission | apps | pct_apps ------------------------------------------------------------+---------+---------- android.permission.INTERNET | 2607544 | 92.90 android.permission.ACCESS_NETWORK_STATE | 2332429 | 83.10 android.permission.READ_EXTERNAL_STORAGE | 1707550 | 60.83 android.permission.WRITE_EXTERNAL_STORAGE | 1696178 | 60.43 android.permission.READ_PHONE_STATE | 1154963 | 41.15 android.permission.WAKE_LOCK | 989176 | 35.24 android.permission.ACCESS_WIFI_STATE | 944675 | 33.65 android.permission.ACCESS_FINE_LOCATION | 798122 | 28.43 android.permission.ACCESS_COARSE_LOCATION | 770878 | 27.46 android.permission.VIBRATE | 734366 | 26.16 com.google.android.c2dm.permission.RECEIVE | 687083 | 24.48 android.permission.GET_ACCOUNTS | 676143 | 24.09 android.permission.CAMERA | 436581 | 15.55 android.permission.RECEIVE_BOOT_COMPLETED | 429658 | 15.30 android.permission.RECORD_AUDIO | 243793 | 8.68 android.permission.GET_TASKS | 235717 | 8.39 android.permission.CALL_PHONE | 218079 | 7.76 com.android.vending.BILLING | 200927 | 7.15 android.permission.READ_CONTACTS | 190848 | 6.79 com.google.android.providers.gsf.permission.READ_GSERVICES | 188162 | 6.70 android.permission.SYSTEM_ALERT_WINDOW | 176363 | 6.28 com.android.launcher.permission.INSTALL_SHORTCUT | 164649 | 5.86 android.permission.SET_WALLPAPER | 156845 | 5.58 android.permission.ACCESS_LOCATION_EXTRA_COMMANDS | 131898 | 4.69 android.permission.WRITE_SETTINGS | 122783 | 4.37 android.permission.USE_CREDENTIALS | 110675 | 3.94 android.permission.BLUETOOTH | 106272 | 3.78 android.permission.MODIFY_AUDIO_SETTINGS | 102882 | 3.66 android.permission.SEND_SMS | 97892 | 3.48 android.permission.WRITE_CONTACTS | 97585 | 3.47 com.android.browser.permission.READ_HISTORY_BOOKMARKS | 97147 | 3.46 android.permission.CHANGE_WIFI_STATE | 97074 | 3.45 android.permission.READ_CALL_LOG | 90001 | 3.20 com.android.vending.CHECK_LICENSE | 79279 | 2.82 android.permission.BLUETOOTH_ADMIN | 78199 | 2.78 android.permission.FLASHLIGHT | 74952 | 2.67 android.permission.RECEIVE_SMS | 73898 | 2.63 android.permission.BROADCAST_STICKY | 67092 | 2.39 android.permission.DISABLE_KEYGUARD | 66887 | 2.38 com.android.browser.permission.WRITE_HISTORY_BOOKMARKS | 65765 | 2.34 android.permission.READ_CALENDAR | 61989 | 2.20 android.permission.WRITE_CALENDAR | 61327 | 2.18 android.permission.READ_LOGS | 60059 | 2.13</code></pre>
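Not the actual pipeline we run, but a minimal sketch of the sort of per-app extraction this kind of static analysis starts from, assuming the APK&#x27;s manifest has already been decoded to plain XML (e.g. with apktool); the file path is illustrative only:<p><pre><code>
import xml.etree.ElementTree as ET

# Attributes such as android:name live in the Android XML namespace.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def declared_permissions(manifest_path):
    # Collect the name attribute of every uses-permission element.
    root = ET.parse(manifest_path).getroot()
    return sorted(
        elem.attrib.get(ANDROID_NS + "name", "")
        for elem in root.iter("uses-permission")
    )

print(declared_permissions("decoded/AndroidManifest.xml"))
</code></pre>
Aggregating those per-app lists over a large corpus (count apps per permission, divide by the total number of apps) is what produces a table like the one above.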
Sam Altman’s Manifest Destiny
Interesting article giving more of an inside look at the organization and its people. Some choice quotes and comments from a random dude on the internet:<p>&gt; After conferring with the accelerator’s sixteen other partners, Altman launched an initiative to support startups even earlier in their life span, and a fund to continue investing in them as they grow.<p>&gt; “Sam said, ‘Take all the “M”s and make them “B”s.’ ”<p>Badass. Reminded me of &quot;Drop the &#x27;The&#x27;&quot;.<p>&gt; A 2012 study of North American accelerators found that almost half of them had failed to produce a single startup that went on to raise venture funding.<p>It&#x27;s very much a &quot;me-too&quot; endeavor, almost political at the universities local to where I live, aimed at appearing to &quot;foster innovation&quot; and justifying huge continued government loans for their &quot;customers&quot;.<p>&gt; YC provides instant entrée to Silicon Valley—a community that, despite its meritocratic rhetoric, typically requires a “warm intro” from a colleague, who is usually a white man.<p>Yes the meritocratic aspect of networking is far overblown. Intros or bust in my experience, unless someone gives you a lucky break. If they do, be grateful and be damn sure to impress that person.<p>&gt; All the early arrivals at the party were men; the batch’s female founders were attending a presentation on the challenges of being a female founder.<p>Anyone else find this kind of ironic and counterproductive?<p>&gt; “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”<p>This is funny; he almost sounds like one of those awful Trump supporters! :P<p>&gt; In 2012, he and the other founders sold the company for forty-three million dollars—a negative return for their V.C.s.<p>This isn&#x27;t as bad as it sounds. Losing a few million dollars vs. around $50M is a huge difference. In fact I&#x27;d say it&#x27;s harder to salvage a failing company than exit a successful one.<p>&gt; “you want to invest in messy, somewhat broken companies. You can treat the warts on top, and because of the warts the company will be hugely underpriced.”<p>Value investing, absolutely. I tend to apply this principle on a smaller scale, to individual people. For example, I&#x27;ve had the opportunity to hire and work with extremely intelligent and hardworking people who simply lacked some basic English skills, and had shit cover letters. But who cares if they can program a robotic system the way I need.<p>&gt; “hard things are actually easier than easy things. Because people feel it’s interesting, they want to help. Another mobile app? You get an eye roll. A rocket company? Everyone wants to go to space.”<p>So incredibly true. I have a startup with very few resources but no problem recruiting because the field is exciting hard tech.<p>&gt; “Most people do too many things. Do a few things relentlessly.”<p>The best advice for life success, hands down.<p>&gt; “What’s it looking for?” I asked Altman. “I have no idea,” he replied. “That’s the unsettling thing about neural networks—you have no idea what they’re doing, and they can’t tell you.”<p>Well I find this response - which someone in the domain would tell you isn&#x27;t even correct - to be unsettling. Yes, we can see what lots of neural networks are doing, and understand it [but you often need a PhD].<p>&gt; “Sam’s program for the world is anchored by ideas, not people,” Peter Thiel said. 
“And that’s what makes it powerful—because it doesn’t immediately get derailed by questions of popularity.”<p>But &quot;Why do these fuckers get to decide what happens to me?&quot;<p>&gt; Paul Graham cheerfully acknowledged that, by instilling message discipline, “we help the bad founders look indistinguishable from the good ones.”<p>This would make me a little unsettled as an external investor but I understand the incentives and dynamics at play.<p>&gt; So we used accounts protected, a number that showed roughly thirty-per-cent growth through the course of YC—and about forty per cent of the accounts were YC companies. It was a perfect fairy-tale story.”<p>Yeah this never made too much sense to me. As an investor I&#x27;d be wary of a startup using alumni connections to secure a large portion of their customers, because that eventually runs out - meaning the acquisition strategy is still essentially untested. As a founder though, I really like this strategy and would readily replicate it.<p>&gt; The truth is that rapid growth over a long period is rare, that the repeated innovation required to sustain it is nearly impossible, and that certain kinds of uncontrollable growth turn out to be cancers.<p>I forgot exactly where I learned this, but I like the story of &quot;multiple S curves&quot; over a single &quot;J&quot; curve. Businesses innovate and attack new markets, launch new product lines, secure new customer segments, etc.<p>&gt; Peter Thiel, the forty-eight-year-old libertarian who co-founded PayPal and Palantir, secretly funded the lawsuit that drove Gawker Media into bankruptcy, and has sought to extend his life span by taking human growth hormone.<p>Damn so is Thiel gonna get jacked? This makes me feel less bad about wanting to do a steroid cycle when I&#x27;m older.<p>&gt; The fear is that YC will soon provide cradle-to-I.P.O. funding for so many top startups that it will put a lot of V.C.s out of business.<p>Well this is why I find them remarkably pro-founder: they are creating competition at the VC level, even if only at the later stages.<p>&gt; two tech billionaires have gone so far as to secretly engage scientists to work on breaking us out of the simulation.<p>Who knew, even billionaires get high and watch The Matrix.<p>&gt; recently announced a nonpartisan project, called VotePlz, aimed at getting out the youth vote.<p>Encouraging youth voting will absolutely be partisan, let&#x27;s not kid ourselves here. Why not just pay people to vote, or give them deals at retail stores? I&#x27;m sure these things will happen.<p>&gt; As I considered this, he said that he’d sacrifice a hundred thousand. I told him that my own tally would be even larger. “It’s a bug,” he declared, unconsoled.<p>It&#x27;s a feature.<p>&gt; the cost of a great life comes way down. If we get fusion to work and electricity is free, then transportation is substantially cheaper, and the cost of electricity flows through to water and food.<p>Here&#x27;s my controversial theory: a &quot;great life&quot; only exists relative to others. Once all needs are provided for and people don&#x27;t need each other (and here&#x27;s the real controversial idea: most women won&#x27;t need most men, something that I would understandably expect Altman to miss), they won&#x27;t be incentivized to create and maintain strong family units. We won&#x27;t love like we used to. 
But maybe they see this as a &quot;bug&quot; too.<p>Maybe I&#x27;m wrong and a few centuries of tech will solve the problems of human nature created by a few million years of evolution.<p>&gt; More generally, he observed, “The missing circuit in my brain, the circuit that would make me care what people think about me, is a real gift. Most people want to be accepted, so they won’t take risks that could make them look crazy—which actually makes them wildly miscalculate risk.”<p>A gift and a curse that I wrestle with constantly.<p>&gt; “At the end of his life, when he may have been somewhat senile, he did also say that it should all be sunk to the bottom of the ocean. There’s something worth thinking about in there.”<p>This was a great read and it made me think a lot; it was a pleasure.
Industry Concerns about TLS 1.3
Here&#x27;s the exchange, I found this hard to read without word-wrapping:<p><pre><code> &gt; On 22 Sep 2016, at 20:27, BITS Security &lt;BITSSecurity at fsroundtable.org&gt; wrote: &gt; &gt; To: IETF TLS 1.3 Working Group Members &gt; &gt; My name is Andrew Kennedy and I work at BITS, the technology policy &gt; division of the Financial Services Roundtable &gt; (http:&#x2F;&#x2F;www.fsroundtable.org&#x2F;bits). My organization represents &gt; approximately 100 of the top 150 US-based financial services &gt; companies including banks, insurance, consumer finance, and asset &gt; management firms. &gt; &gt; I manage the Technology Cybersecurity Program, a CISO-driven forum &gt; to investigate emerging technologies; integrate capabilities into &gt; member operations; and advocate member, sector, cross-sector, and &gt; private-public collaboration. &gt; &gt; While I am aware and on the whole supportive of the significant &gt; contributions to internet security this important working group has &gt; made in the last few years I recently learned of a proposed change &gt; that would affect many of my organization&#x27;s member institutions: the &gt; deprecation of RSA key exchange. &gt; &gt; Deprecation of the RSA key exchange in TLS 1.3 will cause &gt; significant problems for financial institutions, almost all of whom &gt; are running TLS internally and have significant, security-critical &gt; investments in out-of-band TLS decryption. &gt; &gt; Like many enterprises, financial institutions depend upon the &gt; ability to decrypt TLS traffic to implement data loss protection, &gt; intrusion detection and prevention, malware detection, packet &gt; capture and analysis, and DDoS mitigation. Unlike some other &gt; businesses, financial institutions also rely upon TLS traffic &gt; decryption to implement fraud monitoring and surveillance of &gt; supervised employees. The products which support these capabilities &gt; will need to be replaced or substantially redesigned at significant &gt; cost and loss of scalability to continue to support the &gt; functionality financial institutions and their regulators require. &gt; &gt; The impact on supervision will be particularly severe. Financial &gt; institutions are required by law to store communications of certain &gt; employees (including broker&#x2F;dealers) in a form that ensures that &gt; they can be retrieved and read in case an investigation into &gt; improper behavior is initiated. The regulations which require &gt; retention of supervised employee communications initially focused on &gt; physical and electronic mail, but now extend to many other forms of &gt; communication including instant message, social media, and &gt; collaboration applications. All of these communications channels &gt; are protected using TLS. &gt; &gt; The impact on network diagnostics and troubleshooting will also be &gt; serious. TLS decryption of network packet traces is required when &gt; troubleshooting difficult problems in order to follow a transaction &gt; through multiple layers of infrastructure and isolate the fault &gt; domain. The pervasive visibility offered by out-of-band TLS &gt; decryption can&#x27;t be replaced by MITM infrastructure or by endpoint &gt; diagnostics. The result of losing this TLS visibility will be &gt; unacceptable outage times as support groups resort to guesswork on &gt; difficult problems. 
&gt; &gt; Although TLS 1.3 has been designed to meet the evolving security &gt; needs of the Internet, it is vital to recognize that TLS is also &gt; being run extensively inside the firewall by private enterprises, &gt; particularly those that are heavily regulated. Furthermore, as more &gt; applications move off of the desktop and into web browsers and &gt; mobile applications, dependence on TLS is increasing. &gt; &gt; Eventually, either security vulnerabilities in TLS 1.2, deprecation &gt; of TLS 1.2 by major browser vendors, or changes to regulatory &gt; standards will force these enterprises - including financial &gt; institutions - to upgrade to TLS 1.3. It is vital to financial &gt; institutions and to their customers and regulators that these &gt; institutions be able to maintain both security and regulatory &gt; compliance during and after the transition from TLS 1.2 to TLS 1.3. &gt; &gt; At the current time viable TLS 1.3-compliant solutions to problems &gt; like DLP, NIDS&#x2F;NIPS, PCAP, DDoS mitigation, malware detection, and &gt; monitoring of regulated employee communications appear to be &gt; immature or nonexistent. There are serious cost, scalability, and &gt; security concerns with all of the currently proposed alternatives to &gt; the existing out-of-band TLS decryption architecture: &gt; &gt; - End point monitoring: This technique does not replace the &gt; pervasive network visibility that private enterprises will lose &gt; without the RSA key exchange. Ensuring that every endpoint has a &gt; monitoring agent installed and functioning at all times is vastly &gt; more complex than ensuring that a network traffic inspection &gt; appliance is present and functioning. In the case of monitoring &gt; of supervised employee communications, moving the monitoring &gt; function to the endpoint raises new security concerns focusing on &gt; deliberate circumvention - because in the supervision use case &gt; the threat vector is the possessor of the endpoint. &gt; &gt; - Exporting of ephemeral keys: This solution has scalability and &gt; security problems on large, busy servers where it is not possible &gt; to know ahead of time which session is going to be the important &gt; one. &gt; &gt; - Man-in-the-middle: This solution adds significant latency, key &gt; management complexity, and production risk at each of the needed &gt; monitoring layers. &gt; &gt; Until the critical concerns surrounding enterprise security, &gt; employee supervision, and network troubleshooting are addressed as &gt; effectively as internet MITM and surveillance threats have been, we, &gt; on behalf of our members, are asking the TLS 1.3 Working Group to &gt; delay Last Call until a workable and scalable solution is identified &gt; and vetted, and ultimately adopted into the standard by the TLS 1.3 &gt; Working Group. &gt; &gt; Sincerely, &gt; &gt; Andrew Kennedy &gt; Senior Program Manager, BITS </code></pre> The reply:<p><pre><code> To: BITS Security &lt;BITSSecurity at fsroundtable.org&gt; Subject: Re: [TLS] Industry Concerns about TLS 1.3 From: &quot;Paterson, Kenny&quot; &lt;Kenny.Paterson at rhul.ac.uk&gt; Date: Thu, 22 Sep 2016 19:14:25 +0000 [...] Hi Andrew, My view concerning your request: no. Rationale: We&#x27;re trying to build a more secure internet. Meta-level comment: You&#x27;re a bit late to the party. We&#x27;re metaphorically speaking at the stage of emptying the ash trays and hunting for the not quite empty beer cans. 
More exactly, we are at draft 15 and RSA key transport disappeared from the spec about a dozen drafts ago. I know the banking industry is usually a bit slow off the mark, but this takes the biscuit. Cheers, Kenny</code></pre>
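Worth noting on the &#x27;exporting of ephemeral keys&#x27; alternative mentioned in the letter: this already exists in the form of NSS-style key log files, which tools like Wireshark can use for out-of-band decryption of captured traffic, with exactly the scalability caveats the letter raises. A minimal client-side sketch in Python 3.8+ (host name and log path are illustrative only):<p><pre><code>
import socket
import ssl

# Write per-session TLS secrets in NSS key log format; a packet capture of
# this connection can then be decrypted offline with the log file.
ctx = ssl.create_default_context()
ctx.keylog_filename = "/tmp/tls-keys.log"

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.version(), len(tls.recv(4096)))
</code></pre>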
Ask HN: What should I learn as my first programming language?
I&#x27;m going to be blunt, almost all the comments thus far suggesting a specific language are wrong. This happens everywhere, every time someone asks this question. I would instead pay much more attention to anyone speaking generally about problem solving and learning. I&#x27;ve learned countless programming languages and eventually you&#x27;ll get to the point where learning a new one is just a matter of spending some time with it on a project. You&#x27;re not married to any language ever unless you&#x27;re a terrible programmer, so you can always switch to something else and at most, you&#x27;ve lost time.<p>My suggestion would be to pick a problem, analyze the problem deeper to see if it&#x27;s even worth solving, then proceed from there. That alone is a huge subject, but it&#x27;s really what you need to be comfortable with if your goal is to be a programmer. If you simply want to have a language or two as tools (ex: scientists), then sure, dive right in, but you will forever be stuck at a certain level if you approach things that way unless you are really lucky or really brilliant (most of us are not). Otherwise, take the time on the problem you want to solve, first and foremost, every time. The problem generally (obviously some exceptions) has little to do with language, computers, or anything technical, even if the problem is actually focused on a computing topic. If you still want to solve the problem after doing things like figuring out who&#x2F;what&#x2F;where&#x2F;when&#x2F;why, then try to pick the language(s) and tools that best fit the problem. Unfortunately it is hard to do this without having programmed in a language yet, but you can at least do some basic research, reading, and so on to find the best tool for the job.<p>You will fail in this your first time, and probably your first ten or even 20 times. Most programmers fail in this their whole careers, so do not get discouraged. The best you can do is learn from your mistakes when trying to solve problems with technology, and try to do better the next time. Even if you fail to pick the &quot;best&quot; tool for a job, making a mistake will teach you so much and you&#x27;ll build on this going forward. This also means you can always change languages if you think it won&#x27;t work for your next project or you just want to learn something new. Getting things done, learning patience, solving problems, critical thinking, balancing everything, and countless other non-technical things are really what it&#x27;s all about and learning them will keep you going. Language choice can save you lots of time, lead to better results, and much more, but it often is just a distraction for the average problem.<p>You mentioned you wanted to make a game. I would say 99% of programmers I have ever met say this and very few of them have gone on to make games at all, and even less, professionally. Saying you want to program a game first is like saying your first race will be an ultra-marathon with no training. I am not trying to discourage you, rather I am trying to be real and help you think about what goals you want to set for yourself. For reference, yes, my first program was a game, but this was a long time ago and I&#x27;d say it&#x27;s a miracle I continued.<p>I have worked on games and game engines professionally and would be happy to discuss more things related to the learning process. 
To avoid being &quot;that guy&quot; who never gives any specifics and is just a contrarian, I will tell you the honest truth as it has been for a long time in game programming -- If you want to be at all decent at game programming, you must learn C and C++, and eventually at least a decent understanding of assembler would really help you. Anyone who tells you otherwise is wasting your time, lying, and&#x2F;or has never written a game (why this is so would be a huge post). You can be productive and write good games in any language, but if you are anything like most of us, one day you will want to write a game that is not just a toy, text-based, or a glorified web page. There&#x27;s nothing wrong with those games if that&#x27;s your thing, but I&#x27;ve rarely met anyone who has the long-term goal of writing those games except for spammers, crazy people, or unscrupulous people (ex: Zynga). Again, that&#x27;s not to say your game has to be written in C or C++ (you can write amazing games in C# using Unity for instance), but having command over those languages still teaches you so much about how a computer works, how to optimize games, and will save you when eventually you&#x27;ll want to communicate with things written in those languages and not treat them like a magic black-box. For the goal of being a game programmer, it&#x27;s a good use of time even if you never write anything professionally in C or C++. It&#x27;s less about the mechanics of C or C++ and more about learning how things work at lower-levels and expanding the size of your &quot;safe&quot; zone mentally. Likewise, you&#x27;ll want to learn quite a bit of math for most game programming, be it matrices, algebra, geometry, calculus, diff eq, etc. So you may not even want to learn C or C++ as your first language, and that&#x27;s fine, just know that long-term if you are serious about games, you will have to do that. The message is that you need to focus on computer science skills, not language specifics in order to work towards that goal, even when working in other languages. In the process of doing this, you will learn a lot more about what I said earlier - picking the right tool for the job, which means not just languages, but libraries, editors, formats, algorithms, etc.<p>Building a game, at least remotely well, requires quite a lot of skills that are far beyond a new programmer&#x27;s abilities and it is nearly impossible to master them all in a lifetime. You need to learn the basics first and to understand your limitations and resources - you won&#x27;t be building a AAA or even a nice looking indie game for a long time unless you are truly exceptional, lucky, or have a team of people doing it mostly for you. At best, you&#x27;ll produce something that on the surface works, but your programming ability will suffer and if you eventually do work with competent people, they will think you are terrible or hate you. That said, building a game will require you to learn so many areas and thus teach you more than almost anything the best way - hands on. It&#x27;s a balance of getting things done and doing them right, and knowing where, when, and how to cross that line is vital. When you first start, you won&#x27;t even likely have any ability to filter out all the information out there that is wrong, will waste your time, is bad design&#x2F;BS&#x2F;awful. 
Some programmers never learn this, but this too is hugely important.<p>I strongly advise you to see if programming is something you even want to do first, figure out what you like to do and where your deficiencies are, and work from there to figure out what your role would be in game dev, where you need help, how to judge others, etc. A game can be a motivator to learn more and teach you things along the way, but it can also drive you away, overwhelm you, teach you bad habits that you will never shake, and generally lead you astray. As such, just writing simpler things that solve real problems might get you there faster and sharpen your skills better along the way, while ensuring you don&#x27;t throw up your hands. You might be different, but that&#x27;s the pattern I&#x27;ve seen in most people. Pick a reasonable, small problem you care about, match the tools, go from there, repeat, potentially with new languages after you get comfortable programming for a while.
Would You Take an 8% Pay Cut to Work from Home?
Absolutely not. I&#x27;ve advised everyone looking for remote work to <i>never</i> undercut their salary or treat the remote job as a &quot;perk&quot;[0].<p>There are downsides to working at home -- disconnection from the team (if the team is not entirely remote), additional pressure to be deliberate about social behavior, the elimination of the lines between &quot;work&quot; and &quot;home&quot;, having to keep yourself in check to make sure you don&#x27;t end up just &quot;working all of the time&quot;, and the perception that working from home <i>isn&#x27;t working at all</i>, which is often the case when an employer thinks that they can reduce a person&#x27;s salary if they offer them the opportunity to work from home.<p>The problem with accepting a lower salary is that you&#x27;re opening yourself up to a working environment where the bosses <i>believe</i> they&#x27;re doing you a favor and will expect more in return than just that salary. There&#x27;s a risk that they will respect the work you do <i>less</i> or assume you&#x27;re <i>on a perpetual vacation</i> because you&#x27;re working from a home office rather than a building they&#x27;ve paid for. Your work and contribution should speak for themselves and your management should understand that it takes a special (and valuable) skill set to work from home successfully. By treating it as a perk, these facts are ignored.<p>After 7 years of working in an office, 4 of working split between home and office and 7 working exclusively from home, I can say from my own, anecdotal, experience that working from home makes me <i>far more effective</i> at the job I&#x27;m tasked with. To start, switching to working at home eliminated a 2-hour round-trip commute. That time didn&#x27;t all get shifted to work, but at least half of it did. Because I work from home, I have the most effective tools for doing my job with me at all times. When I was in the office, I had two workstations that I did most of my work on and a laptop that I rarely used except when at home. Working in development, I tended to have moments of inspiration hit at non-work times (9:00 PM) and when that&#x27;d happen, I&#x27;d look at the laptop and run through in my mind all of the things I&#x27;d have to set up to be able to take advantage of that inspiration and would often think &quot;yeah, I&#x27;ll just wait until the morning&quot;. And by morning, that inspiration had expired. Because I do my entire job at home, I can take advantage of this inspiration without any friction, now. I&#x27;m currently employed by a team that is entirely located in the UK. One of the reasons I was hired was because I&#x27;m located in a time-zone that matches many of our customers. This benefits my employer, which makes a pay-cut inappropriate.<p>In the office, you&#x27;ve got an unofficial social pressure to keep the same schedule as your coworkers: 8-5 with a noonish lunch, usually. There are several &quot;personal things&quot; that still can&#x27;t be done over the internet today[1]. These places often have a day or two a week to accommodate day-job folks, but you&#x27;ll be waiting in line with <i>all of the others</i> about five times longer than you would if you picked a more ideal time[2].<p>All of this equates to <i>more time available to work</i> and I probably work an average of 50 hours&#x2F;week without even trying. The difference is that at home it <i>feels</i> like a 40-hour work week and sometimes it feels like far less than that. 
To my <i>family</i> it feels like a <i>much</i> shorter work-week because I&#x27;m home for them. I can sit in the living room and work while my kids play. I can be there when the little things come up. My kids will tell you &quot;daddy&#x27;s always there&quot; because ... well ... I am! They can even sit in on a conference call and get to know what it&#x27;s like to have a career.<p>This is not to say that there aren&#x27;t downsides for the employer. There&#x27;s less collaboration[3] and collaboration-intensive work may suffer as a result, especially if the work-from-home employees don&#x27;t have the required skill set. It&#x27;s also too easy to focus on those downsides. It&#x27;s better to look at it as a trade-off. Lower facilities costs, more work time, greater &quot;focused time&quot;, and (probably) happier employees. Employee happiness can equate to better work product and there are few things that will have a bigger impact on employee happiness than work-time&#x2F;location flexibility provided it&#x27;s managed properly. Management <i>has to be better</i>, too. No longer can you rely on &quot;butts-in-chairs&quot; management where a person&#x27;s contribution is determined solely by how frequently you peer in and see that person working. Effectiveness has to be measured by project completion and more concrete variables.<p>[0] There&#x27;s one case where this doesn&#x27;t fit - cost of living. If you&#x27;re taking a job in New York and you live in South Dakota, I wouldn&#x27;t expect to be making the same amount of money as someone who is required to find a place to live in NYC and accept the accompanying expenses. This is one of the biggest perks to <i>accepting</i> remote employees -- the ability to hire in a cheaper part of the country than where &quot;the office&quot; resides, which comes with many of the same benefits (and fewer of the downsides) of off-shoring to a cheaper-labour country. I also work in software development, which I believe is one of the (many) jobs that can be done remotely with little sacrifice on the side of the employer and many benefits for both employer and employee. It probably doesn&#x27;t fit similarly for many other jobs.<p>[1] Some driver&#x27;s license related things, dropping off equipment at Comcast&#x2F;Verizon&#x27;s hell-hole, almost anything to do with a mortgage, that missed package that needed to be signed for requiring you to go to the Post Office, UPS or FedEx, doctor&#x27;s office or lab work needing to be done and on and on.<p>[2] I&#x27;ve learned 1:45-2:30 is the sweet spot - the bank teller has no lines, the mortgage broker has nobody in their chairs, the salon will let you walk in without an appointment and see you immediately and even the DMV (Secretary of State in MI) will have about 5 people in front of you. Service is also better (particularly at the DMV) because the staff isn&#x27;t stressed -- often having just returned from lunch.<p>[3] It&#x27;s hard to argue that you&#x27;d get the same level of collaboration having entirely remote staff as you do when there&#x27;s a shared, common office. But given the right tools, collaboration doesn&#x27;t have to be a problem. There&#x27;s also the issue of over-collaboration. There are times when <i>I need to keep my head down and be able to work on complex code without interruption</i>. When you have a shared office -- especially some of these &quot;Agile-type&quot; offices where there&#x27;s no separation between employees -- the office culture can become one of interruptions.
My Self-Hosted Life
FTFA:<p>For those that know me, I’ve made no secret of the fact that I believe that you are better off doing something yourself than outsourcing the task to someone else, especially in areas that you are interested in or have some expertise. For me this has particular value in the case of my computing. As a result, I have taken the decision to self-host as much of my online services as possible, rather than relying on the cloud (since that’s just someone else’s computer). I’ve been working on this for years (actually the whole time this blog has been dark and before) and at this stage I’m mostly there: almost all of my digital life is provided by Open Source software, running under my control.<p>This post will detail what I’m using and how it all fits together. I’m not going to go into technical specifics since otherwise this post would be huge; perhaps I’ll focus on some of that in future posts (feel free to make requests in the comments). Also, please note that my setup is by no means finished and probably never will be; it’s an ongoing project and it has become pretty much my main hobby to install and maintain this stuff.<p><i>In the Cloud</i><p>I’m going to start right here, with this blog, since that was where the whole thing really started. This blog existed well before my undertaking to self-host. In the early days it lived on a shared hosting plan provided by Dreamhost. The site has always run WordPress; although I’ve toyed with the idea of moving to a static site over the years, I’ve just never quite managed it. In 2011 I moved the site to a shiny new VPS provided by Linode, where it has lived ever since. There is also a Piwik install for tracking website stats (which I’ve blogged about before).<p>The main motivation behind the VPS was to install and configure my own mail server setup, something which I ranted about shortly after. This setup has been serving myself and various family members well since then, with really very little maintenance on my part (almost everything is automated).<p>There have been various other uses for the VPS over time, many of which haven’t stuck. Probably the most successful has been an installation of TT-RSS, which started life on my home server and at some point moved to the VPS for convenience of access. I’ve also dabbled with various chat applications, mainly XMPP based, but they’ve never really been that useful due to the network effect of no-one else using them! At this stage email has become my primary form of communication.<p>You might say that this is a bit of a cop out, since this all runs on a virtual machine, which itself runs on someone else’s computer. I would agree; however, it’s a nice middle ground between going all out with your own servers and running everything in the cloud. To me the reality that the VPS is in the cloud is obscured by the ability to control every detail of its running software. It’s also pretty nice for services which I want to be reliable, since Linode almost never skips a beat.<p><i>At Home</i><p>So the VPS is one thing and is really used for critical services or stuff that needs to be accessible to the wider Internet (like this site), but the real magic happens on my home servers (yes, there is more than one). My main server (now on its second hardware iteration) started life as a MythTV system and still does a great job in this respect. Many other services have been added over time, such as an MQTT broker (mosquitto), git server (gitolite+gitweb), a calendar&#x2F;contacts server (Radicale) and file synchronisation (Syncthing). 
At some point I also switched out the MythTV frontend and replaced it with XBMC (now Kodi).<p>In the last couple of years I’ve been moving further down the home automation route, rather than just sensing and logging via MQTT. I’ve finally settled on Home Assistant as my automation controller and UI, along with an instance of Node-RED to do some miscellaneous processing. This all runs on the main server, with a Raspberry Pi 2 in the garage functioning as what I like to call ‘the gateway’ (it has a couple of radios and some sensors connected and runs another instance of Node-RED to shuttle this data to MQTT). In addition I have my home CCTV set up using a couple of webcams and MotionEye. One of the cameras is located remotely and connected to another Raspberry Pi (this time an old model B) and streams back to the main server with mjpg-streamer.<p>I also run a pfSense-based firewall to protect my network and provide remote VPN access. This runs on an old netbook with an extra USB ethernet adapter. The internal network is partitioned using VLANs to provide a separate firewalled subnet for the home automation gear, some of which is cheap Chinese stuff which needs to be forcibly prevented from talking to the cloud. The networking gear consists of two TP-Link routers, flashed with OpenWRT, which provides nice VLAN support. These have been configured to just provide switching and wireless access points and delegate all the firewalling, DNS and DHCP stuff to the firewall.<p>Within the last year or so I’ve been working on streamlining the management of all of this. The principal focus of this has been monitoring all the services I’ve got running. For this I’ve settled on Nagios, which I run in a separate VM hosted on the main home server. Although complex to set up, I can’t speak highly enough of Nagios: it’s brilliant and it saves me so much time just by knowing what is going on on my network. Email notifications from Nagios of course go via my own mail server! I’ve also played around with collectd, InfluxDB and Grafana for performance graphing, although I’ve yet to deploy this to everything.<p>Conclusion and The Future<p>So that was a (probably non-exhaustive) list of my self-hosting activities. I’m sure I’ve probably forgotten many things and of course there are the huge amounts of supporting software that I haven’t mentioned. As I said, I’m now at the stage where this meets almost all my computing needs, although there are a few areas where I want to improve.<p>The main thing is automating and persisting my configuration, since I’m still mostly doing things manually. For this I’ve settled on a combination of Ansible and Docker. I’ve played extensively with both but haven’t really made much progress with deploying them for much more than testing purposes.<p>I’m also constantly evaluating new software to fill gaps in my ecosystem. I’m currently looking at Rocket.Chat and Hubot to provide a chat based interface for remote administration, but don’t have a usable system yet. I’m also toying with the idea of a Gitlab server to replace the gitolite+gitweb system and to utilise the CI in my automation strategy, but I’ve heard it requires a fair bit in terms of resources (incidentally gitlab.com is really the only 3rd party service I heavily use).<p>That I am able to do this at all is a testament to the power of Free and Open Source software and cheap commodity hardware.
I find it pretty awesome to think that almost every interaction I have online utilises my own infrastructure and that it works tirelessly for me 24&#x2F;7.<p>I’m only just getting started documenting my setup here; for instance, this post hasn’t touched on any of the client applications I use on my phone and desktop machines. I’m also going to do some more technical posts on various aspects as time goes on, so please stay tuned (or even subscribe to the RSS feed or mailing list!).
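Purely as an illustration of the ‘gateway’ pattern described above (not something from the post itself), here is a minimal Python sketch of a Raspberry Pi publishing a sensor reading to a mosquitto broker over MQTT, the kind of message that Node-RED or Home Assistant would then pick up. The broker hostname, the topic layout and the read_temperature() helper are assumed placeholders, not details of the setup described.<p><pre><code># Minimal sketch (untested): a Pi "gateway" pushing one reading per minute
# to a mosquitto broker. Hostname, topic and read_temperature() are
# illustrative assumptions only.
import json
import time

import paho.mqtt.publish as publish

BROKER = "homeserver.lan"                 # assumed mosquitto host
TOPIC = "sensors/garage/temperature"      # assumed topic layout


def read_temperature() -> float:
    """Placeholder for whatever radio/sensor is actually wired to the Pi."""
    return 21.5


while True:
    payload = json.dumps({"value": read_temperature(), "ts": time.time()})
    publish.single(TOPIC, payload, hostname=BROKER)  # one-shot publish
    time.sleep(60)
</code></pre>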
Experts said Arctic sea ice would melt entirely – they were wrong
Something seems a bit off in this reporting. From this 2016 article at The Telegraph:<p>&gt; Dire predictions that the Arctic would be devoid of sea ice by September this year have proven to be unfounded after latest satellite images showed there is far more now than in 2012.<p>&gt; Scientists such as Prof Peter Wadhams, of Cambridge University, and Prof Wieslaw Maslowski, of the Naval Postgraduate School in Monterey, California, have regularly forecast the loss of ice by 2016, which has been widely reported by the BBC and other media outlets.<p>Yet from a 2009 article at The Telegraph, at <a href="http:&#x2F;&#x2F;www.telegraph.co.uk&#x2F;news&#x2F;earth&#x2F;copenhagen-climate-change-confe&#x2F;6815470&#x2F;Copenhagen-climate-summit-Al-Gore-condemned-over-Arctic-ice-melting-prediction.html" rel="nofollow">http:&#x2F;&#x2F;www.telegraph.co.uk&#x2F;news&#x2F;earth&#x2F;copenhagen-climate-cha...</a> :<p>&gt; Speaking at the Copenhagen climate change summit, Mr Gore said new computer modelling suggests there is a 75 per cent chance of the entire polar ice cap melting during the summertime by 2014.<p>&gt; However, he faced embarrassment last night after Dr Wieslav Maslowski, the climatologist whose work the prediction was based on, refuted his claims.<p>&gt; Dr Maslowski, of the Naval Postgraduate School in Monterey, California, told The Times: “It’s unclear to me how this figure was arrived at.<p>&gt; “I would never try to estimate likelihood at anything as exact as this.” ...<p>&gt; Dr Maslowki said that his latest results give a six-year projection for the melting of 80 per cent of the ice, but he said he expects some ice to remain beyond 2020.<p>&gt; He added: “I was very explicit that we were talking about near-ice-free conditions and not completely ice-free conditions in the northern ocean.”<p>Of course, that was in 2009. Going back to this 2016 article:<p>&gt; The view was supported by Prof Maslowski, who in 2013 published a paper in the Annual Review of Earth and Planetary Sciences also claiming that the Arctic would be ice-free by 2016, plus or minus three years.<p>Yet the 2012 Maslowski paper - yes, 2012, not 2013 - doi:10.1146&#x2F;annurev-earth-042711-105345 says (emphasis mine):<p>&gt; Given the estimated trend and the volume estimate for October–November of 2007 at less than 9,000 km^3 (Kwok et al. 2009), one can project that at this rate it would take only 9 more years or until 2016 ± 3 years to reach a nearly ice-free Arctic Ocean in summer. Regardless of high uncertainty associated with such an estimate, it does provide <i>a lower bound of the time range for projections of seasonal sea ice cover</i><p>This isn&#x27;t quite a prediction that the &quot;Arctic would be ice-free by 2016, plus or minus three years&quot;, though I think the difference is more a quibble than anything else.<p>In a BBC article from 2011 ( <a href="http:&#x2F;&#x2F;www.bbc.com&#x2F;news&#x2F;science-environment-13002706" rel="nofollow">http:&#x2F;&#x2F;www.bbc.com&#x2F;news&#x2F;science-environment-13002706</a> ), it sounds like Maslowski&#x27;s prediction for ice loss is one of the earliest in the Arctic ice modelling field, and notable for being so early:<p>&gt; The original prediction, made in 2007, gained Wieslaw Maslowski&#x27;s team a deal of criticism from some of their peers.
...<p>&gt; &quot;[Maslowski&#x27;s] is quite a good model, one thing it has is really high resolution, it can capture details that are lost in global climate models,&quot; he said.<p>&gt; &quot;But 2019 is only eight years away; there&#x27;s been modelling showing that [likely dates are around] 2040&#x2F;50, and I&#x27;d still lean towards that.<p>&gt; &quot;I&#x27;d be very surprised if it&#x27;s 2013 - I wouldn&#x27;t be totally surprised if it&#x27;s 2019.&quot;<p>So while &quot;experts said&quot; is true, it wasn&#x27;t &quot;most of the experts&quot;. Even this Telegraph piece quotes an expert who says there are experts “who have the summer sea ice remaining until late this century, which is quite impossible.” It&#x27;s as if there was a wide range of uncertainty.<p>I was also thrown by the comment:<p>&gt; It is the latest example of experts making alarming predictions which do not come to pass. Earlier this week environmentalists were accused of misleading the public about the &quot;Great Pacific Garbage Patch&quot; after aerial shots proved there was no &quot;island of rubbish&quot; in the middle of the ocean. Likewise, warnings that the hole in the ozone layer would never close were debunked in June.<p>Who said there was an &#x27;island of rubbish&#x27;? The popular articles I&#x27;ve read have all used phrasing like that in the Wikipedia article at <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Great_Pacific_garbage_patch" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Great_Pacific_garbage_patch</a> :<p>&gt; The patch is characterized by exceptionally high relative concentrations of pelagic plastics, chemical sludge and other debris that have been trapped by the currents of the North Pacific Gyre.[2] Because of its large area, it is of very low density (4 particles per cubic meter), and therefore not visible from satellite photography, nor even necessarily to casual boaters or divers in the area. It consists primarily of a small increase in suspended, often microscopic, particles in the upper water column.<p>It feels like saying the Sargasso Sea is actually not a floating graveyard of ships caught in the seaweed, when that&#x27;s never been the case.<p>Similarly, there wasn&#x27;t a general belief that the hole in the ozone layer would never close. Quoting now from <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Ozone_depletion" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Ozone_depletion</a> :<p>&gt; The 2010 report found, &quot;Over the past decade, global ozone and ozone in the Arctic and Antarctic regions is no longer decreasing but is not yet increasing. The ozone layer outside the Polar regions is projected to recover to its pre-1980 levels some time before the middle of this century. In contrast, the springtime ozone hole over the Antarctic is expected to recover much later.&quot;<p>Who was warning in 2016 that the hole would never close?
Virtualize OS X on Linux
<pre><code>Bringing machine &#x27;default&#x27; up with &#x27;virtualbox&#x27; provider...
==&gt; default: Box &#x27;AndrewDryga&#x2F;vagrant-box-osx&#x27; could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: &gt;= 0
==&gt; default: Loading metadata for box &#x27;AndrewDryga&#x2F;vagrant-box-osx&#x27;
    default: URL: https:&#x2F;&#x2F;atlas.hashicorp.com&#x2F;AndrewDryga&#x2F;vagrant-box-osx
==&gt; default: Adding box &#x27;AndrewDryga&#x2F;vagrant-box-osx&#x27; (v0.2.1) for provider: virtualbox
    default: Downloading: https:&#x2F;&#x2F;atlas.hashicorp.com&#x2F;AndrewDryga&#x2F;boxes&#x2F;vagrant-box-osx&#x2F;versions&#x2F;0.2.1&#x2F;providers&#x2F;virtualbox.box
An error occurred while downloading the remote file. The error message, if any, is reproduced below. Please fix this error and try again.
</code></pre><p>The requested URL returned error: 500 Internal Server Error
Maths becomes biology's magic number
I&#x27;m currently a medical student, I double majored in mathematics and biochemistry, and fell in love with programming and computer science after doing a physics research internship during undergrad (living with an MIT computer scientist helped too haha).<p>I agree largely with @GarrisonPrime&#x27;s thoughts regarding the systematic nature of biological systems. I also share the same sense of trying to understand the logic and intuition of the physiology I learned in the first two years of medical school while many of my peers were just trying to guzzle and regurgitate. I have to admit that I also fell into that mode as well at times, just due to the volume of material that was expected. But that&#x27;s for another time.<p>I&#x27;m currently taking a year to work on research that is in systems biology&#x2F;bioinformatics. While there are many things that I like about it, and I&#x27;m grateful that it presents an opportunity for me to continue learning about computer science, machine learning (been really getting into learning about Bayesian analysis this year) and biology, I have to admit that this article sounds like it was written from the same vantage point that I stood on a couple years ago as I just started getting into this area of research.<p>The technology we have today to probe cellular systems is amazing and was literally the stuff of science fiction some 20 years ago, but it&#x27;s not without its faults. This line from the article especially rang true to how I feel these days:<p>&quot;But there&#x27;s a problem. The vast data sets that give bioinformatics its power are also its Achilles heel.&quot;<p>The problem is that the systems that biologists and bioinformaticists are most interested in are dynamic, with complex regulatory mechanisms that we don&#x27;t have ways of measuring, and most methods of measurement either completely destroy the system or alter its dynamics. In addition, it&#x27;s akin to taking a snapshot of how the system is behaving at one instant in time or under one condition. Yet many times we are asked to use that information in a way that&#x27;s akin to trying to describe the dynamics of an entire motion picture from 2 or 3 photos. And those photos are greyscale. Take for example mRNA-sequencing, a type of data that I work with frequently. It&#x27;s trying to measure the amount of gene product that a cell or cells have at one point in time (basically trying to get a measure of how much geneX the cell is trying to produce). While it is an interesting measure and can give some insight into how the cell may be adapting to different conditions, those measurements alone tell us almost nothing about the regulation behind those differences, which is the thing we really want to understand. It&#x27;s a bit like seeing oil on top of water and then trying to infer the complex dynamics of geophysics that are occurring on the ocean floor. I&#x27;m not saying that it&#x27;s not useful at all, and it can help direct your attention to the next interesting thing, but I think that many people overestimate how informative the data is. And then there are still a lot of technical issues, but that is a discussion for another day.<p>The other main point I want to make is that for all the data you think we have now about these biological systems, it&#x27;s like a snowflake on top of the iceberg.
Even many of these large consortium projects like ENCODE have relatively small amounts of information if you want to learn about some transcription factor or cell type that isn&#x27;t one of the top 10 most well known or studied. And how many of those datasets out there are really lacking in good quality control? Then there is the politics of sharing data in an academic&#x2F;research environment that is so competitive that getting a job (one you will have to continue to work like a madman&#x2F;woman at) is like winning the lottery.<p>OK, I don&#x27;t want this to descend into a full blown rant. Main points - it&#x27;s still really exciting, and it&#x27;s a great time to have intersecting interests in medicine, math and computer science. It&#x27;s just that the tech we have to work with right now is still a bit nascent and expensive. I think there will be a point where systems bio and machine learning will revolutionize how we understand biology. We&#x27;re just not quite there yet.<p>On a side note - where I do see a lot of potential right now for computer science and machine learning to start to make an impact is more on the clinical side of medicine, using ML to learn from the vast stores of EMR data. But that&#x27;s also another discussion for another day. If you read this far, here is a smiley face, and have a nice weekend :)
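To make the mRNA-seq point concrete, here is a toy sketch with made-up numbers (not real data): all a single expression snapshot gives you is relative transcript abundance per condition and a fold change; nothing in it identifies the regulation that produced the difference.<p><pre><code># Toy illustration: an expression "snapshot" reduced to its essentials.
# Gene names and counts below are invented for the example.
import numpy as np

genes = ["geneX", "geneY", "geneZ"]
counts_ctrl = np.array([500.0, 1200.0, 50.0])    # raw read counts, control
counts_treat = np.array([1500.0, 1100.0, 10.0])  # raw read counts, treated

# Library-size normalisation to counts-per-million (CPM)
cpm_ctrl = counts_ctrl / counts_ctrl.sum() * 1e6
cpm_treat = counts_treat / counts_treat.sum() * 1e6

# log2 fold change with a pseudocount to avoid division by zero
log2fc = np.log2((cpm_treat + 1.0) / (cpm_ctrl + 1.0))

for gene, fc in zip(genes, log2fc):
    print(f"{gene}: log2 fold change = {fc:+.2f}")
# A positive value says geneX transcripts are relatively more abundant after
# treatment; it says nothing about which regulator caused the change.
</code></pre>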
The Ops Identity Crisis
This was a good read. As a DevOps Engineer who&#x27;s also a Tech Lead at a company that does not have a distinct Ops department, I have some thoughts I&#x27;d like to share.<p>First, while the ultimate goal of any engineer (even one not among the Ops disciplines) should be to automate yourself out of a job, we have seen time and again that it is impossible to do so, as any good engineer will continue to advance the state of the art. Consequently, there is no &quot;finish line&quot; for operations that will not be obsolete within 3 years. The concern that you&#x27;ll just have to migrate across organizations, reaching the &quot;finish line,&quot; rinsing and repeating is a non-issue. The notion of a finish line is really sugarcoated FUD. (The author&#x27;s interesting thought experiment alludes to this and does refute the argument, so hopefully my statements simply complement the article in that regard. I call it a thought experiment because we will never arrive at this &quot;you automated everything&quot; goal.)<p>I absolutely agree with the author that we <i>do not really need ops engineers.</i> We <i>do</i>, however, need <i>specific disciplines of software engineering</i>. Specifically, I recommend The Systems Engineering Side of Site Reliability Engineering[1], as well as the book Site Reliability Engineering[2] from Google. The USENIX article in particular describes three distinct disciplines of software development: systems engineering, site reliability engineering and software engineering. The <i>individuals</i> behind the roles have little to do with the roles themselves (rather, the causal chain is the other way around); it&#x27;s often misunderstood, for whatever reason, that software engineers and operations engineers have different skillsets because of who they are. This is true, but it does not mean that a software engineer cannot, in short order relative to individuals with no software background whatsoever, transition into an operations role, or vice-versa. Orthogonally, identifying individuals with the skills in any of these three disciplines is critical to placing them in work that is personally and professionally rewarding, as well as more valuable to the organization than if they were placed in some other discipline. And sometimes, individuals do not even know of these disciplines or, for whatever reason, think they are suited for a discipline that they are not actually best at. I was one of these people (a software engineer before moving into operations). In essence, what I&#x27;m trying to demonstrate here is that these disciplines of software development are permanent (or have generations-length longevity) and we should not be concerned with being replaced or becoming obsolete. Indeed, it is the <i>specific tasks</i> that will change over time. Consider, for example, electrical engineers. We do not anticipate that EE&#x27;s will be replaced by robots. Despite robots automating the process of manufacturing circuits, EE&#x27;s will always be invaluable and irreplaceable. However, their specific responsibilities will change over time. This is why I said before that advancing the state of the art results in new work (or even new types of work). Finally -- and this is just a bonus -- any experience acquired by, say, an EE will be useful even if he or she transitions to a new discipline of engineering.
In my experience, the best software engineers I have ever known have understood CPU architecture, memory models, networking protocols, configuration management, etc. in remarkable depth.<p>&gt; there is practically no difference between a software engineer and an operations engineer - they both build and run their own systems - other than the domain: the software engineer builds and runs the top-level applications, while the (ex-)operations engineer builds and runs the infrastructure and tooling underneath the applications.<p>The above statement from the article&#x27;s thought experiment vaguely describes two (of the three) software engineering disciplines that Hixson[1] talks about. Operations engineers and software engineers alike are, in this thought experiment, responsible for leveraging their expertise and talent (understand that I use the word talent according to the definition described by Hixson) at maximum efficiency. The manifestation of these disciplines is reflected in their <i>domain</i>, but the individual tasks themselves are only relevant today and will change tomorrow. The third discipline not described here (systems engineering) is very much relevant and deals specifically with the interactions among systems, which neither operations nor software engineers will focus on (or necessarily have significant talent in). Later in the article, the author sort of blends SRE (site reliability engineers) and SE (systems engineers) together. The distinction isn&#x27;t important for the author to make her point, but I wanted to highlight it a little bit.<p>Second, I think the author describes an environment that strongly reflects the ideals of the DevOps movement. From my reading, I&#x27;m inferring that the author is aligned with these ideals. I consider this a big selling point if I ever wanted to consider Uber as a place of employment. As some other comments here on HN have noted: it is extremely rare and difficult to find an organization that has embraced DevOps principles with such purity. I&#x27;m fortunate to be employed at one of them (not Uber), and it sounds like Uber has made some good decisions as an organization in this regard. (Hopefully this paragraph can be to the benefit of any employment-seeking operations engineers. The statements in the article reflect positively on Uber, particularly if you are trying to move from a traditional operations role to a DevOps&#x2F;SE&#x2F;SRE role.)<p>Third, the article does a great job refuting the 3 identified arguments. In general, I can&#x27;t agree more! The author takes the time to consider the merit of each argument and qualify the conditions under which they are true before refuting them, which makes it much easier to read for someone coming from a more traditional organization. From my biased perspective, I don&#x27;t even give these arguments the light of day and refute them without thinking twice about the qualifications that can alter their accuracy; so, one takeaway for me from the article has been to <i>not</i> make the assumption that these arguments are being made by like-minded individuals.
It&#x27;s quite likely that I&#x27;m too hard on people for bringing up concerns like these and, as a result, not open to new (old) ideas.<p>My final thought on the article is that, while it&#x27;s not really news in most of the social circles I spend my time with (as a byproduct of having learned much of what I know from stellar colleagues in a great work environment -- not because of any personal accomplishment), I really appreciate that the author took the time to write out these thoughts and publish them so that the broader software community can grow and adopt ideals that move our industry forward in a very positive, very significant way. So thanks to the author, and to aberoham who posted the link here on HN!<p>[1] <a href="https:&#x2F;&#x2F;www.usenix.org&#x2F;publications&#x2F;login&#x2F;june15&#x2F;hixson" rel="nofollow">https:&#x2F;&#x2F;www.usenix.org&#x2F;publications&#x2F;login&#x2F;june15&#x2F;hixson</a><p>[2] <a href="https:&#x2F;&#x2F;landing.google.com&#x2F;sre&#x2F;book.html" rel="nofollow">https:&#x2F;&#x2F;landing.google.com&#x2F;sre&#x2F;book.html</a>
Ask HN: Why do you like programming?
When I was younger, I taught myself how to program in Visual Basic 3.0. I had tried to learn, but I just wasn&#x27;t getting it. I would open it up and just stare at the screen for hours, clicking and messing around with everything. One night, I went to sleep, had a dream about the code, woke up, hopped on the computer, and wrote my first program. Sure, it was a program like a random 8-ball generator. You asked it a question and you got an answer, but it was the start of something.<p>Anyways, when I turned about 19 years old, I lost interest completely and left the programming world. I became interested in psychology and studying people, and that is what I went to school for and loved, hoping to help people understand their purpose and their lives, find meaning in life, and make money doing it.<p>After I graduated, I went to teach English in another country for a year, and after my visa was up, I returned home, broke, just having barely been able to pay for my overweight bag. I remember feeling bad when my brother asked me to go to a movie and I couldn&#x27;t even afford the $9 it cost to see it.<p>Anyways, I applied for jobs across the board on Craigslist, many of which required a psych degree, but not a single one replied. I decided I would go into an area where I knew I could do the work, even though I didn&#x27;t really have much confidence, and try anyway: programming.<p>I ended up getting a job fixing bugs and enhancing autobody shop software for a tyrant who took advantage of my inexperience and paid me a salary that I really couldn&#x27;t live on (I moved back home for a while). I put up with it for about a year and a half, before acquiring all the knowledge and confidence I needed to apply for other jobs. I will say that while I may not have gone to school for programming, I was certainly happy to learn what I did -- and programmers use psychology every day in UI&#x2F;UX design.<p>I was also freelancing on the side and applied to several web developer jobs, having gained real interest in that field. It took a few tries, but two companies called me back, and luckily, one required that I work during the day, while the other required that I work at home at night. It worked out great and allowed me to pay off my student loans.<p>You can read more about my experience with that job here: <a href="http:&#x2F;&#x2F;www.confessionsoftheprofessions.com&#x2F;the-opportunity&#x2F;" rel="nofollow">http:&#x2F;&#x2F;www.confessionsoftheprofessions.com&#x2F;the-opportunity&#x2F;</a><p>From my love of psychology, I created <a href="http:&#x2F;&#x2F;www.confessionsoftheprofessions.com" rel="nofollow">http:&#x2F;&#x2F;www.confessionsoftheprofessions.com</a> to still go after my dream of helping people discover their purpose -- though I found a less formal way to do it, and although I make money from the ads, I certainly could never quit my day job.<p>There is something about programming, though, that I never thought I would love; I wanted nothing to do with computers ever again when I left programming, but I realized that it was just something in my blood. I love being creative and I love working on user interfaces, making software or design more user-friendly. I love troubleshooting and tinkering and figuring out why it works or why it doesn&#x27;t work.<p>I am always learning, always thinking of something new, always freelancing, and always working on side projects.
I ended up creating <a href="https:&#x2F;&#x2F;mypost.io" rel="nofollow">https:&#x2F;&#x2F;mypost.io</a>, which is a content creation platform that allows anyone to have a webpage up on the Internet in seconds -- no registration or email account required. This project taught me a lot about how PHP and databases work. Before MyPost, I knew nothing about databases, but by the time I was finished, I completely understood how information and databases worked. The most rewarding experience from MyPost is the fact that, even without any advertising at all, it is used by people around the world.<p>MyPost also taught me the great importance of UI and UX. I used my sister for beta testing to see how hard it was to use. If my sister had any questions or doubts about a feature, I revised it and made the website easier to use. If she got stuck on something, I changed it, and we kept doing that until she could create a post in seconds without questioning &quot;how to&quot;.<p>I am now on to another side project that deals with affordable communication services for both individuals and businesses. I have had to learn even more than I did before, as this is the very first project where I am working with subscription-based pricing and recurring charges, using Stripe. Always something going on and something new to learn in programming! I love seeing the results of my work and using things built by my own hands, but I especially love it when other people find it as useful as I do. That is what keeps me in the programming world!
Ask HN: To those who became fluent in a second language, what did you do?
Short Answer: I tried to talk to people every day and took lots of notes. It took 3 years before I was reasonably fluent and started teaching the language to other foreigners.<p>Long Answer: Here&#x27;s how I learned to speak Thai as my 2nd language.<p>I moved to Thailand in 2003. I had very little experience with any type of successful language learning. I had 2 years of high school Spanish, which means I could ask where the bathroom is and knew the difference between &quot;dog&quot; and &quot;but.&quot; I didn&#x27;t know any Thai when I got on the plane to fly there. I bought a phrasebook the day I left and tried to learn the numbers and a few basic phrases on the flight.<p>I improved a tiny bit each day. Every day people told me how great my Thai was. It&#x27;s a nice motivator for a while. I made some half-hearted attempts to learn the script, but I never really understood how the tone system worked, nor did I have any clue that there were multiple ways to say a P sound and that this was actually crucial to being able to speak well. After 1 month, I did a visa run to Laos where I used my &quot;awesome&quot; Thai to help some other travellers haggle prices, keep us from getting ripped off, and connect with regular people who couldn&#x27;t speak English.<p>The next major event was a couple of months later when I got a job and ended up moving to another city halfway between Bangkok and Chiang Mai. I left Chiang Mai thinking I was pretty fluent in Thai, but I quickly discovered I was very wrong. In the new city, I struggled to understand what anybody was saying and I was having to repeat myself all the time, wondering what was wrong with everybody. I eventually accepted the fact that it was me, rather than everyone else, that was the problem. I also had no idea how to fix it. The people I worked with tried to be helpful, but they couldn’t really explain what I was doing wrong.<p>At the end of the year, I moved back to Chiang Mai and started studying at a university there in order to get a visa and take some Japanese classes. While studying at a Thai friend’s house for final exams, I noticed a large black book about the Thai language that I had never seen or heard of before. I believe it was written in the 50s for ambassadors and other foreigners who were in Thailand and included things like how to talk to their servants as well as sample writings from both educated and uneducated people. I photocopied the book and went through it over the coming weeks. The book broke down the entire script, sound system and tone rules. It wasn’t well laid out and I could write a mile long list of things I didn’t like about it, but it was the first comprehensive explanation I had come across.<p>I spent about 6 weeks studying and drilling all the exercises and eventually remapped them so they were more useful. I have almost no attention span so I would do very short study sessions a few times a day. A session wouldn’t usually last more than 5 minutes and might be as short as 1-2 minutes. I remember the day when everything clicked. Words that I used to mix up because I thought they sounded similar were suddenly so clearly different that I couldn’t believe I had ever mixed them up in the first place.
I knew the tone system so well that when I pronounced something with the wrong tone, alarm bells would go off in my head: “That can’t be a low tone because it’s spelled with a low class consonant!” and I would instantly self-correct.<p>This all happened very fast and my fluency level just skyrocketed at this point, because once I learned how to say things correctly, it became easy to hear them when others spoke. I still wasn’t fluent, I still made mistakes, but I gradually fixed any pronunciation mistakes I had been making. Every single effort I made to learn Thai up until that point was always severely hindered by the fact that I did not have a solid grasp of the sound system.<p>Once I was comfortable with the script, if I encountered a new word, I could just jot it down in a little notepad I always carried and I could look it up later. Even if I didn’t know exactly how to spell something, I could at least write it out phonetically and then ask people what the correct spelling was. If I tried to say something and I got a weird reaction, I’d write down what I said and ask 4-5 Thai speakers (never ask just 1!) how I should have said it. That meant that next time it came up, even if I had forgotten the new sentence, I could look it up because it was in my pocket. I’d fill up one of those notebooks every month or 2 and I believe I retained about half of whatever went inside them.<p>I believe that there are a few key steps that you can follow to make significant progress in the first 1-3 months of learning a language.<p>Getting Started with a New Language:<p>Step 1) Learn the sound system<p>Actually spend a few days learning how to say the sounds properly. How much time this takes will depend on the language. It can go a lot faster if you find someone who can explain things like where exactly your tongue is supposed to be to say a particular sound. Non-native speakers who speak a language really well tend to be more helpful than native speakers for this.<p>Step 2) Learn short, high-frequency sentences and&#x2F;or dialogues<p>Make or find a list of sentences&#x2F;dialogues and learn to say them correctly. Start slow and, as you get more comfortable, practice saying them faster until you can call them up automatically. When you have 20 or 25 of the right sentences, you will then have enough ammunition to start faking a conversation.<p>Step 3) Talk to people as often as possible<p>Even if you have just 2 or 3 sentences, you can already go out there and blast them off at people and see where it leads. It’s fun and the stuff you learn putting these into practice is far more likely to stick because it happened in real life with another human. You may need to hear some things 10 or 20 times before you really get it and that’s perfectly normal. If you are paying attention and keep hearing something over and over, you’ll remember it eventually. Some stuff will slip in even when you aren’t paying attention. If you aren’t in the country, practice on your pets or your friends, but since it&#x27;s 2016 and you can fly across the world for under $1000, there&#x27;s really no excuse to not go there. If you really can&#x27;t go, then use sites like italki.com and hire private tutors. If you don&#x27;t have money, do language exchanges.<p>Avoid word lists in the beginning. Knowing how to say “Are you going?” quickly and with good rhythm and pronunciation has far more value to a beginner than “purple, aunt, cloudy, bird” or “shoes.” Once you have structure and sounds, taking on new words is much easier.
I focus on drilling sentence patterns rather than memorizing grammar rules.<p>Ideal sentences&#x2F;dialogues will look like this: the shorter and more colloquial, the better. E.g., A: What&#x27;d you do yesterday? B: I saw a movie. A: How was it? B: Not bad.<p>It’s generally ok to take some of those initial sentences from courses like Teach Yourself and Assimil if that’s all you have available, but I’d definitely put off learning anything you can’t imagine yourself saying today or tomorrow. I created a list of sentences for my students a number of years ago which you may find useful. Feel free to copy and modify. <a href="https:&#x2F;&#x2F;docs.google.com&#x2F;spreadsheets&#x2F;d&#x2F;19OQdDLq9MBoGpPy9I7rB.." rel="nofollow">https:&#x2F;&#x2F;docs.google.com&#x2F;spreadsheets&#x2F;d&#x2F;19OQdDLq9MBoGpPy9I7rB...</a>.<p>It’s worth noting that while in Thailand I took years of classes each in Chinese, Japanese and Korean. While I eventually learned to speak all 3 of them as well, I have never reached a level of fluency anywhere close to what I did with Thai. I even spent 2 solid years totally immersing myself in Japanese (using AJATT, mentioned below). I had headphones in my ears blasting Japanese whenever I wasn’t talking to someone. I watched The Matrix, Shawshank Redemption and more dubbed in Japanese hundreds of times. I&#x27;d rip the audio, shuffle the scenes and listen to them when I slept, walked or exercised. I read piles of manga and novels translated from English. It turned out that none of that was a substitute for actually talking to people every day like I did with Thai. You need to be trying to talk to as many people as possible for an extended period of time in order to achieve any real semblance of fluency.<p>3 years after arriving in Thailand I started to teach Thai to a couple of friends. It worked great, and over the course of teaching a couple hundred people, I developed and tweaked a new, more efficient approach to learning Thai. This is only important here because this is how I ended up teaching Thai, and it worked so well that it changed the way I learned as well. Fewer steps, more sentences, no memorizing word lists. And for the sake of credibility, here’s a video of me speaking Thai in 2009, nearly 6 years after I got to Thailand. I was pretty camera shy, but it gives a general idea of what I was capable of at the time. <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=mgzXuHmO_HY" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=mgzXuHmO_HY</a>
Ask HN: To those who became fluent in a second language, what did you do?
WHAT IS FLUENT<p>First, you have to define what you mean by &quot;fluent&quot;. There are several proficiency scales that may provide some useful insight:<p>- Common European Framework of Reference for Languages (<a href="http:&#x2F;&#x2F;www.coe.int&#x2F;t&#x2F;dg4&#x2F;linguistic&#x2F;cadre1_en.asp" rel="nofollow">http:&#x2F;&#x2F;www.coe.int&#x2F;t&#x2F;dg4&#x2F;linguistic&#x2F;cadre1_en.asp</a>)<p>- ACTFL proficiency guidelines (<a href="https:&#x2F;&#x2F;www.actfl.org&#x2F;publications&#x2F;guidelines-and-manuals&#x2F;actfl-proficiency-guidelines-2012" rel="nofollow">https:&#x2F;&#x2F;www.actfl.org&#x2F;publications&#x2F;guidelines-and-manuals&#x2F;ac...</a>)<p>- Interagency Language Roundtable scale (<a href="http:&#x2F;&#x2F;www.govtilr.org&#x2F;" rel="nofollow">http:&#x2F;&#x2F;www.govtilr.org&#x2F;</a>)<p>Here is a quick-and-dirty self evaluation that can give you an idea of the range of what &quot;fluent&quot; can mean (<a href="http:&#x2F;&#x2F;www.govtilr.org&#x2F;Skills&#x2F;readingassessment.pdf" rel="nofollow">http:&#x2F;&#x2F;www.govtilr.org&#x2F;Skills&#x2F;readingassessment.pdf</a>).<p>WHAT LEVEL OF PROFICIENCY DO YOU WANT<p>The next question you have to answer is what proficiency level you are aiming for.<p>Most of the resources you listed are fairly good at getting a learner to a low level of proficiency (CEFR A2, ACTFL Novice High, or ILR 1). Just try one or two and find the one you like doing (I am a fan of Duolingo, but ymmv). This level is roughly &quot;survival mode&quot; language (e.g., basic introductions, basic getting around and doing things, short and simple small talk). If your goals are higher than that, then the process is less transparent, but it mostly involves working with authentic native materials (texts, videos, audios, etc.) and learning through interaction with those materials. Note that it is almost impossible to get beyond a very low level of proficiency with books alone -- the scope of language that would need to be covered gets too large too quickly. As your proficiency level increases, language learning texts become reference sources rather than primary sources of learning.<p>The steps of fluency roughly look something like this (using the ILR scale for simplicity):<p>- Memorized words and phrases (ILR 0+).<p>- Short, simple sentences (ILR 1). Many&#x2F;most Americans I know consider this to be &quot;fluent&quot;.<p>- Basic paragraphs (ILR 2).<p>- Extended prose (ILR 3).<p>Most of the suggestions I see in this thread focus on ILR 0+ and ILR 1. There is an entire world of language and language learning beyond that. Note that I stopped at ILR 3 -- that&#x27;s the level at which a person can fully function at a professional level in most contexts. Day-to-day life is largely conducted at the ILR 1+&#x2F;2 level.<p>WHAT SKILL<p>How do you want to use the language? The four skills are reading, listening, speaking, and writing. The first two are receptive skills that develop faster than their productive skill counterparts. Note that materials that are really good for developing one skill (e.g., reading) might be much less effective or even slightly counterproductive for learning another skill (e.g., speaking).
That said, it is usually good to develop all skills at least somewhat while focusing on the skills you are most interested in (i.e., if you want to read, don&#x27;t <i>just</i> read -- learning some speaking and listening will help the development of your reading).<p>HOW MUCH TIME<p>Another question is how much time do you have to dedicate to learning the language. Some languages are more linguistically distant from your native language than others, and the more distant languages take longer to learn. Here is a scale used by FSI with languages and hours of instruction needed to get to ILR 3 in one skill (usu. in speaking):<p><a href="http:&#x2F;&#x2F;www.effectivelanguagelearning.com&#x2F;language-guide&#x2F;language-difficulty" rel="nofollow">http:&#x2F;&#x2F;www.effectivelanguagelearning.com&#x2F;language-guide&#x2F;lang...</a><p>Note that the range of time required is large -- 600 hours in 6 months for Spanish or French, but 2200 hours in ~20 months for Arabic, Chinese, Japanese, or Korean. To put that in perspective, when a talented learner of Spanish is functioning at a full professional level, an Arabic learner who started at the same time will typically be functioning at a touristy sentence level.<p>BASICS OF PROCESS<p>Maybe this is a tl;dr. I am not sure that it makes sense without the above context. Note that at any time, traveling to or living in a place where the language is spoken will help tremendously. Also note that having a native informant can be very useful -- italki is a great resource for native informants.<p>1. Assuming you want to learn a relatively commonly taught language (e.g., something like Spanish or Korean rather than something like Xhosa or Igbo), pick any learning source that you like and stick with that. You will learn the sounds and script of the language as well as memorize basic words and phrases. You will eventually be able to create short, simple sentences that may or may not sound native-like. This is about ACTFL Novice or ILR 0+ or 1 level.<p>2. Start looking at level-appropriate native texts, and use learning texts as references rather than primary sources. Lower-level texts might be things like ads, announcements, or parts&#x2F;clips from videos that cover casual conversation. Higher-level texts might be newspapers, non-fiction books, most general interest TV shows (i.e., not ones on opinionated and&#x2F;or abstract topics like politics or religion). Flashcards can be useful (esp. for specialized vocabulary in a field you are interested in), but you will want to move away from flashcards and memorization gradually. You will need to immerse yourself in the language as much as possible to approach full functionality. This does not require you to be in a place where the language is spoken, but that usually helps a lot. This will get you to the ACTFL Intermediate or ILR 2 level.<p>3. To go beyond step 2, you will largely need to start functioning like a native. Your day-to-day socializing and media consumption will be almost entirely in the target language. The reference texts you typically use will be the ones that are written for native speakers of the language you are learning (e.g., a Japanese-Japanese dictionary). This is ACTFL Advanced or ILR 3 level.
Ask HN: To those who became fluent in a second language, what did you do?
Do you remember the 10 000 hours principle, popularised in Malcolm Gladwell&#x27;s <i>Outliers</i>? Well, it comes from research by the Swedish-American scientist Anders Ericsson. I talked with him. He says that there is no precise, scientific definition of &quot;fluency&quot;, so you actually cannot construct an experiment that could determine which method works best.<p>In short: reaching moderate fluency at B2&#x2F;C1 level in any language would require a couple of hours every day for about 3 years. But there is -NO- optimal method!<p>I have put a considerable amount of effort into researching this question as a semi-professional (currently studying applied linguistics) and for my own private use.<p>I&#x27;ve reached fluency in English (and to a lesser extent in Hebrew) as a second language. I&#x27;ve also learned and sometimes use Spanish (learned at a university), German (high school, I live in Germany now), Danish (university), French (high school), Ukrainian (university), Italian, Latin (high school), Classical Greek and Aramaic.<p>People studying Chinese, Arabic or any other language full time get their BA in 3 years and are quite fluent. It often requires about 10 hours a day of work (classes, reading, drills). It&#x27;s hard. No short cuts.<p>On the other hand, however, I&#x27;d say that you need about 50 verbs and about 200 other words with almost no grammar to communicate. Where I work I speak Portuguese (a language I don&#x27;t know!), German and Spanish with a girl from Portugal who speaks only Portuguese. The notion of &quot;learning&quot; a language is a construct of our education system. Grammar is almost useless in day to day communication. You only need both sides to wish to communicate, and there has to be no superiority or inferiority in the relationship. Somehow a natural &quot;pidgin&quot; grammar emerges spontaneously - you may not know the past tense, but then you say simply &quot;yesterday&quot; + infinitive and it works perfectly well. The more I talk with Amalia the more Portuguese I get. And then I use it with two other friends from Brazil. It simply works - with no formal training, courses, textbooks. In class you are focused on correctness, not on getting your message across, and you are graded for correctness. This creates stress, confusion, doubt in your abilities.<p>My Portuguese, however, would not be good enough to get a job in Portugal. And my English, by the way, which I use with ease, would most probably not be enough to work as a journalist or in a radio station, although I read and listen to English between 5 and 10 hours every day.<p>What does the research about language learning teach us? Almost NOTHING! It only confirms common truths about what helps: immersion, having no stress, living in the country, being self-reflective about your methods, good resources, practice, reading, radio, tv, vocab drills etc.<p>I talked with prof. Anders Ericsson about why it is that 40 years of serious systematic research has not produced ANY conclusions. You might have heard about Stephen Krashen and his &quot;silent period&quot; and &quot;natural acquisition method&quot;; in short: adults learn just like children. This method is very popular among polyglot YouTubers such as Steve Kaufman[1]. The most important principle of this method is that you don&#x27;t learn grammar at all. The research on second language acquisition is NOT CONCLUSIVE!
I believe in science (the same science that builds transistors smaller than visible light waves) and apparently Krashen&#x27;s theory has not been confirmed or rejected, which means that we still have no clue what works and what doesn&#x27;t. I&#x27;ve spent tens or maybe even hundreds of hours reading about Krashen and I am only frustrated. Language research is tricky, there are dragons, don&#x27;t go there.<p>I spent over a year on a scholarship studying Hebrew. I was very methodical about it: I made beautiful statistics and graphs, precisely measuring everything for 12 months, and my conclusion is that learning a language is freakishly difficult, requires inhumane tons of hours, and that brute force works (Anki drills). I had excellent conditions, money for free, a room, teachers, no family, no concerns. I can now (slowly) read academic papers and watch movies, but I just cannot imagine anyone (not super smart) learning any language while having an (intellectually demanding) day job and kids, and reaching fluency at a graduate level.<p>I am about to start learning Arabic and I feel I will die trying (I&#x27;m 30). With just about 3-4 hours a week I expect to be able to read Judeo-Arabic in 15 years.<p>Resources (in fairly random order):<p>* Julia Herschensohn, Martha Young-Scholten (ed.), <i>Second Language Acquisition (The Cambridge Handbook)</i>, Cambridge University Press, 2013.<p>* Carol Griffiths (ed.), <i>Lessons from Good Language Learners</i>, Cambridge University Press, 2008.<p>* Christine Pearson Casanave, <i>Controversies in Second Language Writing: Dilemmas and Decisions in Research and Instruction</i>, University of Michigan, 2004.<p>* John W. Schwieter (ed.), <i>Innovative Research and Practices in Second Language Acquisition and Bilingualism</i>, John Benjamins Publishing, 2013.<p>* Anders Ericsson, Robert Pool, <i>Peak: Secrets from the New Science of Expertise</i>, 2016. (interesting but not strictly scholarly)<p>* Stephen D. Krashen, <i>Principles and Practice in Second Language Acquisition</i>, University of Southern California, 1982.<p>[1] <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;user&#x2F;lingosteve" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;user&#x2F;lingosteve</a>
Ask HN: To those who became fluent in a second language, what did you do?
OVERVIEW: - The &quot;One(ish) Metric That Matters&quot; - Daily routines - How long it takes<p>- OMTM:<p>The ONE statistic that correlates the most with moving the needle from complete beginner to high level proficiency with the language is &#x27;Hours of Conversation w&#x2F; a native speaker&#x27;. Everything you do, and every tool you use, should help increase that particular stat.<p>You learn vocab at the beginning to help yourself extend a conversation from a 1 second &#x27;Hello&#x27; to a 10 second &#x27;Hello, what do you do for fun?&#x27; + response. You work on listening so that when the person responds you can extend the conversation a few seconds further because you understood what they said.<p>The One(ISH) part of this metric is that you will notice that as your overall &#x27;Total Hours of Conversation&#x27; increases, the average &#x27;length&#x27; of each of your conversations does as well. The only way you&#x27;ll be able to realistically increase your hours of conversation is to move from 10 second exchanges to longer conversations.<p>At the beginning, all efforts can be judged on their merit by how much they add to your time talking. Because of this, I strongly suggest picking ONE topic and ONE specific type of person to speak with, because learning to speak at length and in depth about soccer or programming or food with a friend adds to your &#x27;Hours of Conversation&#x27; metric much faster than dabbling in all three.<p>This is why the Mormon church is able to consistently churn out fluent speakers of second languages in only a few months: they have one kind of conversation (religious) with one type of person (prospective convert) over and over and over. Their &#x27;hours of conversation&#x27; metric is through the roof.<p>As a side note, this is what people actually mean when they say &#x27;immersion&#x27;. They mean &#x27;hours of conversation&#x27;. It&#x27;s why you can learn a second language fluently just as well from living in the country as from staying in your home country. Living in a different country just makes it easier to find people to talk with.<p>- Daily Routines: Every day, attempt to speak to a native speaker or speakers for a combined total of 1 hour. Every day, write, memorize, and include in conversation...<p>* 5 new vocab words * 3 new phrases * 1 new story<p>It helps to keep these related to each other. Here would be a good set of those if you were a beginning English speaker:<p>5 Words: house, clean, messy, friends, party 3 Phrases: I need to clean the house. I am going to the party. I want to invite my friends. 1 Story: Last year, my friends said they wanted to have a party. I said that we could have the party at my house. So I cleaned the house for the whole day to get ready. Lots of people came, and we had fun. At the end of the party, my house was really messy.<p>Stories aren&#x27;t typically very long, but as you get better, you can add more detail.<p>- How long does it take:<p>Every day you need to do the 5-3-1 practice, and aim to have 1 hour of talk time with a native speaker.<p>At first, your talk time will be excruciating: How do you fill an hour of conversation with only a handful of vocab words? The short answer is &quot;you can&#x27;t&quot;, but early on you can do something like learn the phrase &#x27;what do you like to do?&#x27;, which, combined with the day&#x27;s vocab, can take you to &#x27;what does your brother like to do?&#x27;, &#x27;what does your mom like to do&#x27;, &#x27;what does your best friend like to do&#x27;.
You can repeat those questions to several different people to try to get your &#x27;Conversation Hours&#x27; as high as is reasonable. Don&#x27;t be bummed out if at the beginning you are struggling to put in more than a few minutes. You&#x27;ll get it. Try to make the next day&#x27;s conversations a little bit longer.<p>But, if you make it to the 90 day mark, and you have been diligent about your study efforts, it means that you have AT LEAST 90 stories, 270 phrases and at least 450 vocab words, and you will have dozens of hours of practice speaking.<p>Once you&#x27;ve arrived at that point, you will experience something magical: On day 91, go speak with a new native speaker you haven&#x27;t spoken with yet, and they will (almost without fail) ask you &#x27;how long have you been learning &lt;language&gt;?&#x27; and when you respond &#x27;3 months&#x27;, you will get your very first &#x27;HOLY ****, that&#x27;s incredible! You sound like you&#x27;ve been speaking for at least a year! Maybe 2!&#x27; and you will fill with pride and think to yourself &#x27;Yeah! I am doing really wel... HOLY **** I understood when that guy complimented me and I didn&#x27;t even realize it wasn&#x27;t in English!&#x27;<p>That burst of pride and excitement will be enough to carry you from working proficiency to fluency. The hardest part is convincing yourself that you are a &#x27;language learner&#x27;, and once you experience the above and realize that you are, the rest of the journey is easy.<p>Best of luck!<p>Source: My brothers and I are all former Mormon missionaries who learned second languages and were put in charge of helping other new missionaries learn the language. My brothers learned while they were in Uruguay and Spain; I learned in suburbia, USA.
WTF is a container?
Hi there, good day fellows<p>I was expecting this kind of thread long ago; thanks guys for sharing your concerns, I am learning a lot from them!<p>IMO we can&#x27;t compare containers vs VMs like many are doing now - and I did too when I first heard about Docker and containers.<p>I hold almost all VMware certs (VCP&#x2F;VSP&#x2F;VTSP for 3.1&#x2F;4.x&#x2F;5.x and VCDX), and I signed off on the 3 major VMware P2V migrations in BR&#x2F;LA (287 P2Vs in 2005, 1600 in 2008 and 2500 in 2011). I was REALLY into VMware from 2000 to 2010, so I feel confident using it and recommending it for many environments. I even manage some of them still today.<p>When we clone or migrate a physical machine to virtual, or whenever we deploy a VM from scratch (or even using templates, copying VMDKs, etc) into production, we aim to build the environment so that it lasts &quot;forever&quot;. We want this to be flawless, because even with the deployment resources the major virtualization players give us (Hyper-V, VMware, Xen, KVM, VirtualBox+Vagrant, etc), nobody wants to troubleshoot a production environment; we cheer to see the VMs always up and running. I remember doing P2Vs in the projects mentioned above during the night and needing to fall back to the physical servers because the cloned VM didn&#x27;t behave accordingly. Please VMware, clone it ok, otherwise the troubleshooting of legacy shit will be a pain.<p>On the other side, containers are made to be replaced. They are impermanent. You can tune your images as your app&#x2F;env needs. You can have an image for each service you have. You can have many images running many containers, some of them providing the services you run. You are able to describe an image in a text file called a Dockerfile, and generate an image from this file.<p>So imagine we got this infrastructure to &quot;Dockerize&quot;: a website with a DB. Does your webserver run Apache? Then you can write a Dockerfile that will deploy an Apache instance for you. It could deploy FROM an ubuntu:16.10 or 10.04, depending on what is better for your app. OR, we could just pick up Apache&#x27;s own image, as in FROM apache. You can save this image as yourid&#x2F;apache. And you can do the same regardless of what DB you are using: just install it (e.g. MySQL using apt-get in a FROM ubuntu&#x2F;debian based system), or use the mysql images directly. You are able to publish the website by cloning your site&#x27;s dev repo directly in the Dockerfile itself, or you could have the website in some dir on your host and use ADD or COPY to make it available in the right container dir (eg &#x2F;var&#x2F;www&#x2F;). You could even use volumes to mount some host dir (or a blob, or a storage account, or a partition on your storage, something available on the host itself). This is especially interesting for DBs in my opinion. Once you have your Dockerfile ok, you can name the image yourid&#x2F;db.<p>And if you have a main website and a blog website, you could use the same apache Dockerfile changing only the git clone line, and save them as yourid&#x2F;apache:website and yourid&#x2F;apache:blog for example.<p>And when redeploying the container, you will have the same volume data available in the same container dir, even if you redeployed it from ubuntu:15.10 to ubuntu:16.10. You can use the latest improvements from the freshest image (or patch some vuln), and redeploy all your containers that use this same image at once.<p>The same goes for other services: you can test Jenkins without knowing what OS the Jenkins image is made of.
You pull the image and run the container and voila.<p>NOW, my Docker instances are like this: I use Docker Machine, and locally I have the default and local envs. I also have an Azure instance in Docker Machine (that runs on Azure), and another instance configured in Docker Cloud using Azure as the cloud provider (I use Azure due to BizSpark credits). So, 4 of them. All those instances are VMs themselves. Ubuntu VMs, to be sure.<p>You just replace the container (probably published in a cluster if you care enough about your prod env). Not the same as with VMs at all.<p>I see Docker as a hypervisor for containers the same way VMware and Hyper-V are for VMs.<p>So I understand my Docker host VMs have the same HA, failover, load balancing, resource allocation, and all the other capabilities VMs have. I use Docker on those VMs to make deploys easy, build images, tune images - really guys, I was the VMware guy for so long, and I went just crazy over the capabilities Docker gives us.<p>Docker has many weak points indeed (NAT networking, privileged containers must be run sometimes, security concerns, etc), but again, I don&#x27;t see Docker erasing VMs from my life so that from now on I can deploy everything in containers and it will run happily forever. We still need HA, failover, load balancing, resource allocation and so on. Docker needs to be used together with TOOLS that allow it to run smoothly, and allow us to maintain our environments more easily.<p>One of those tools is container clusters. I work mostly with Google Kubernetes, but there are others such as Docker Swarm, Apache Mesos, DC&#x2F;OS... Azure has its Azure Container Service (ACS), IBM has its Bluemix containers, etc. Using a cluster and a deploy methodology, you are able to deploy your containers in different namespaces such as DEV &#x2F; STAGING &#x2F; PROD. You can use a BUILD environment to read your Dockerfile, build the image and deploy containers to the namespace you need. You can configure this build to trigger with a commit in the git repo, for example.<p>So let&#x27;s say we have a developer, and he needs our yourid&#x2F;apache:website to be deployed with the new website version. If the website is already updated in your git repo, you just clone it. The Dockerfile would look like this:<p><pre><code>FROM apache
MAINTAINER Your Name &lt;[email protected]&gt;
WORKDIR &#x2F;var&#x2F;www&#x2F;
RUN git clone https:&#x2F;&#x2F;github.com&#x2F;yourid&#x2F;website&#x2F;website.git .
EXPOSE 80
CMD [&quot;&#x2F;run.sh&quot;]
</code></pre> This would be named website.Dockerfile. If you change the project git repo to any of your other sites that run on apache, you can SAVE AS other.site.Dockerfile, and always deploy this service from this specific repo.<p>You can customize your Dockerfile of course and add support for specific stacks, like installing PHP, Ruby, Python, etc. You could even use Configuration Managers (CMs) such as Ansible, Salt, Chef, Puppet, Juju etc to apply those changes.<p>Let&#x27;s say we will start the build now. We are developing this image together. So I just changed my git url in the Dockerfile. When we commit, the autobuild triggers this build in our build system (in my case, Docker Cloud or Jenkins). This is what Continuous Integration (CI) and Continuous Deployment (CD) are about.<p>So when the build is triggered, it gets the Dockerfile from the repo, builds the image, and deploys the container in the namespace you wanted (in our case, DEV). This service could be published as website.dev.mydomain.com for example. The same goes for the staging namespace.
And to www.mydomain.com when ready for production, in the PROD Kubernetes namespace for example. Kubernetes is a distributed thing, so you could have minions (nodes) split across different datacenters or geo locations. This pretty much reminds me of VMware VMs running on VMFS storage made available through a set of ESXi servers, all with access to the same LUNs&#x2F;partitions&#x2F;volumes.<p>This is just my point of view, so please feel free to comment and ask me anything.<p>Please, just don&#x27;t blame Docker because you aren&#x27;t aware of the mainstream techs available nowadays. If you are comparing Docker to VMs, or SSHing inside the containers, and often mad because your data vanished while redeploying your Docker containers, believe me you are doing it wrong.<p>Being a pioneer is often the same: in the 90s we had to explain why Linux was good for the enterprise, in the 2000s we had to prove VMware was really going to cluster your legacy systems, and now we have to explain what&#x27;s possible to do with Docker. The tech is new (I know there were previously Solaris Zones, Google Borg, etc), but I see Docker maturing its features by relying on other tools (and even copying features from k8s to Swarm, e.g.). Docker is just one skill needed to run your stuff.<p>Cheers!<p>M Matos <a href="https:&#x2F;&#x2F;keybase.io&#x2F;mmatos" rel="nofollow">https:&#x2F;&#x2F;keybase.io&#x2F;mmatos</a>
Shame on Y Combinator
Response to Sam Altman&#x27;s post:<p><i>A Trump presidency would be a disaster for the American economy. He has no real plan to restore economic growth.</i><p>The truth is that no one knows for sure what will be best for the American economy for the next four years. There are too many variables and too much unpredictability to know for certain what a Trump president would look like.<p><i>His racist, isolationist policies would divide our country, and American innovation would suffer.</i><p>This is a fallacy. The country is already divided over immigration, and has been for a long time. The prevalence of this kind of fallacious rhetoric from elites like Sam Altman is precisely why the nationalist faction chose Trump to lead them. In order to counter this kind of deceptive propaganda, they needed someone willing to use equally powerful rhetoric in their favor.<p>Also, the claim that his policies are racist is simply not true. They are not racist.<p><i>But the man himself is even more dangerous than his policies. He&#x27;s erratic, abusive, and prone to fits of rage.</i><p>Sam cites no evidence and makes no argument to back up these exceptional claims. I have been watching Trump for an entire year, and I have not observed any erratic or &quot;abusive&quot; behavior. At least, nothing exceptional that wasn&#x27;t already directed at him. Sure, he savaged his GOP rivals with name-calling and theatrics. But they had already called him a clown, a sideshow, a circus. So it&#x27;s OK to call Trump names but not OK to for him to respond in kind?<p>Trump supporters see this hypocrisy and understand that Trump is doing what he needs to do to win the election.<p><i>He represents a real threat to the safety of women, minorities, and immigrants, and I believe this reason alone more than disqualifies him to be president. My godson’s father, who is Mexican by birth and fears being deported or worse, is who convinced me to spend a significant amount of time working on this election at the beginning of this year, when Trump still seemed like an unlikely possibility.</i><p>More blatantly fallacious reasoning. Because a Mexican national fears deportation (why?), suddenly Trump is a threat to all women minorities and immigrants?<p>Ridiculous fallacy. This is why Trump must use the rhetoric he does, because his opponents are so full of shit that they don&#x27;t even realize it.<p><i>Trump shows little respect for the Constitution, the Republic, or for human decency, and I fear for national security if he becomes our president.</i><p>I&#x27;ll grant that Trump shows little explicit respect for the Constitution. But Hillary doesn&#x27;t either. No one&#x27;s respecting the constitution because no one (Sam Altman) has been demanding it. There&#x27;s nothing in the Constitution that guarantees immigration to anyone who wants it, which so far is the only issue he has identified that has not been a grossly biased representation.<p>Speaking of which, &quot;human decency.&quot; Again, another extreme claim with absolutely NO argument or evidence to back it up. Altman boldy claims that Trump shows little respect for human decency, and yet anyone who has actually watched Trump interact with his supporters all year long shows that he is overflwing with of decency. People love him, he makes them feel good just to be around him. 
Here Altman is appealing to popular prejudice about Trump, which has been reinforced repeatedly by others like him in his echo chamber.<p>It&#x27;s one thing to not waste time justifying claims for which the evidence is abundant and easily found. But there&#x27;s no such evidence for these claims. There&#x27;s a lot of other people saying similarly bad things that also have no evidence. So there&#x27;s a clear reason why this perception exists. But I see no real argument.<p>As for fearing for national security? Again just another emotional appeal with no justification. What about Hillary actively antagonizing Russia? Speaking recklessly of no-fly zones over Syria, which our own Generals claim would mean going to war with Russia. This fear just seems ignorant.<p><i>Though I don’t ascribe all positions of a politician to his or her supporters, I do not understand how one continues to support someone who brags about sexual assault, calls for a total and complete shutdown of Muslims entering the US, or any number or other disqualifying statements. I will continue to try to change both of their minds.</i><p>First thing: if Sam Altman wants to understand, then he needs to learn how to listen. Trump never bragged about sexual assault, and to claim he did is a lie. Everyone repeating this is also lying. If you actually want to understand, the first thing you need to do is admit your interpretation of this is wrong. If you can&#x27;t do that, you&#x27;ll never make any progress towards understanding.<p>As for a complete and total shutdown of Muslims entering the US? Immigration is not a right. There&#x27;s no amendment that says anyone anywhere has the right to come to this country. We have the right (and some would say duty) to restrict entry to anyone for any reason. The fact is that Muslims tend to hold very different values than Americans (Sam Altman might want to ask what Peter thinks about Muslim&#x27;s beliefs on homosexuality), and there there are Muslim organizations who have made an explicit goal of destroying Western European civilization through a combination of settlement, violence, and propaganda. You can&#x27;t say that about any other major religion in the world.<p>And so of all the &quot;basket of deplorables&quot;, that is the one I will admit to. I am not racist, at least, no more than anyone can be in the US given the amount of race-baiting done by corporate media. I am not sexist. I&#x27;m not homophobic or anti-semitic.<p>But I am an Islamaphobe. Islam scares me. The religion itself scares me as does the activities of its leaders-- the so-called moderates as well as the extremists. It scares me more than any other major religion in the world. No other religion of significant size has the same combination of intolerance, subversiveness, violence, conquest, and hostility to those outside the faith. I look at the character of Islamic civilization, and I know without a doubt that I do would not want to live under those values. While I totally understand that it&#x27;s possible to interpret the Quran and Hadith in less violent and aggressive ways, and that most Muslims are just ordinary people who want to live happy and healthy lives, that&#x27;s not the trend in the world today. There&#x27;s not a single Islamic country that I would want to live in.<p>You won&#x27;t see me harassing anyone or trying to deny American Muslims their 1st amendment rights. It&#x27;s not hatred I feel. 
I&#x27;ll never condone &quot;hate speech&quot; or &quot;hate crimes.&quot; I do not fear ordinary Muslims, individually. I am not even opposed to having a solid and stable minority of American Muslims in perpetuity. But I most emphatically do not think we should be inviting substantial numbers of them into the country. We are not prepared to assimilate them and once there are sufficient numbers of them in the country they will start agitating to impose their political will on everyone else, which is likely to include political violence.
Ask HN: Why is everything in JavaScript changing so fast?
Javascript and the browser environment it is still pretty much glued to (Node notwithstanding) has several major architectural flaws and some unique challenges that are particularly acute for Javascript, and a lot of energy has been expended trying to figure out how to use the not-very-strong tools in a way that can support high-quality applications without too much developer effort.<p>Also, some of these things are getting addressed over time, so I&#x27;m kinda going to talk about the state of JS over the past decade rather than the state it is in right now. Some of these things are being mitigated (very few things are really being &quot;fixed&quot;), but it&#x27;s still early. These issues have historically included, but are not limited to:<p>1. Poor modularity brought on by being not just a &quot;scripting language&quot; like, say, Python, but a language actually designed to write small event handlers that fit into a single HTML tag attribute. &quot;x = 56&quot; defaults to putting x in the global namespace. The language did not provide modules or very much in the way of separation by default. You&#x27;ve pretty much always been able to namespace things, but you had to really work at it; the language did not help. (I&#x27;ll also toss in the dynamic typing here, just because I don&#x27;t want to give it a full slot here, but it does inhibit making big projects easily.)<p>2. Very poor control-flow management ability due to the choppiness of event-based programming. There has historically not been a way to maintain a call stack across events, which strips away all the tools of structured programming, a set of tools so fundamental to how we operate that we often don&#x27;t see them as a fish doesn&#x27;t see water. Promises and generators allow us to try to mitigate this, but at the cost of spending design budget; promises for instance introduce a second entirely new set of control flow mechanisms that mirror the base language&#x27;s looping and flow control constructs, particularly annoying because you must control both error and normal flow on both of these levels.<p>3. The browser&#x27;s interface to JS is the &quot;Document Object Model&quot;, which due to historical reasons is a Java-designed API bolted on to the side of JS. A native JS model could have been much more powerful and easier-to-use, requiring us to burn less design budget on simply interfacing to the browser in a reasonable manner. A lot of the design churn is attempts to answer the question &quot;How can we make manipulating the DOM more JS-native?&quot; There are also several performance issues introduced by the fact that the DOM model, combined with the rendering model, is <i>extremely</i> rich; things like manipulating a node on a page vs. detaching the node, manipulating it, and reattaching it have historically had end-user-significant differences in performance, as every DOM change triggers an incredibly complicated set of updates to a widget toolkit that was designed for flexibility rather than performance.<p>4. Browers themselves further introduce many complications. Then you have all the <i>security</i> issues that arise from being fundamentally client-server with dubiously-trustable servers. 
You have all the details like what cookies flow between what domains and where and when, that you need a different domain for your static content both for security and performance reasons (prior to HTTP2 particularly), and any number of crazy APIs that also <i>vary</i> across browsers, requiring the developer to use shims for things as simple as XMLHTTPRequests because you just never quite know what you actually have.<p>5. I could have list &quot;client-server&quot; in #4 there, but it&#x27;s also worthy of its own callout. Many frameworks have different solutions for client-server interactions, ranging from ignoring the problem and letting you solve it up through Meteor-like attempts to completely blur the lines between the two, and everything in-between. Client-server interaction has been further inhibited by the fact that historically, there have not been any reliable and high-performing mechanisms for streaming things between the client and server, creating a design limit around needing to be request-based, further creating a wide variety of ways of hacking around this problem, each with their own quirks.<p>6. As an open standard, nobody is really empowered to fix these problems in a coordinated way. As a result we&#x27;re sitting on top of 20ish years of standards, some well-done, some poorly-thought out, some hackily fixed after security issues, many poorly-understood by developers, and all in the browser.<p>7. Finally, one must not ignore the fact that web pages continue to become intrinsically more and more diverse. The best framework for a document-centric app is one thing, the best framework for a CRUD app another, and neither of those will help you much with an intrinsically real-time streaming app like a chat client.<p>And there&#x27;s probably a couple more dimensions I could come up with if I thought more. What you see is that there&#x27;s a lot of possibilities for mitigating all these issues (&quot;solving&quot; is often not on the table, these are mostly fundamental issues arising from layers below the JS), and the framework churn is in many ways nothing more than people combining all of the various combinations of possible answers to these problems, looking for synergies, solving different problems (per #7), and basically frantically rifling through 60+ years of computer science theory looking for the perfect solution to problems in an environment with so many fundamental strutural issues that no perfect solution is possible. Which is also why you see such vigorous advocacy sometimes; someone thinks <i>this is it, this solves all my problems once and for all</i> because it worked for a couple of weeks, and only once the community has chewed on it for a while does it become clear that there&#x27;s a lot of people with different needs for whom that doesn&#x27;t work so much, and it also didn&#x27;t actually solve all the problems once and for all after all.<p>BTW, none of this is criticism of the JS community, merely explanation. Given the hand we&#x27;ve been dealt in the web browser, lots of people trying lots of things is the best we can hope for. The fatigue is just an unfortunate, but unavoidable, side effect. The best solution for the fatigue is to concentrate more on the fundamentals being explored than the details of a particular framework. 
For instance, &quot;reactive&quot; programming is its own paradigm, with its own lore and learning; learning how to program that will also let you write better spreadsheets and be better at creating database triggers, for instance. Concentrate on the fundamentals. The fundamentals are not churning that fast.
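To make the &#x27;concentrate on the fundamentals&#x27; advice concrete, here is a minimal, framework-free TypeScript sketch of the reactive idea referred to above: values that recompute when their inputs change, the same shape you meet in spreadsheets and database triggers. Every name in it (Cell, derive) is invented purely for illustration; this is not any particular library&#x27;s API.<p><pre><code>// A tiny, dependency-free sketch of the reactive idea: a writable cell plus
// a derived cell that recomputes whenever one of its inputs changes.
type Listener = () =&gt; void;

class Cell {
  private listeners: Listener[] = [];
  constructor(private value: number) {}

  get(): number {
    return this.value;
  }

  set(next: number): void {
    this.value = next;
    // Tell every dependent cell / subscriber that we changed.
    for (const fn of this.listeners) fn();
  }

  subscribe(fn: Listener): void {
    this.listeners.push(fn);
  }
}

// derive() builds a cell whose value is recomputed from its inputs whenever
// any of them changes -- the spreadsheet-formula pattern.
function derive(inputs: Cell[], compute: () =&gt; number): Cell {
  const out = new Cell(compute());
  for (const input of inputs) {
    input.subscribe(() =&gt; out.set(compute()));
  }
  return out;
}

// Usage: total behaves like the spreadsheet formula &quot;=A1*B1&quot;.
const price = new Cell(10);
const quantity = new Cell(3);
const total = derive([price, quantity], () =&gt; price.get() * quantity.get());
total.subscribe(() =&gt; console.log(&quot;total is now&quot;, total.get()));
quantity.set(5); // logs: total is now 50
</code></pre> RxJS, MobX and friends wrap this same idea in far more machinery (operators, scheduling, disposal), but the underlying fundamental churns much more slowly than the libraries built on top of it.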
Ask HN: Why is everything in JavaScript changing so fast?
I think one of the things that&#x27;s led to the mess with javascript is that it&#x27;s kind of crazy to have settled on using a language to write such large applications that:<p>- Has no standard library. - Is used to automate host platforms that have either no standard framework or very little (e.g. Node on v8 doesn&#x27;t have <i>nothing</i> but it&#x27;s quite small). - Has no module&#x2F;namespace system.<p>The effect of this is that immediately, anyone building anything non-trivial has to make a decision about how to fill in things that would be covered in a standard library. I&#x27;m not even talking about fancy stuff, just the stuff supplied by underscore, lodash, etc. The choices are basically:<p>1) Write all the code yourself. This is an extremely low ROI option. 2) Adopt a series of libraries and hope you can get them to work together. 3) Adopt a framework that, by its nature, will be designed to use some sort of new pattern, and this pattern will be expressed on top of whatever libraries the framework uses, so just draft behind that.<p>There aren&#x27;t really many other mainstream systems where a language is used to specialize applications on a platform where this situation obtains. Windows, OS X, iOS, Android, even Java, come with enormous, enormous amounts of library code. Generally, this library code also pushes one fairly firmly towards certain design patterns.<p>Imagine if we all decided to use...hmmm...Scheme, I guess, to develop Windows applications, and you took away everything but the lowest-level Windows APIs. I&#x27;m using Scheme because the Scheme standard outlines a remarkably small language that has nothing approaching a standard library.<p>We&#x27;d probably be in a similar situation. How do you do this? How much does adopting this or that library force you to do this or that? It&#x27;s just not very common to have a situation where to do anything non-trivial, you need to either: - adopt an enormous amount of third party code (because the same situation obtains at every level, each third-party solution will also not be built on a standard library ecosystem) - or write an enormous amount of boilerplate code.<p>And further, the issue is exacerbated by the fact that js doesn&#x27;t really have a standard module system, so even the approaches taken to fill these gaps are frequently non-compatible, and just choosing which system to use to manage modules often means replacing huge chunks of your application.<p>There are other reasons why JS the language is moving all of a sudden: JS has had a lot of rough edges, and all of a sudden a confluence of interest, capability, and technology has made it possible to sand down some of those edges.<p>But I think even if the ES process slows down, and we all agree that language-wise the features in the language are pretty good, or something happens that freezes them, we&#x27;ll keep seeing lots of churn because of the interaction between every project being built on towers of third party code with far-reaching effects.<p>When Java, for instance, was released, the standard library it shipped with included: - a set of standard container classes: hashes, vectors, arrays, etc. - a rich set of real types (e.g. 
numbers that weren&#x27;t insane, Objects for compatibility with the object system, primitive types in the language for performance) - a whole tree of calendar&#x2F;date manipulation code - a standard way of connecting to SQL databases and all the code for that - a standard GUI-drawing and event-handling system - a big, rich set of APIs for handling IO of various kinds with different performance&#x2F;complexity tradeoffs. - a concurrency api, threadpools, etc.<p>That&#x27;s just a sample. And all of the bits worked together, and sort of implied patterns in how things should be built and used.<p>You could certainly replace anything, should you choose to, in your project (e.g. the calendar stuff really sucked), but the point is, there was a default, and there was a standard pattern for where and how to bring in new library code, and that new library code would in turn be built on as little third-party code as possible.<p>Look at an average iOS project&#x27;s Podfile or Carthage file, Android project&#x27;s Gradle file, and then look at a Node or browser js project&#x27;s package.json. Then actually look in Pods and node_modules. The average amount of third party libs in a JS project is many orders of magnitude higher than in the others.<p>There are like, what, five or so competing implementations of Promises in javascript land? And they aren&#x27;t mutually compatible always, so this means if you want to use non-ES6 babel-compiled Promises in your code, you have to: - choose a library - hope that other libraries and frameworks you use use the same style - add shims or something where they don&#x27;t, or else switch out the library and framework.<p>This is just like...no one writing an Android app is like &quot;which Runnables library are you using?&quot;. No one writing an iOS app is like &quot;soo...the new GCD spec looks interesting, which GCD lib are you using for your project? Oh yeah how spec-complete is it? Does it work with AFNetworking?&quot;<p>&quot;Lodash makes JavaScript easier by taking the hassle out of working with arrays, numbers, objects, strings, etc. Lodash’s modular methods are great for:<p>Iterating arrays, objects, &amp; strings&quot;<p>Iterating arrays, objects, &amp; strings is something that most other languages have decent std lib support for, and so even your third-party libs will use the host language&#x27;s affordances. In Javascript you have to ask these questions all the time.
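To ground the point about every project re-filling the missing standard library, here is a small TypeScript sketch of the kind of glue code a team either writes itself or pulls in underscore&#x2F;lodash for. The helper names below (pick, mapValues) are chosen for illustration and are not presented as any specific library&#x27;s implementation; modern runtimes cover part of this with built-ins such as Object.entries.<p><pre><code>// Keep only the listed keys of an object.
function pick(obj: Record&lt;string, unknown&gt;, keys: string[]): Record&lt;string, unknown&gt; {
  const out: Record&lt;string, unknown&gt; = {};
  for (const key of keys) {
    if (key in obj) {
      out[key] = obj[key];
    }
  }
  return out;
}

// Apply a function to every value of an object, keeping the keys.
function mapValues(
  obj: Record&lt;string, unknown&gt;,
  fn: (value: unknown, key: string) =&gt; unknown
): Record&lt;string, unknown&gt; {
  const out: Record&lt;string, unknown&gt; = {};
  for (const [key, value] of Object.entries(obj)) {
    out[key] = fn(value, key);
  }
  return out;
}

// Usage
const user = { id: 1, name: &quot;Ada&quot;, admin: false };
console.log(pick(user, [&quot;id&quot;, &quot;name&quot;]));               // { id: 1, name: &#x27;Ada&#x27; }
console.log(mapValues(user, (v) =&gt; String(v).length)); // { id: 1, name: 3, admin: 5 }
</code></pre> In Java or Swift this kind of helper (and far more) ships with the platform; in browser JavaScript it historically did not, which is a large part of why underscore, lodash and their many cousins exist, and why every project has to pick one.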
How to Accept Over-Engineering for What It Really Is
Over-engineering is a tough thing to discuss and there&#x27;s never going to be a suitable answer anyone agrees upon. It is also a huge topic with a scope far beyond this textbox or a medium article.<p>Anyway, among the big issues here is that so many things are contextual - the problem domain, stakeholders, programmer abilities, business climate, etc. It&#x27;s been my experience that you cannot apply judgement to a solution without understanding as many of the variables involved as possible. Shifting these variables even slightly can change everything.<p>For example, say I build a feature A at company Y and then I change jobs next week and move to company Z, which is much smaller. Company Z wants the same feature I just built at company Y, but of course I can&#x27;t steal it outright. Building the same thing at company Z requires different engineering decisions, given the different climate and probably different development team, programming language, and so on hypothetically. Even if I could just copy the solution, at company Y it might be perfect, while at company Z it might be considered over-engineered. It&#x27;s a combination of both objective and subjective judgements.<p>As for the concrete implementations of features, I often am driven crazy by both over and under engineering. I worked with several people in positions of power above or horizontal to me who would routinely shoot down many well-engineered features and implementations by other people because they were, &quot;Too complicated.&quot; I began to realize this was often shorthand in most cases for, &quot;I know nothing about this, my ego or political stance is threatened, I lack talent, and&#x2F;or I am simply a moron.&quot; Of course I also hit plenty of actually over-engineered things that were indeed exactly this, but in many of these cases it was obvious. Where it is less clear is anything of decent scope, challenge, subjectivity, or other hard to quantify measure. I think people can look at something like FizzBuzz and see that the parody versions using 10 OO design patterns with abstract factories are obviously over-engineered, while it gets much harder in the context of a big project.<p>And this leads to another problem. Complexity is not over-engineering, nor vice-versa always. Nor is scope, or code size, or any of these related concepts that come to mind. A very small amount of code can be incredibly complex, a large amount of code can be simple, and so on. Either can be over or under engineered. Just look at math - some of the most useful tools are very difficult to understand, while others are incredibly simple with tons of layers, nuances, and so on. Programming is not that much different in this regard. We&#x27;ve all seen things done in a tiny amount of code we would have never thought of and maybe can&#x27;t even understand. Likewise, we&#x27;ve encountered huge amounts of code that someone complains about but that is written clearly, thoughtfully, and cleanly. There&#x27;s a good related talk on some of these ideas by Rich Hickey if I recall.<p>Another issue is that it is hard to quantify both what justifies substantial engineering effort and when it is justified. As I mentioned, context plays a big part, as do intuition and experience (in relation to ability, not just pure years). These are all nearly impossible to quantify, and yet they must play a large role in large engineering decision making. 
A lot of people see things they don&#x27;t understand or are new to them and just throw out the over-engineered card. Likewise, a lot of people love pulling out their favorite tool (ex: SQL, Framework of choice) and try to use it as a golden hammer. Anything else is &quot;over-engineered.&quot; These things can lead to over-engineering or at the very least, poor engineering.<p>Over-engineering is also something that is relative. It is really hard sometimes to work with someone else and to get them to understand especially those hard to quantify things such as intuition. Programming can also often highly be about ego and ownership. People will not hesitate to point at something not written by them as over-engineered because it was not done how they would do it.<p>Game programming for me again comes to mind as an area where this is a constant pain. The inexperienced game programmer will both under and over engineer things like you&#x27;ve never seen. First, they&#x27;ll try to use their favorite language to write a game. They&#x27;ll then do things when they learn a technique like OO programming and try to apply it everywhere, to everything. &quot;A monster? Obviously I should make a monster base class, then subclass this for an Orc, then subclass it again for an Elite Orc.&quot; Everything must be designed around the tools they know instead of the problem. The minute this person does enough in their career to get any real authority or a real job, they take this unchecked mentality and try to impose it on the people around them. I see the same thing in web application and desktop programming. Someone learns lambdas, functors, currying, design patterns, IOC, monads, etc. and they must apply them to any situation that triggers this recognition. The &quot;engineering&quot; then comes from the solution and tools, perhaps the XY problem in many cases. Usually these people also fail to consider the aforementioned contextual concerns.<p>I know in game programming this happened often where some programmer would come in and replace some lengthy looking code that made heavy use of arrays and manual for loops. They&#x27;d then go in and replace it all with vectors, iterators, and all kinds of heavier stuff. They never considered that the code was engineered that way for performance or for portability or backward compatibility or some other reason. Moreover, they didn&#x27;t actually know that the seemingly &quot;simpler&quot; solution was actually making things as a whole more difficult by pushing out these concerns to be fixed or handled elsewhere (ex: making up performance somewhere else).<p>On a personal note, I am often annoyed by the over-engineered finger pointing. There are so many things like XML messes and 200 steps to setup something for a simple purpose that are easy enough to agree on as over-engineered. Most of these come down to asking yourself, &quot;What problem am I trying to solve&quot; and perhaps repeating those steps aloud. If you feel ridiculous saying them, especially to another person, something might be wrong. But if it turns out that the problem you are trying to solve really does require something that is not a sound-bite, the problem explanation will also often have to be long and cannot be skirted around without ignoring the problem.<p>Too often I see things in this industry thrown out there to &quot;fail hard,&quot; to be an MVP, to test the market, to disrupt, to fill a gap. 
That&#x27;s all fine, but make sure you are solving the problem and not creating new ones - security holes, performance problems, misleading people, defrauding investors, or worse - because you thought engineering a proper solution was too hard or would take too long. Find the sweet spot and make informed decisions. I recommend another talk by Rich Hickey (sorry, two by the same person, they are just memorable talks) about Hammock-driven development and thinking before acting. If more people did that, it could swing either way and maybe we would have less short-term &quot;stuff&quot; but better long-term solutions, fewer over-engineered messes, and things that actually are worth building on top of as bases. As it stands now, this industry is littered with poorly engineered solutions we are stuck with because of both over and under engineering.
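To ground the &#x27;Monster base class, Orc subclass, Elite Orc subclass&#x27; example from earlier in this comment, here is a deliberately tiny TypeScript sketch (all names invented) of the two shapes being contrasted: a speculative inheritance tree versus plain data plus a function.<p><pre><code>// The speculative hierarchy: behaviour spread across subclasses.
abstract class Monster {
  constructor(public hp: number) {}
  abstract attackPower(): number;
}
class Orc extends Monster {
  attackPower(): number { return 5; }
}
class EliteOrc extends Orc {
  attackPower(): number { return super.attackPower() * 2; }
}

// The same behaviour as plain data plus one function: a new monster kind
// becomes a row of data instead of a new subclass.
interface MonsterSpec {
  name: string;
  hp: number;
  attackPower: number;
}

const bestiary: MonsterSpec[] = [
  { name: &quot;orc&quot;, hp: 20, attackPower: 5 },
  { name: &quot;elite orc&quot;, hp: 40, attackPower: 10 },
];

function damageDealt(monster: MonsterSpec): number {
  return monster.attackPower;
}

console.log(new EliteOrc(40).attackPower()); // 10
console.log(damageDealt(bestiary[1]));       // 10
</code></pre> Neither shape is over-engineered in the abstract; as argued above, context decides. The hierarchy only earns its keep if behaviour genuinely varies per kind in ways a table of data cannot express.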
Anti-Aging Startup Raises $116M With Bezos Backing
Press release from UNITY Biotechnology:<p><a href="http:&#x2F;&#x2F;www.prnewswire.com&#x2F;news-releases&#x2F;unity-biotechnology-announces-116-million-series-b-financing-300352831.html" rel="nofollow">http:&#x2F;&#x2F;www.prnewswire.com&#x2F;news-releases&#x2F;unity-biotechnology-...</a><p>Which includes reference to the latest research they&#x27;ve published, on senescent cells in atherosclerosis pathology. This crowd likes timing their press and their research releases.<p>My comments:<p>------- The whispers of late have had it that UNITY Biotechnology was out raising a large round of venture funding, and their latest press release shows that this was indeed the case. The company, as you might recall, is arguably the more mainstream of the current batch of startups targeting the clearance of senescent cells as a rejuvenation therapy. The others include Oisin Biotechnologies, SIWA Therapeutics, and Everon Biosciences, all with different technical approaches to the challenge. UNITY Biotechnology is characterized by a set of high profile relationships with noted laboratories, venture groups, and big names in the field, and, based on the deals they are doing, appear to be focused on building a fairly standard drug development pipeline: repurposing of apoptosis-inducing drug candidates from the cancer research community to clear senescent cells, something that is being demonstrated with various drug classes by a range of research groups of late. Senescent cells are primed to apoptosis, so a nudge in that direction provided to all cells in the body will have little to no effect on normal cells, but tip a fair proportion of senescent cells into self-destruction. Thus the UNITY Biotechnology principals might be said to be following the standard playbook to build the profile of a hot new drug company chasing a hot new opportunity, and clearly they are doing it fairly well so far.<p>So this, I think, bodes very well for the next few years of rejuvenation research. It indicates that at least some of the biotechnology venture community understands the likely true size of the market for rejuvenation therapies, meaning every human being much over the age of 30. It also demonstrates that there is a lot of for-profit money out there for people with credible paths to therapies to treat the causes of aging. It remains frustrating, of course, that it is very challenging to raise sufficient non-profit funds to push existing research in progress to the point at which companies can launch. This is a problem throughout the medical research and development community, but it is especially pronounced when it comes to aging. The SENS view of damage repair, which has long incorporated senescent cell clearance, is an even tinier and harder sell within the aging research portfolio - but one has to hope that funding events like this will go some way to turn that around.<p>From the perspective of being an investor in Oisin Biotechnologies, I have to say that this large and very visible flag planted out there by the UNITY team is very welcome. The Oisin team should be able to write their own ticket for their next round of fundraising, given that the gene therapy technology they are working on has every appearance of being a superior option in comparison to the use of apoptosis-inducing drugs: more powerful, more configurable, and more adaptable. When you are competing in a new marketplace, there is no such thing as too much validation. 
The existence of well-regarded, well-funded competitors is just about the best sort of validation possible. Well funded competitors who put out peer-reviewed studies on a regular basis to show that the high-level approach you and they are both taking works really well is just icing on the cake. Everyone should have it so easy. So let the games commence! Competition always drives faster progress. Whether or not I had skin in this game, it would still be exciting news. The development of rejuvenation therapies is a game in which we all win together, when new treatments come to the clinic, or we all lose together, because that doesn&#x27;t happen fast enough. We can and should all of us be cheering on all of the competitors in this race. The quality and availability of the outcome is all that really matters in the long term. Money comes and goes, but life and health is something to be taken much more seriously.<p>Now with all of that said, one interesting item to ponder in connection to this round of funding for UNITY is the degree to which it reflects the prospects for cancer therapies rather than the prospects for rejuvenation in the eyes of the funding organizations. In other words, am I being overly optimistic in reading this as a greater understanding of the potential for rejuvenation research in the eyes of the venture community? It might be the case that the portions of the venture community involved here understand the market for working cancer drugs pretty well, and consider that worth investing in, with the possibility of human rejuvenation as an added bonus, but not one that is valued appropriately in their minds. Consider that UNITY Biotechnology has partnered with a noted cancer therapeutics company, and that the use of drugs to inducing apoptosis is a fairly well established approach to building cancer treatments. That is in fact why there even exists a range of apoptosis-inducing drugs and drug candidates for those interested in building senescent cell clearance therapies to pick through. Further, the presence of large numbers of senescent cells does in fact drive cancer, and modulating their effects (or removing them) to temper cancer progress is a topic under exploration in the cancer research community. So a wager on a new vision, or a wager on the present market? It is something to think about. -------
Ask HN: How did you improve the quality of your code base?
This might be difficult to do but can you explain your overall architecture design? What sorts of issues are you having? Where are the pain points?<p>Going from top to bottom:<p>&gt; Our company builds a java openGL CAD&#x2F;CAM application suite for windows desktops<p>If it&#x27;s an application suite then, from my understanding, you&#x27;ll be building a main set of libraries and then a set of tools that all use these libraries. Have you considered a hierarchical plugin design? Have a main application that starts and sets up all of your main rendering and CAD&#x2F;CAM magic. Then go from there to working out the simplest of APIs for what everything actually <i>needs</i> access to.<p>Your main application basically just manages UIs&#x2F;drawing to an OpenGL port. From there you can load modules to do other things. If you abstract what is needed then each module should only need to define how a functionality is executed, not where and what it should look like in the UI. For instance, refactor your code to follow a structure such as:<p><pre><code>Master UI System (Exposes: &quot;Options&quot;, &quot;Renderables&quot;, &quot;Views&quot;)
  -&gt; Drafting Plugin (Exposes: &quot;Models&quot;, &quot;Collision&quot;, &quot;Faces&quot;)
  -&gt; CAM Plugin (Exposes: &quot;Routing Paths&quot;)
</code></pre> Master UI does not need to know anything about the Drafting Plugin or the CAM Plugin.<p>Drafting Plugin needs to know about Master UI but nothing about CAM Plugin.<p>CAM Plugin needs to know about Master UI and Drafting Plugin.<p>That&#x27;s what I would try and do if this were a new project, but this isn&#x27;t one, and uprooting your entire code base (or even any recognizable percentage of it) is unreasonable.<p>&gt; We have a couple tens of million LOCs, with ~50 projects and 1000s of packages<p>If you&#x27;ve got that many packages then you might want to find out what sorts of abstractions are being used but not working correctly, and remove them&#x2F;replace them with simpler solutions. How many of these packages are filled with interfaces&#x2F;abstract classes&#x2F;implementations of interfaces?<p>&gt; After ~10 years of neglection we need a strategy to increase the code quality (lots of dependencies, feature envying inheritance hierarchies, spaghetti code, similar problem are solved in myriad ways, all that jazz).<p>One at a time:<p>&gt; lots of dependencies<p>Slowly replace dependencies by either abstracting features further, replacing them with new standard library features, or implementing other solutions to the same problems. Every dependency is an added layer of complexity in my book so it&#x27;s best to avoid this as much as possible.<p>&gt; feature envying inheritance hierarchies<p>This comes as a side effect of not knowing what a level of abstraction is actually meant to be doing. Have a team meeting and ask what each team thinks the actual problems that need to be solved are. The people knee deep in crap will have a better idea of what&#x27;s the correct or natural abstraction for these cases if the ones currently being used are unnatural. It may just be that the code base has had too many large scale changes or even just too many features pushed in at once (which for a CAM&#x2F;CAD tool is definitely not unheard of; this is a very specialized and hard task).<p>&gt; spaghetti code<p>Get some sort of static analyzer. I remember one group I worked with used Sonar. Also remember that the best code quality tool is a good agreed upon set of standards. 
Some things that have worked for me on group projects I&#x27;ve worked on have been: avoid complicated constructors, always default a variable to final, avoid complicated logic statements, always exit early rather than filtering inside a for loop, and use all the up-to-date constructs to aid with code clarity (try(stream), for(var:set), and more).<p>&gt; similar problem are solved in myriad ways<p>If there is one problem that exists in two places this is an opportunity for you to pull the part out, abstract it, and use it as a library. This is a double edged sword since these two parts actually need to contain the same problems, which sometimes is not the case.<p>Now to the nitty gritty:<p>&gt; How do you measure code quality? How do you interpret the metrics?<p>(How many times does the code result in an error) * (The time in hours that it takes to debug the code).<p>Larger number is worse. Keep a notebook&#x2F;log of these times, graph them, and use that as a map to decide what is worth refactoring. If a piece of code &quot;just works&quot; but looks ugly it can wait to be refactored if there is another piece of code that looks &quot;visually appealing&quot; while still causing daily side effects in the active development of the project.<p>&gt; What are good tools for a windows&#x2F;java&#x2F;eclipse dev environment?<p>I&#x27;ve always managed ANT scripts for my group projects since they are very very cross platform. Maven works great but I&#x27;m not a fan of the complexity of install for non-linux users. Also check out IntelliJ for built-in Maven support.<p>&gt; How do you act on the metrics and actually improve code quality?<p>Change your code by coming at it from a different perspective. If that perspective yielded a more promising piece of code (one that is easier to understand, causes fewer side effects, and uses less external&#x2F;non-standard functionality) then you keep it. A lot of the code I write is code I throw away. This is much harder to justify to business people but it&#x27;s an important part of the process to sketch up what you think <i>might</i> work even if the attempts aren&#x27;t always fruitful.<p>&gt; Can you recommend any resources of success stories on how companies managed to increase code quality of a big, tangled system?<p>Check out the U.S. Digital Service for the only recent success story that comes to mind [1].<p>If anyone knew the secret sauce they wouldn&#x27;t give it out for free. The ability to &quot;Fix&quot; all the &quot;Broken&quot; projects isn&#x27;t an issue on the scale that we think it is. A large portion of all technology-related projects fail [0]. If anyone could prove they were able to reliably fix these issues they&#x27;d be billionaires overnight.<p>[0] - <a href="http:&#x2F;&#x2F;www.zdnet.com&#x2F;article&#x2F;study-68-percent-of-it-projects-fail&#x2F;" rel="nofollow">http:&#x2F;&#x2F;www.zdnet.com&#x2F;article&#x2F;study-68-percent-of-it-projects...</a><p>[1] - <a href="http:&#x2F;&#x2F;www.theatlantic.com&#x2F;technology&#x2F;archive&#x2F;2015&#x2F;07&#x2F;the-secret-startup-saved-healthcare-gov-the-worst-website-in-america&#x2F;397784&#x2F;" rel="nofollow">http:&#x2F;&#x2F;www.theatlantic.com&#x2F;technology&#x2F;archive&#x2F;2015&#x2F;07&#x2F;the-se...</a><p>Edit: Removed &quot;What do you mean by thousands of packages?&quot;<p>Looking forward to what you think of all this.
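P.S. To make the &#x27;exit early rather than filter&#x27; advice above concrete, here is a minimal sketch. It is written in TypeScript purely for brevity; the identical structure applies in Java (with final locals, enhanced for loops and try-with-resources), and the Order type is invented for illustration.<p><pre><code>interface Order {
  id: string;
  items: string[];
  paid: boolean;
}

// Nested version: every new rule adds another level of indentation.
function shipNested(order: Order | null): string {
  if (order !== null) {
    if (order.paid) {
      if (order.items.length &gt; 0) {
        return &quot;shipping &quot; + order.id;
      } else {
        return &quot;nothing to ship&quot;;
      }
    } else {
      return &quot;not paid&quot;;
    }
  } else {
    return &quot;no order&quot;;
  }
}

// Exit-early version: each rule is one flat, readable guard.
function shipEarlyExit(order: Order | null): string {
  if (order === null) return &quot;no order&quot;;
  if (!order.paid) return &quot;not paid&quot;;
  if (order.items.length === 0) return &quot;nothing to ship&quot;;
  return &quot;shipping &quot; + order.id;
}

console.log(shipEarlyExit({ id: &quot;A-1&quot;, items: [&quot;widget&quot;], paid: true }));
</code></pre> The exit-early version stays flat as rules are added, which is what makes it cheaper to read, review and cover with tests.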
General questions about the Airbnb Community Commitment
Airbnb is a essentially &quot;middle man&quot; - a company that facilitates private individuals who wish to rent out their homes to strangers. This is a wonderful and much appreciated way of connecting strangers and building civil society bonds. But dictating to its members exactly on what terms and to whom they should be renting out their own bedrooms and homes seems to be, no matter how well-intentioned, to be self-defeating: it fosters an atmosphere of distrust and removes renters&#x27; freedom to exercise discretion about who stays in their home.<p>It is inevitable that some renters will bring racist or xenophobic or other prejudices to the table when they decide who to rent their homes to. But there will be a whole range of positive and negative preferences about the type of person one wants to stay in one&#x27;s home, many of which many not be motivated by racism or xenophobia, but by personal judgments about who one is prepared to open one&#x27;s home to.<p>Airbnb is trying to micro-manage how people exercise their judgment about who is a good fit for their home. They are trying to force people to trust everyone equally or to feel equally well-disposed toward all potential renters, as a condition for using their service. They may have the LEGAL right to do this, but it will be impossible in many cases to enforce with any reliability.<p>Besides the notorious difficulty of enforcing this sort of discrimination edict without high levels of inteference and second-guessing of complex judgments, in my view, the new policy is likely to undermine, not promote, greater trust and respect betweeen renters and landlords, by fostering a more adversarial culture in Airbnb homes, where any refusal to rent is met with an air of suspicion and resentment and exclusion, as though opening your home to someone (even for money) was not a delicate matter.<p>Cultural change and reform comes through education and experience. Airbnb permits people to be exposed to different cultures and values by opening up their home to strangers (and receiving payment in return).<p>But I see no reason why Airbnd should appoint itself a sort of &quot;moral policeman&quot; to ensure that all renters are equally open to different cultures and communities. That kind of openness can be encouraged but it is quite absurd to think that it can be truly fostered in a positive way by getting people to tick a &quot;community commitment&quot; box before renting out their homes.<p>In fact, I would argue that this new &quot;community commitment&quot; could be considered ethically dubious at best, since it will provide a strong reason to people who rely on Airbnb but wish to exercise their own judgment about who stays in their home to lie on the website. Furthermore, the effort to get people to formally &quot;commit&quot; to what is essentially an ethical attitude in a quasi-contractual way, as a condition for using this type of renting &quot;middle man&quot; is an extraordinary act of over-reach, it seems to me, insofar as it essentially means that Airbnb feel they can appoint themselves the arbiter and judge of people&#x27;s private motives and prejudices, whether through some formal declaration on their part, or through a statistical analysis of their behaviour.<p>Which raises the question, if Airbnb is worried about unjust discrimination in society at large, why does it think that setting itself up as a sort of &quot;thought police&quot; for its customers is a wise move? 
How can they not anticipat the inevitable resistance and backlash that will unleash, and its almost certain failure in practice to reform people&#x27;s behaviour and attitudes (tick the box and move on)? And what does this sort of policy tell us about the type of authority that a middle man THINKS he has over his clients and their values, preferences, and lifestyles choices?<p>Is there some sort of &quot;saviour&quot; complex going on here, where a company thinks they must engage in an aggressive campaign to control their users&#x27; mindsets and micromanage their own decisions about who to rent their homes to? Or is the new Airbnb policy, as some have suggested, just a response to some legal or social pressures to &quot;look good and inclusive&quot;?<p>Whatever the answer to these questions, it strikes me that setting aside the legality of this new policy, the level of micromanagement and control it extends into clients&#x27; USE of the service and indeed into their values and attitudes regarding hosting people in their home, suggests a lack of trust in people&#x27;s goodness and an unwillingness to take risks on people&#x27;s goodness, to give them reasonable discretion to exercise their own judgments in the sphere of their own home (even if it is being rented out for profit).<p>Indeed, this sort of campaign, which comes close to being a sort of indirect &quot;mind control,&quot; seems to bespeak an impatience with the messiness of human life and human relationships, and of course impatience with idiosyncratic and unstructured nature of the motives of people who rent out their own homes. Sometimes, in order to foster or preserve an atmosphere of trust and respect in general, you have to allow within a system for the possibility that some people will exercise bad or unfair judgments, or treat some people without the full respect they deserve. Making a rule to compel everyone to be respectful is not always the best way to foster a culture of respect.<p>Turning a modest facilitating service into a crusade for full inclusion and a change in cultural mindsets completely changes the nature of the Airbnb service, bringing it into the zone of a sort of &quot;mind police&quot; whose edicts will frequently be impossible to enforce.<p>It is an excellent example of the trend in our society to attempt to control from on high, with relatively crude regulations, the delicate flow of human relationships and attitudes between different groups, ethnicities, value identifications, religions, etc.<p>To be clear, I am not advocating racisms or invidious discrimination, but I am suggesting that (a) some degree of discrimination and profiling is a fact of life especially in the business of renting out one&#x27;s own home, and it is not necessarily invidious, especially in situations of sparse information; and (b) to the extent that people do engage in invidious forms of discrimination when they rent out their homes, Airbnb is certainly not the appropriate entity to be rooting this out systematically - education and cultural reform must be carried out by winning over people&#x27;s hearts and minds, and this work is already being done by the mere fact of cultural exchange permitted by the Airbnb network. Why spoil that work by implementing a policy that is likely to foster distrust, suspicion, and resentment among renters and proprietors?
CenturyLink to Buy Level 3 for $34B
I was an employee for Level 3, coming from the Global Crossing acquisition, for around 17 years, choosing to leave early last year (and unlike most former Level 3&#x2F;Global Crossing employees, this choice was not forced upon me).<p>Internally (and by, internally, I mean within my team, not within management or anyone who makes decisions of this nature), we&#x27;d always seen CenturyLink as an interesting prospect for merger. The two companies&#x27; footprints and businesses appeared to compliment each other. It was generally dismissed out of hand because of the consumer side of CentryLink. Level 3 (and even less so with Global Crossing), focused on carriers and Fortune 50-100 businesses as their core and shied away from the more expensive, less profitable consumer facing pieces.<p>A bit of history for those who weren&#x27;t around in the 90s: When thinking CenturyLink, think Qwest (and commercials about the little motel in the desert with &quot;Cable TV&quot; replaced by media services delivering every television show and movie produced in the history of ever). They were one of the formerly local telecoms that expanded into long distance&#x2F;fiber&#x2F;internet after the 1996 telecom deregulation[0].<p>Level 3&#x27;s business is Carrier and Enterprise with much of the Enterprise piece coming from the Global Crossing side of it because, at the time, we effectively couldn&#x27;t compete with Level 3. We&#x27;d come in to bid a project at a price we could eek out a small profit on and would be undercut because they owned far more local which had the effect of lower cost of access and lower complexity for the company we were selling to. Our focus was Enterprise where the margins were higher, we could work with other carriers to provide the services (often Level 3) and step in with a better understanding (and willingness to &quot;do practically anything&quot; to win the contract -- our CEO, after all, was John Legere and the way he runs T-Mobile came from the way he ran Global Crossing: &quot;Hug the Customer&quot; was a mantra).<p>Level 3 (like all telecoms) is a run to the bottom as far as price is concerned. Cost of access is pretty much <i>it</i> in this business. The expense is so large it eclipses pretty much everything else. Being able to move more things onto your own network reduces that expense (and in-turn results in revenue from others paying <i>you</i> for access to those local components). This fits well with CenturyLink.<p>The rumblings of this deal internally were strong over the last few months (I don&#x27;t work there any longer and <i>I</i> heard the rumors[1]). Since this had come up from time to time, I wasn&#x27;t surprised to hear it again and it still came with the difficulty of figuring out how a deal like that would work. Internally, most employees assumed it&#x27;d be a Level 3 purchase of CenturyLink, but a look at the fundamentals of the two businesses made something like that wishful thinking on the part of employees who are really tired of all of the layoffs and really didn&#x27;t want to see a large one that would come as a result of being purchased.<p>This will be an interesting transition for the employees of Level 3 proper. They&#x27;re used to doing the buying and they&#x27;re actually <i>more</i> used to being the company that comes in, strips the company they purchased (of staff) and imposing the &quot;Level 3 way&quot; on the purchased entity. 
It was clear that was their position during the Global Crossing merger and morale became greatly affected when some Global Crossing employees took leadership positions and imposed &quot;The Global Crossing way&quot; on Level 3. This resulted in a pretty dramatic culture clash that wasn&#x27;t really resolved even by the time I left (which was shortly after the TW Telecom merger!). At least at that point they were <i>still</i> suffering getting the various pieces&#x2F;parts of the company together and operating as a single, well oiled, machine. Adding this to the mix will further complicate those efforts. Level 3 was known for being good at making a purchase and bad at integrating that purchase. I think they did a better job with the Global Crossing and TW acquisitions, but &quot;better&quot; was in comparison to the &quot;abysmal&quot; job they did with the six that were there prior. They still have a history (and current?) reputation of shedding jobs about every 6 months (5-10% across the board) that despite having a better few years, didn&#x27;t change after I left[3]. They have difficulties hiring top talent as a result, though I&#x27;m sure this problem exists across telecom unless you&#x27;re one of the two big guys.<p>Apologies for the lengthy and poorly revised post. The speculation contained within is my own and has not been influenced by internal employees -- and may be wildly off since I haven&#x27;t been an employee there for well over a year, but I thought I&#x27;d share in case it spurs further discussion that irons out some of the wrinkles. This will be an interesting change in the landscape of telecom, putting a really large competitor against some of the &quot;bigs&quot; who&#x27;s reputation is best summed up by this SNL sketch: <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=CHgUN_95UAw" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=CHgUN_95UAw</a><p>[0] Which is in and of itself a terrible description. It was less a &quot;deregulation&quot; than a &quot;re-regulation&quot; and like all government regulations of this kind, it defined a set of &quot;winning and losing business strategies&quot; in this sector that were different than the strategies that existed before. And a set of tricks&#x2F;arbitrages that would create entirely new businesses designed to provide nearly free services by leveraging cost of access (in a quasi tariff style).<p>[1] Before I get anyone in trouble, the rumors I heard were not from people who would have been in any position to know about something like this and were little more than the speculation of previous years ... along the lines of &quot;wouldn&#x27;t it be great?&quot;. I was able to connect the dots, though, by discovering that certain of my former coworkers were unusually busy -- so busy that I couldn&#x27;t get in touch with them due to their workload. Knowing what they were often involved in, and combining the increase in talk about CenturyLink led me to fully believe this deal was going to land at some point. As a result, I didn&#x27;t do any stock transactions to avoid the appearance of having &quot;insider information&quot; that I didn&#x27;t reliably or accurately have.<p>[2] A look at the fundamentals of both business indicated that as wishful thinking. It was clear to me if there was going to be a purchase, it was Level 3 who was getting bought.<p>[3] This was a small bit of my motivation for leaving. 
I&#x27;d been through 30-35 &quot;Reductions in Force&quot; and came out still employed and had continued to have the confidence of management up to the VP level, so I was not concerned about losing my job, but all of those &quot;RIF&quot;s take a toll on you. I&#x27;m still amazed, to this day, at how efficient our process for laying off employees had become. We had entire systems&#x2F;applications built for the task. That sort of thing bleeds into the culture of the company, and it was a culture I had grown very tired of.
R 3.3.2 is released
For the lazy:<p>NEW FEATURES<p><pre><code>  extSoftVersion() now reports the version (if any) of the readline library in use.
  The version of LAPACK included in the sources has been updated to 3.6.1, a bug-fix release including a speedup for the non-symmetric case of eigen().
  Use options(deparse.max.lines=) to limit the number of lines recorded in .Traceback and other deparsing activities.
  format(&lt;AsIs&gt;) looks more regular, also for non-character atomic matrices.
  abbreviate() gains an option named = TRUE.
  The online documentation for package methods is extensively rewritten. The goals are to simplify documentation for basic use, to note old features not recommended and to correct out-of-date information.
  Calls to setMethod() no longer print a message when creating a generic function in those cases where that is natural: S3 generics and primitives.
</code></pre> INSTALLATION and INCLUDED SOFTWARE<p><pre><code>  Versions of the readline library &gt;= 6.3 had been changed so that terminal window resizes were not signalled to readline: code has been added using a explicit signal handler to work around that (when R is compiled against readline &gt;= 6.3). (PR#16604)
  configure works better with Oracle Developer Studio 12.5.
</code></pre> UTILITIES<p><pre><code>  R CMD check reports more dubious flags in files ‘src&#x2F;Makevars[.in]’, including -w and -g.
  R CMD check has been set up to filter important warnings from recent versions of gfortran with -Wall -pedantic: this now reports non-portable GNU extensions such as out-of-order declarations.
  R CMD config works better with paths containing spaces, even those of home directories (as reported by Ken Beath).
</code></pre> DEPRECATED AND DEFUNCT<p><pre><code>  Use of the C&#x2F;C++ macro NO_C_HEADERS is deprecated (no C headers are included by R headers from C++ as from R 3.3.0, so it should no longer be needed).
</code></pre> BUG FIXES<p><pre><code>  The check for non-portable flags in R CMD check could be stymied by ‘src&#x2F;Makevars’ files which contained targets.
  (Windows only) When using certain desktop themes in Windows 7 or higher, Alt-Tab could cause Rterm to stop accepting input. (PR#14406; patch submitted by Jan Gleixner.)
  pretty(d, ..) behaves better for date-time d (PR#16923).
  When an S4 class name matches multiple classes in the S4 cache, perform a dynamic search in order to obey namespace imports. This should eliminate annoying messages about multiple hits in the class cache. Also, pass along the package from the ClassExtends object when looking up superclasses in the cache.
  sample(NA_real_) now works.
  Packages using non-ASCII encodings in their code did not install data properly on systems using different encodings.
  merge(df1, df2) now also works for data frames with column names &quot;na.last&quot;, &quot;decreasing&quot;, or &quot;method&quot;. (PR#17119)
  contour() caused a segfault if the labels argument had length zero. (Reported by Bill Dunlap.)
  unique(warnings()) works more correctly, thanks to a new duplicated.warnings() method.
  findInterval(x, vec = numeric(), all.inside = TRUE) now returns 0s as documented. (Reported by Bill Dunlap.)
  (Windows only) R CMD SHLIB failed when a symbol in the resulting library had the same name as a keyword in the ‘.def’ file. (PR#17130)
  pmax() and pmin() now work with (more ?) classed objects, such as &quot;Matrix&quot; from the Matrix package, as documented for a long time.
  axis(side, x = D) and hence Axis() and plot() now work correctly for &quot;Date&quot; and time objects D, even when “time goes backward”, e.g., with decreasing xlim. (Reported by William May.)
  str(I(matrix(..))) now looks as always intended.
  plot.ts(), the plot() method for time series, now respects cex, lwd and lty. (Reported by Greg Werbin.)
  parallel::mccollect() now returns a named list (as documented) when called with wait = FALSE. (Reported by Michel Lang.)
  If a package added a class to a class union in another package, loading the first package gave erroneous warnings about “undefined subclass”.
  c()’s argument use.names is documented now, as belonging to the (C internal) default method. In “parallel”, argument recursive is also moved from the generic to the default method, such that the formal argument list of base generic c() is just (...).
  rbeta(4, NA) and similarly rgamma() and rnbinom() now return NaN’s with a warning, as other r&lt;dist&gt;(), and as documented. (PR#17155)
  Using options(checkPackageLicense = TRUE) no longer requires acceptance of the licence for non-default standard packages such as compiler. (Reported by Mikko Korpela.)
  split(&lt;very_long&gt;, *) now works even when the split off parts are long. (PR#17139)
  min() and max() now also work correctly when the argument list starts with character(0). (PR#17160)
  Subsetting very large matrices (prod(dim(.)) &gt;= 2^31) now works thanks to Michael Schubmehl’s PR#17158.
  bartlett.test() used residual sums of squares instead of variances, when the argument was a list of lm objects. (Reported by Jens Ledet Jensen).
  plot(&lt;lm&gt;, which = *) now correctly labels the contour lines for the standardized residuals for which = 6. It also takes the correct p in case of singularities (also for which = 5). (PR#17161)
  xtabs(~ exclude) no longer fails from wrong scope, thanks to Suharto Anggono’s PR#17147.
  Reference class calls to methods() did not re-analyse previously defined methods, meaning that calls to methods defined later would fail. (Reported by Charles Tilford).
  findInterval(x, vec, left.open = TRUE) misbehaved in some cases. (Reported by Dmitriy Chernykh.)</code></pre>
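To make a few of those items concrete, here is a minimal R sketch (mine, not part of the release announcement) exercising some of the entries above. Every call and argument comes straight from the NEWS items, so nothing here is invented, but treat it as illustrative rather than exhaustive:<p><pre><code># Report the readline version now surfaced by extSoftVersion()
extSoftVersion()[&quot;readline&quot;]

# Cap how many lines of a call are recorded in .Traceback and other deparsing
options(deparse.max.lines = 5)

# abbreviate() now has a named argument (default TRUE); named = FALSE drops the names
abbreviate(c(&quot;California&quot;, &quot;Connecticut&quot;), minlength = 5, named = FALSE)

# findInterval() with an empty vec and all.inside = TRUE now returns 0s, as documented
findInterval(c(1, 5, 10), vec = numeric(), all.inside = TRUE)

# sample(NA_real_) no longer errors
sample(NA_real_)
</code></pre>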
A New Spin on the Quantum Brain
This reminded me of a talk by David Mermin about Michael E. Fisher: an important statistical physicist and the father of Matthew Fisher, as they mention in the text.<p>As with almost everything David Mermin writes, I especially enjoyed this talk: &quot;My Life with Fisher&quot; [0]. In this talk he describes how great, but scientifically &#x27;unforgiving&#x27;, Michael Fisher was, and how he raised the standards and quality of work of those around him. If he has discussed this at length with his father and if his upbringing instilled in him some of the qualities Mermin attributes to his father, Matthew Fisher should be on to something interesting.<p>Let me share some quotes from that speech:<p>&quot;Wagner and I had tried to explain to Michael that an argument of Pierre’s could be adapted to prove that there could be no spontaneous magnetization in the 2-dimensional Heisenberg model. I hadn’t known Michael for very long at that point, and one of the first things I learned was that you should think twice before claiming to <i>prove</i> something in front of a man who encourages postdocs to show that the free energy <i>exists</i>. He didn’t believe a word of it. Spectral functions, indeed! How did we know those frequency integrals even converged? It soon became evident that we were dealing with a man who knew nothing about quantum field theory, didn’t care one bit that he didn’t, and was convinced that we would be better off ourselves to forget it. Immediately.<p>So in the face of this astonishing attack, we worked backwards, unbundling the result from the conceptual wrappings in which it was enshrouded by some of the great thinkers of the previous decade, peeling off layer after layer, day after day, in the face of unrelenting skepticism, until finally we had it down to a trivial statement about finite dimensional matrices.<p>And then an astonishing change took place. “Publish!” he practically shouted, “it’s very important!” and having learned what it was like to be at the end of a Michael Fisher attack, I suddenly learned what it was like to have him on your side. Freeman Dyson came to town. Michael introduced us. “Mermin and Wagner have proved that there’s no spontaneous magnetization in the 2-dimensional Heisenberg model,” Michael proudly informed him, as Herbert and I basked in his admiration. “Of course there isn’t.” Dyson responded. “But they have <i>proved</i> that there isn’t” Michael insisted. One Dyson eyebrow may have moved up half a millimeter in response. No matter. I was hooked on arguing with Michael Fisher. My life would never be the same.&quot;<p>On Mermin and Ashcroft&#x27;s textbook (one of the best in the field):<p>&quot;One person, however, has influenced almost every chapter. Michael E. Fisher, Horace White Professor of Chemistry, Physics, <i>and</i> Mathematics, friend and neighbor, gadfly and troubadour, began to read the manuscript six years ago and has followed ever since, hard upon our tracks, through chapter, and, on occasion, through revision and re-revision, pouncing on obscurities, condemning dishonesties, decrying omissions, labeling axes, correcting misspellings, redrawing figures, and often making our lives very much more difficult by his unrelenting insistence that we could be more literate, accurate, intelligible, and thorough. 
We hope he will be pleased at how many of his illegible red marginalia have found their way into our text, and expect to be hearing from him about those that have not.&quot;<p>&quot;What does Michael Fisher do when he checks into a hotel room for a night? He rearranges the furniture. He’ll rotate the bed 90 degrees, put the TV in the closet to make more room on the desk, carry the desk over to the window to get more light. He is an inspiration to me. Often I find it valuable to ask myself at difficult moments, what would Michael do? This strategy is not to be confused with that of the “What Would Jesus Do?” movement, though a comparison can be interesting. Often the two questions can lead to quite different answers.<p>Let me give you a recent example of the benefits of asking “What would Michael do?” A few years ago I was at the annual meeting of the Danish Physical Society which took place at a small conference center south of Copenhagen. Each conferee had a little apartment with a tiny attic. Downstairs was a living room and bathroom. Up a narrow ladder was a built-in bed in a room with no light. Since one used the apartment only at night this was an irritating arrangement. I don’t know how Jesus would have coped, but it was pretty clear to me what Michael would have done. So I dragged the mattress and bedding down the ladder, remade the bed on the living room floor, and never climbed up to the attic again. This solution would not have occurred to me if I had not asked myself “What would Michael do?”<p>The next day various Danish conferees complained about the arrangement. Ah, I said, under such trying circumstances you should always ask yourself what Michael Fisher would do. That night the air was filled with mattresses hurtling down ladders. I believe there is now a flourishing “What Would Michael Do?” movement among the Danish physicists.&quot;<p>[0] <a href="https:&#x2F;&#x2F;arxiv.org&#x2F;abs&#x2F;cond-mat&#x2F;0211382" rel="nofollow">https:&#x2F;&#x2F;arxiv.org&#x2F;abs&#x2F;cond-mat&#x2F;0211382</a>
Ask HN: What does a C++ fullstack mean to you?
Found the job posting, &quot;Sr Engineer - Embedded Software in Santa Barbara, CA at Arthrex&quot;.<p>Here&#x27;s the full text:<p>JOB DESCRIPTION<p>Requisition ID: 24241<p>Title: Sr Engineer - Embedded Software<p>Division: Arthrex California Tech<p>Location: ACT Santa Barbara, CA<p>We at Arthrex are looking for an amazing Embedded Application Developer with a solid background in modern embedded application development to join our growing team. We are looking for someone who is self-motivated and who strives for greatness in all aspects of embedded development from low level systems all the way to front-end development. You will be joining a talented group of software developers at a global medical device company and deliver products and tools that help surgeons and their staffs provide great surgical outcomes for their patients. This is an opportunity to make an immediate and lasting impact in all phases of the application development lifecycle.<p>Candidates must take extreme pride in delivering software that provides great value, is scalable, and easy to maintain. Applications are typically written for Linux in C&#x2F;C++, but we do also use some Python and Java (specifically Android) for our User Interface. We are big proponents of open source technologies and other technologies that we are currently using include SQLite, Redis and JSON. An ideal candidate will feel comfortable contributing to all aspects of our stack. We work closely with all members of the larger team including Operations, Mobile Developers, UI&#x2F;UX, QA and project management teams.<p>About You:<p>Passionate about software development, specifically web technologies and web services Motivated, loves to learn, and thrives in a dynamic environment Has a track record of building applications and bringing them to production Wants to be part of a high performing team that makes a difference Some Details:<p>Likes to take an active role in all stages of the application development: conceptualize, design, build, test and release Excellent C&#x2F;C++ programming language skills with Python experience a plus Strong experience with multi-threaded application design Knowledge of embedded programming environment with open source tools Strong Linux operating system skills Java programming for Android would be great, but not required Comfortable using Git Experience with Agile development methodologies About Us<p>Arthrex is a global medical device company and a leader in new product development and medical education in orthopedics. As the software development team, we create innovative products to help support the company in education and research opportunities for our customers.<p>Main Objective:<p>Responsible for full life cycle development of Class I and II medical devices, which includes architectural design, interface design, analysis and simulation, prototyping, design assurance testing, development through production release, and product maintenance. Recognized as technical leader and resource.<p>Essential Duties and Responsibilities:<p>Lead software architect and specification developer to ensure robust, sustainable and scalable design approaches that meet design intent. Lead the design &amp; development of various subsystems of complex multi-process architecture. 
Effectively identify &amp; mitigate potential risks during course of projects Define &amp; develop reliable, efficient &amp; reusable software components in C&#x2F;C++ for Linux targets Identify key system performance bottlenecks, propose effective and scalable strategies to address them, and incorporate these strategies into a programming environment, with emphasis on run-time software layers including drivers, middleware, and APIs. Architect, develop and maintain defined software interfaces with hardware components and firmware. Design optimization through modeling, simulation and analysis. Experience with agile methods as they relate to software development and SCM practices. Input and direction to other members of the engineering staff to assist them in their assignments and provide them with learning experience. Support the development team to ensure the team exceeds expectations &amp; delivers high quality solutions on schedule. Coordination with in-house and contract developers in distributed development environment. Provide expert consultation in one or more areas of design, development, and implementation of technical products or systems. Recommend alterations to development and design to improve quality of products and&#x2F;or procedures. Support development of budgets and timelines for projects. Key technical contributor to multifunctional new product project teams through project technical feasibility analysis, initiation, planning, execution, and termination, adhering closely to project timeline and budget. Support design history file deliverables for assigned projects, adhering to design control procedures. Provide Regulatory department technical support for assigned projects as needed. Support Marketing and Product Management with technical information to be used for training and marketing of assigned products. Support surgeon and distributor customers by training and&#x2F;or educating on technical aspects of assigned products as needed. Report progress and status of assigned projects on a timely basis. Some required domestic travel to attend trade shows and visit established accounts as well as prospective accounts. International travel may be required. Incidental Duties:<p>The above statements describe the general nature and level of work being performed in this job. They are not intended to be an exhaustive list of all duties, and indeed additional responsibilities may be assigned, as required, by management.<p>Education and Experience:<p>Minimum of a Bachelor of Science Degree in Computer Science or a related technical discipline required; MS preferred.<p>Minimum of 7 years of relevant product development experience is required.<p>Knowledge and Skill Requirements&#x2F;Specialized Courses and&#x2F;or Training:<p>Expert experience in Linux OS Experience in embedded system design, bring-up, debugging, analysis and performance tuning. An understanding of the design issues and tradeoffs at the hardware&#x2F;software boundary in real-time, high-performance systems Expert C&#x2F;C++ programming and problem solving skills. Python and Java experience desired but not required. Database (SQL) experience desired but not required. Multithreaded Linux systems programming experience Strong troubleshooting skills on hardware, Linux configuration, peripheral device and network configuration Knowledge of embedded programming environment using open source tools. 
Expert skills in debugging, troubleshooting, and system optimization Well-versed in Unit Test Creation and working with Continuous Integration environments (Jenkins) Experience with, knowledge of and discipline in standard System Development Lifecycle practices including translation of business requirements into a System Design document, Source Code Version Control (Subversion, Git, etc.), and maintenance. Experience working in an Agile environment (Scrum, Lean or XP) Experience with Test-Driven Development desired but not required. Experience conducting and participating in Code Review sessions Excellent problem solving skills and strong verbal&#x2F;written communication skills Machine, Tools, and&#x2F;or Equipment Skills:<p>Knowledge of surgical equipment and instrumentation, hardware debuggers, software development environments and debugging tools, static code analysis, network protocols and hardware interfaces.<p>Bench top testing and troubleshooting with typical lab equipment.<p>Reasoning Ability:<p>Ability to define problems, collect data, establish facts, and draw valid conclusions. Ability to interpret an extensive variety of technical instructions in mathematical or diagram form and deal with several abstract and concrete variables.<p>Mathematical Skills:<p>Ability to comprehend and apply mathematical principles to the degree required to perform the job based upon job requirements.<p>Language and Communication Skills:<p>Ability to comprehend and apply language skills to the degree required to perform the job based upon the job requirements listed above. Ability to verbally communicate ideas and issues effectively to other team members and management. Ability to write and record data and information as required by procedures.<p>All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
One Road Out of Depression
Nutrition for me has been a big one. I&#x27;ve only very gradually realised the enormous impact my diet has had on my mood and cognition in general.<p>Sure, I&#x27;ve heard a million times before that a good diet, sleep[1], and exercise[2] have a great impact on one&#x27;s mood and brain function. And if I was ever challenged on it, I would have said that I believed it. But I never <i>acted</i> like I believed in it, until I actually started to change and improve each of these areas and felt the impact for myself.<p>So for anyone suffering depression (or other mental&#x2F;cognitive issues), I strongly recommend you take a very thorough and serious look at what you&#x27;re eating, and consider the possibility that you might be deficient in some nutrients that are good for you or maybe getting too many that are bad for you.<p>Nutritional advice is unfortunately all over the place, and it&#x27;s very difficult to find any kind of consensus on what&#x27;s actually good and what&#x27;s bad. Fortunately, you can simply experiment on yourself, and try various things that are widely regarded as &quot;healthy&quot; and see how they affect you (just thoroughly do your research first and <i>be safe</i>!).<p>Doing this does take motivation, something very depressed people don&#x27;t tend to have much of. So in whatever way works for you, you have to first get motivated enough to seriously want to make a change and do the hard work it takes to get there. Perhaps that way is medication[3] or therapy[4]. Once you have the real motivation to change, the really hard work begins.<p>In my case, all my life I had verious allergies which kept me from eating certain foods which I later found out were really critical for brain function. In addition, I was a really picky eater, and didn&#x27;t like to eat a lot of food which was good for me. That made it worse. Even worse yet, I didn&#x27;t take my diet seriously, and ate lots of things which I knew were bad for me and on top of that didn&#x27;t have a very varied diet.<p>All of that eventually caught up to me, and I suffered from a variety of medical conditions which I&#x27;m discovering are diet-related. I&#x27;m slowly making positive changes and am seeing impressive results. I&#x27;m still nowhere near where I want to be with my mental and physical health. But both are improving and I&#x27;ve finally gotten interested in diet and nutrition, investigating them, and am taking them seriously.<p>Thanks to improvements in nutrition, my mood has improved a lot, I am more motivated, and have a lot more energy than I used to. My physical health is improving also. As I eat more nutritiously, I hope to see even more benefits in the long run.<p>Some other tips which, I think, have saved my life over the years:<p>The most important one is the ability to gain perspective. A lot of depressed people tend to get stuck in a sort of tunnel vision and magnify their problems all out of proportion, thinking that theirs are the most important, only, and worst problems in the world. I believe my study of philosophy, psychology, religion, history, my experience in living abroad, and interest in the fate, outlook and suffering of others has repeatedly helped me to realize that my problems really aren&#x27;t so bad when compared to those of a lot of other people throughout the world and through history. Over and over again I&#x27;ve seen that it can always be worse, and in many ways even in my worst and darkest days, I&#x27;m very, very fortunate. 
At the same time I recognize that my pain is real, and can be very severe. But it will end. This leads to the next point.<p>Over the decades of my life, I&#x27;ve had many run-ins with depression. When I was young, it often felt like there was no light at the end of the tunnel, that the depression would never end, and that there was no way out. But eventually it did get better. This cycle has repeated many times for me now, but now I have evidence from my own experience that it always gets better. Time does heal all wounds. So now when I get depressed, I try to remind myself of that and try to keep perspective. I try to just make it to the next day.<p>I&#x27;ve found that meditation helps. It can help with mental and physical pain, and sometimes helps me to break out of of a cycle of feeling sorry for myself and dwelling on the past. But at the same time, I don&#x27;t believe it nor any of the other techniques here are a complete answer, as they can be a way of avoiding dealing with important issues, which should be dealt with in therapy.<p>Journaling helps. I&#x27;ve often felt a lot of relief by writing down what I&#x27;ve been feeling or thinking -- things that I had a hard time admitting to others. More recently, I&#x27;ve started using a portable voice recorder to just talk in to about the things that are on my mind, and do so much faster and more freely than I can write. That&#x27;s helped a lot.<p>There was a time when I was in therapy that I kept a dream journal, and analysed my dreams with the help of the therapist. I can definitely recommend that as a way of gaining insight in to one&#x27;s own mind.[5]<p>Talking with someone on a crisis hotline can help, but shouldn&#x27;t be used as a substitute for therapy.. more as an emergency measure. On the other hand, if you&#x27;re not in therapy and have no one to talk to, it can definitely be a lot better than nothing.<p>Also, I try not to dwell on the past, and rather look to the future. I try to learn the lessons that are there to learn from the past, and then move on. Looking to the past with the aid of a therapist, however, can be very constructive, and I consider that to be quite different and a lot better than simply going in an endless loop over the same events in the past on your own, without making progress and without learning anything. It&#x27;s that latter, unconstructive type of dwelling on the past that I try to avoid.<p>I try to be happy with myself, enjoy my own company, spend a lot of time pursuing my own interests, and seeking out new ones. This helps to deal with boredom, low self esteem, and loneliness, which have at times been major contributing factors to my depression.<p>Helping others can be a great way to get out of your own problems, to recognize how bad others have it, to feel solidarity with them, and to feel positive about making a difference and being needed. I can definitely recommend volunteering as a way to help oneself feel better in all sorts of ways.<p>Finally, what&#x27;s helped me a lot is to keep busy with something (like work and&#x2F;or hobbies). I don&#x27;t think this is ultimately super constructive, especially if you keep busy at the cost of introspection and really facing your demons and dealing with aspects of your life you really have to deal with. But it can very effectively keep depression at bay -- at least it has done so repeatedly for me.. until I burn out and am forced to take a reassessment of my life and deal with the issues I&#x27;ve been putting off. 
So I only reluctantly mention it here. The best, I think, is to keep busy with something that&#x27;s really fulfilling and is really in line with your highest ethics, goals, and motivations. I haven&#x27;t found my way to that yet.<p>[1] - Sleep is super important, and I try to get as much as possible because I instantly see the effect on my mood and my mind when I get little or bad quality sleep for a long time. Getting enough sleep (ideally about 10 hours for me) is very difficult when working in tech, at most jobs, and I see sleep deprivation as one of the major downsides of working in this profession.<p>[2] - I&#x27;ve experienced great improvement in my mood when regularly doing intensive exercise, like strength training and aerobic exercise. Unfortunately, I&#x27;ve not able to make a long-term habit of it. It&#x27;s worked for me in the past, though, and I intend to get back in to it soon.<p>[3] - I generally see antidepressants as emotional bandaids -- they can temporarily stop the bleeding, but won&#x27;t treat the underlying illness. They can also have some very serious side effects. One person I knew had their emotions dulled permanently by antidepressants. Another underwent serious negative personality changes while taking them. There have been many reports of even more serious side effects, including worsening depression and suicide.<p>[4] - I&#x27;m a great believer in therapy. But there&#x27;s no guarantee that any particular therapy or therapist will work. It may be necessary to try a lot of different ones until you find the one that works for you. Effective therapy can also take a lot of motivation and commitment to do the hard work on your part for the therapy to work. A lot of people think that therapy is like having a tooth pulled -- you sit there and the doctor does all the hard work. But that&#x27;s not how it works. <i>You</i> are the one that has to do the hard work. The therapist just facilitates, guides, and helps you along the way.<p>[5] - Whether dreams have significance and what that significance is is controversial. Some people think they are just random or meaningless, just reflect what&#x27;s happened in the day, or are just a way your mind has of processing experience and reinforcing memories, but I think they have a deeper meaning and are a way for the subconscious part of your mind to communicate with the conscious part (a Jungian view). I could write another very long post just on dreams, but I&#x27;ll spare you. If you&#x27;re interested, read up on the Jungian view of dreams, and that&#x27;s close to my view. Jungian therapy in general is the type I prefer, though I have some problems with it -- in particular, I&#x27;m not very big on all the myth stuff. But apart from that, I find it to be the most insightful and beneficial type of therapy for me.
Assange Statement on the US-Election
Assange Statement on the US Election<p>8 November 2016 By Julian Assange<p>In recent months, WikiLeaks and I personally have come under enormous pressure to stop publishing what the Clinton campaign says about itself to itself. That pressure has come from the campaign’s allies, including the Obama administration, and from liberals who are anxious about who will be elected US President.<p>On the eve of the election, it is important to restate why we have published what we have.<p>The right to receive and impart true information is the guiding principle of WikiLeaks – an organization that has a staff and organizational mission far beyond myself. Our organization defends the public’s right to be informed.<p>This is why, irrespective of the outcome of the 2016 US Presidential election, the real victor is the US public which is better informed as a result of our work.<p>The US public has thoroughly engaged with WikiLeaks’ election related publications which number more than one hundred thousand documents. Millions of Americans have pored over the leaks and passed on their citations to each other and to us. It is an open model of journalism that gatekeepers are uncomfortable with, but which is perfectly harmonious with the First Amendment.<p>We publish material given to us if it is of political, diplomatic, historical or ethical importance and which has not been published elsewhere. When we have material that fulfills this criteria, we publish. We had information that fit our editorial criteria which related to the Sanders and Clinton campaign (DNC Leaks) and the Clinton political campaign and Foundation (Podesta Emails). No-one disputes the public importance of these publications. It would be unconscionable for WikiLeaks to withhold such an archive from the public during an election.<p>At the same time, we cannot publish what we do not have. To date, we have not received information on Donald Trump’s campaign, or Jill Stein’s campaign, or Gary Johnson’s campaign or any of the other candidates that fufills our stated editorial criteria. As a result of publishing Clinton’s cables and indexing her emails we are seen as domain experts on Clinton archives. So it is natural that Clinton sources come to us.<p>We publish as fast as our resources will allow and as fast as the public can absorb it.<p>That is our commitment to ourselves, to our sources, and to the public.<p>This is not due to a personal desire to influence the outcome of the election. The Democratic and Republican candidates have both expressed hostility towards whistleblowers. I spoke at the launch of the campaign for Jill Stein, the Green Party candidate, because her platform addresses the need to protect them. This is an issue that is close to my heart because of the Obama administration’s inhuman and degrading treatment of one of our alleged sources, Chelsea Manning. But WikiLeaks publications are not an attempt to get Jill Stein elected or to take revenge over Ms Manning’s treatment either.<p>Publishing is what we do. To withhold the publication of such information until after the election would have been to favour one of the candidates above the public’s right to know.<p>This is after all what happened when the New York Times withheld evidence of illegal mass surveillance of the US population for a year until after the 2004 election, denying the public a critical understanding of the incumbent president George W Bush, which probably secured his reelection. 
The current editor of the New York Times has distanced himself from that decision and rightly so.<p>The US public defends free speech more passionately, but the First Amendment only truly lives through its repeated exercise. The First Amendment explicitly prevents the executive from attempting to restrict anyone’s ability to speak and publish freely. The First Amendment does not privilege old media, with its corporate advertisers and dependencies on incumbent power factions, over WikiLeaks’ model of scientific journalism or an individual’s decision to inform their friends on social media. The First Amendment unapologetically nurtures the democratization of knowledge. With the Internet, it has reached its full potential.<p>Yet, some weeks ago, in a tactic reminiscent of Senator McCarthy and the red scare, Wikileaks, Green Party candidate Stein, Glenn Greenwald and Clinton’s main opponent were painted with a broad, red brush. The Clinton campaign, when they were not spreading obvious untruths, pointed to unnamed sources or to speculative and vague statements from the intelligence community to suggest a nefarious allegiance with Russia. The campaign was unable to invoke evidence about our publications—because none exists.<p>In the end, those who have attempted to malign our groundbreaking work over the past four months seek to inhibit public understanding perhaps because it is embarrassing to them – a reason for censorship the First Amendment cannot tolerate. Only unsuccessfully do they try to claim that our publications are inaccurate.<p>WikiLeaks’ decade-long pristine record for authentication remains. Our key publications this round have even been proven through the cryptographic signatures of the companies they passed through, such as Google. It is not every day you can mathematically prove that your publications are perfect but this day is one of them.<p>We have endured intense criticism, primarily from Clinton supporters, for our publications. Many long-term supporters have been frustrated because we have not addressed this criticism in a systematic way or responded to a number of false narratives about Wikileaks’ motivation or sources. Ultimately, however, if WL reacted to every false claim, we would have to divert resources from our primary work.<p>WikiLeaks, like all publishers, is ultimately accountable to its funders. Those funders are you. Our resources are entirely made up of contributions from the public and our book sales. This allows us to be principled, independent and free in a way no other influential media organization is. But it also means that we do not have the resources of CNN, MSNBC or the Clinton campaign to constantly rebuff criticism.<p>Yet if the press obeys considerations above informing the public, we are no longer talking about a free press, and we are no longer talking about an informed public.<p>Wikileaks remains committed to publishing information that informs the public, even if many, especially those in power, would prefer not to see it. WikiLeaks must publish. It must publish and be damned.
The forces that drove this election’s media failure are likely to get worse
As much as I find his persona to be ridiculously self-aggrandizing and smug, Scott Adams made some pretty on-point calls on this election, predicting Trump would win over a year ago. He&#x27;s got some metaphorical construct he calls the &quot;persuasion filter&quot;, with various notions about its components. <a href="http:&#x2F;&#x2F;blog.dilbert.com" rel="nofollow">http:&#x2F;&#x2F;blog.dilbert.com</a><p>He claims he recognized Trump as a &quot;master persuader&quot; once he started running. I confess I didn&#x27;t take it too seriously, but in his view &quot;master persuaders&quot; take advantage of various cognitive and emotional biases we have, often appealing to emotion.<p>In his view, when it comes to something like persuading voters, appeals based on facts fail. Appeals based on tapping into the right emotions succeed.<p>None of this theory is particularly novel, propaganda has been a major component of governing since time immemorial.<p>But he did hone in on Trump&#x27;s ability to tap into these emotional motivational centers in a striking way.<p>I&#x27;ve been rather depressed about the election, with some passing moments of &quot;maybe it won&#x27;t be too bad.&quot;<p>Two days after, some of the shock has worn off. But I&#x27;ve never been quite so distressed by the outcome of an election. Which has prompted some self reflection: how much of my distress is based on the fact that my team lost, that Trump was successfully painted as a thug by the Clinton campaign, who manipulated me similarly to how Trump did his supporters?<p>I&#x27;m not sure how I&#x27;ll see things tomorrow, next week, next year. But as of tonight, I think the root of my distress is that Trump abandoned all pretense of a fact-based campaign. His lies were total, said with impunity, no sense of shame whatsoever.<p>He comes closest to a sort of &quot;impressionistic truth&quot; in his ability to intuit people&#x27;s weak spots, their fear, their hypocrisy, their aspirations. Hence, he is a masterful bully, because his insults hone in on an emotional truth.<p>The downside to this ability, aside from the fact that he uses it to hurt others, is that he has no sense of irony, no insight into his own projection, no humility, and is easily baited. Small slights seem to rattle him almost more than big ones.<p>IMO, Hillary retains a strong commitment to living in a fact-based reality. That&#x27;s one reason she lacks political charisma. Her attempts to elide a challenging issue, to convince an unsympathetic audience, her pre-planned &quot;zingers&quot;, her lies, are all so transparent, and she comes off as a phony, practicing an off-putting brand of persuasion based around pandering, instead of convincing.<p>I think this is at least part of my alarm with what Trump has done, because there is simply no way to refute anything he says. He simply doesn&#x27;t care. His statements are often vague, lacking specifics and nouns, big on pronouns, verbs, and meaningless superlatives.<p>When I hear this kind of rhetoric, I tend to get an uneasy feeling. I&#x27;m very sensitive to arguments that don&#x27;t make sense. I can feel it, and sometimes if I want to figure out what is wrong with the argument, I have to actively work through it.<p>I wasn&#x27;t immune to the entertainment value of watching this master bomb thrower destroy his bewildered primary opponents. Especially because his technique undermines the skills of a professional politician. 
Once Trump pulled back the curtain on his opponents&#x27; clunky rhetorical machinery, they were powerless to get it back.<p>So I liked it, because watching professional politicians do their professional politician speaking is freaking annoying. The art of political language is to not provide any &quot;attack surface&quot; to your opponents. Trump just bypassed all that, because he is hyper-aggressive, and cares about winning far more than about how he &quot;looks.&quot;<p>Once that phase of the election was through, I found no appeal in Trump&#x27;s meandering, stream-of-consciousness boasting and hucksterism.<p>I guess I&#x27;m not the target audience. But watching people eat it up was disturbing, in light of the wildly inaccurate &quot;facts&quot; occasionally peppered into his speeches. His campaign launched with a big one, characterizing most Mexican immigrants as rapists and drug addicts. I mean, that is demagoguery at its worst, shameful.<p>The sort of CW about what his supporters liked was that he &quot;told it like it was!&quot; To me, that indicates that many of his supporters took his unabashed bullying as license to feel good about their own assholishness.<p>Political correctness gets a bad rap, but at its root it&#x27;s an attempt to manage difficult social dynamics and power structures, and is intended to protect groups that have been abused and marginalized by a dominant mainstream.<p>If you give up on the intention of political correctness, which is to help excluded individuals and groups become part of the mainstream, you are indeed being racist, ableist, intolerant, etc...<p>Everybody has strains of intolerance and prejudice, we&#x27;re all capable of hate, we&#x27;re all filled with fears and biases. That doesn&#x27;t make us &quot;racist&quot; in the pejorative sense it&#x27;s used.<p>What does make someone a &quot;whateverist,&quot; is giving up trying to make marginalized groups part of the mainstream, giving up the pretence to political correctness, of attempting to at least make the modicum of effort involved to demonstrate to an outsider that you are aware of their existence, and intend to create space for them. It depends on context. Making fun of someone or some group in private is different than in public.<p>So politically correct speech is disingenuous, that&#x27;s how it works. Trump flouts these current norms in public! On purpose! His followers see that as him telling it like it is, because he is! He&#x27;s telling them about the inside parts of his psyche that are not nice, that don&#x27;t care, that want to put others down. Since we all have these elements inside us to some degree, his listeners feel affirmed for the negative parts of their psyche.<p>Instead of having to take the uncomfortable steps involved in trying to overcome one&#x27;s prejudices, of fighting against the negative aspects of the self, which are brought out when we have to try and get along with people different from ourselves, Trump followers feel liberated to be their &quot;true selves&quot;.<p>This type of socializing only works well in homogeneous social groups. 
The social bonds in such groups are easily reinforced by designating another set of individuals as &quot;other.&quot; Unfortunately, this type of scapegoating allows for socially sanctioned oppression and discrimination to occur.<p>A &quot;demagogue&quot; takes advantage of this psychology by helping define who is &quot;in the group&quot;, &quot;who is out&quot;, and normalizing these destructive, instinctual human behaviors.<p>There is a phenomenon at play here in the &quot;red state&#x2F;blue state&quot; divide that is ironic. Blue staters, being just as human as red staters, are not immune to practicing a similar form of socially sanctioned discrimination and denigration of their own &quot;other&quot;: namely the red staters. This comes off as smugness, condescension, and is obvious to anyone on the receiving end of it. Hence, the awareness of this hypocrisy on the part of the blue stater, by the red stater, coupled with the blue staters&#x27; lack of insight into their own hypocrisy, or worse, the lack of attempt at even trying to control and minimize their own &quot;scapegoating&quot; behavior, leads to an apparent moral equivalence of the two sides. The blue stater is taking advantage of the psychological balm of feeling superior to someone else.<p>The fact of the matter is that a Social Justice Warrior careening around, throwing bombs, and labeling people as racist can indeed be engaged in the same psychological misbehavior as a self-righteous born-again Christian who justifies hating gays based on the bible.<p>If you have two, or more, groups of cultural rivals engaged in political battle, the fact that both sides engage in similar dysfunctional and antagonistic behavior, doesn&#x27;t mean that the core ideas or issues that drive each group are morally equivalent, or equally true.<p>A simple example might be illustrated by the truckers who like to &quot;roll coal&quot; and deliberately thumb their nose at the insufferable goody-two-shoes environmental &quot;activists&quot; trying to impose their concept of &quot;good behavior&quot; on others.<p>In this case, I find it easy to empathize with the coal rollers, but that doesn&#x27;t mean they are correct on what is needed to protect the environment.<p>This is a rather long-winded response to the OP. The article is discussing the decreased power of the media to prevent groupthink from being reinforced by the filtering feedback loops of social media.<p>Trump has shown that for many voters, this is a quaint notion. In Trump they have found a leader who ignores facts in his rhetoric, and appeals purely on the level of emotions and psychological behaviors.<p>At core, this is what is so disturbing to me. Trump&#x27;s indifference to facts and his exaltation of power and dominance will not lead to effective governing, and reflect an almost complete abdication of the responsibilities of a leader.<p>My guess is that the motley crew of sociopathic right-wing has-beens he has surrounding him will quickly move to take advantage of Trump&#x27;s indifference to facts to exploit their positions in the new administration for their own ends.<p>Giuliani, Gingrich, and Christie are the true &quot;deplorables&quot; here. Ruthless, amoral men with grudges to settle. These three have shown themselves to be completely indifferent to the needs of others. The lot of them have an appalling record of anti-social behavior. Unlike Trump, they seem to operate in the fact-based world. So when they lie, it&#x27;s at a very conscious level, calculated to increase their own power. 
Just looking at their collective record of behavior in their personal lives shows they have no regard for anyone but themselves.
Ask HN: I will quit my job as a PM to join a coding bootcamp. Am I crazy?
You&#x27;re absolutely insane and you need help. Seek professional advice from your school counsellor, if required. Below I wrote some paragraphs to help you. If you do this, someone should go, figure out who you are in real life and revoke your MBA. Or you should just request a refund of your $150k tuition.<p>If you really work in a top 3 tech firm, your colleagues are Stanford&#x2F;Berkeley&#x2F;MIT&#x2F;CMU grads, who ate rice and ketchup writing operating systems and compilers. This will be the level you have to get to to contribute to any technical project in a meaningful way. But basically: when you were doing business case studies and traveling around the world to meet world leaders and learn from the best CEOs how they handled corporate crises, they kept hacking algorithms in a black window with grey letters. I know--I did that too.<p>I have 15+ years of UNIX experience, 10+ years of programming experience and 5 years of real-world software engineering experience. I&#x27;m thinking how to get your job all day long :-) This requires reading tons of books: <a href="http:&#x2F;&#x2F;www.koszek.com&#x2F;reading&#x2F;" rel="nofollow">http:&#x2F;&#x2F;www.koszek.com&#x2F;reading&#x2F;</a> which I do and I sacrifice time with my gf because of that. If you look at this list it&#x27;s business, management, investments, people stuff. You&#x27;ll have to do the same, but read about software and hack code in front of the computer, etc.<p>Most of the HN experienced crowd are smart technical people with worse credentials, who are attempting to move from a coding job to an architecture&#x2F;design&#x2F;management job, which you now have. You are our target :) You can see it in many threads. It&#x27;d be like moving from a coffee shop manager to a waiter. Ask some senior buddies from a random technical team whether they&#x27;d like your job. It&#x27;s good to see people appreciating each other&#x27;s professions, and we appreciate that too, but don&#x27;t do this job switch yet or we will crucify you :)<p>My advice is this: take a break from work (just because you can -- you have a shitload of money anyway, or maybe just &quot;enough&quot; due to loans; go on leave, but for god&#x27;s sake, don&#x27;t quit your job), join the bootcamp if you really want to, and don&#x27;t tell anyone you did it. You can see from this thread that bootcamps, unlike your MBA, don&#x27;t have established value in the industry yet. It won&#x27;t harm you, for sure, but you won&#x27;t be a software engineer. Bootcamp sounds good for you--you&#x27;re a smart person and want to learn more, since I think you haven&#x27;t hacked much code before.<p>A better idea is what other guys say: just get yourself all the online programs there are. All of them. They are $30&#x2F;month at most, and Udacity is $200. Get them all for a month. You&#x27;ll spend maybe $200 + 5*$30 = $350, but maybe $0, since the first 2-4 weeks are free. You&#x27;ll pick one that you like, because it&#x27;ll fit your learning style. Stick to it and just do exercises. On top of that get books. Books are dirt cheap compared to the value they bring to your portfolio. Whichever books you need; all of them. I think you&#x27;ll have to end up doing the bs I do for business&#x2F;management books: google &quot;top 10 programming books&quot; and get them. Even if you don&#x27;t read it, get it. It&#x27;ll be maybe $350, since you may end up getting 10 books. So it&#x27;s a $350-$700 investment at most -- you&#x27;ll get a grasp on what&#x27;s going on. 
Then maybe $30&#x2F;mo for 6 months to teach you one thing well and 2 in a sloppy way you&#x27;ll kinda understand. And repeat it maybe 2 times. Basically: watch classes, do exercises and immediately after that write your toy programs on the side or (better) real products.<p>So you&#x27;ll spend at most $1k to learn something you want. It&#x27;s also spending $1k to save the $150k investment you&#x27;ve made by doing an MBA at a top 3 school. It&#x27;s 0.7% of what you&#x27;ve spent on school, and still much, much cheaper than a bootcamp.<p>If you&#x27;re a good MBA you&#x27;re a cheap, stingy bastard that can get people to pay for your stuff. And top 3 high-tech companies have educational grants. Some up to $9&#x2F;yr. They pay you for your education. So have the company pay for this bootcamp; if you come up with a good enough reason, it&#x27;ll be all free.<p>Now:<p>If you absolutely truly love hacking code and are obsessed with it and you think that yes -- this is your 2nd calling and you basically don&#x27;t see yourself talking to people anymore - congratulations. You are eligible to leave your job, retrain yourself and enter the coding workforce. Trade your suit for a dirty sweater and you&#x27;re all set to apply for a junior dev role.<p>But I think it won&#x27;t happen.<p>What will happen thanks to my advice instead is this: you&#x27;ll put yourself in something like the top 0.1% of MBA people who truly understand what software is all about AND have a PM job. People who you work with will see that, so you&#x27;ll be getting good reviews etc. You&#x27;ll come back to this thread, see how wrong about PMing you were, and you&#x27;ll apologise. You&#x27;ll then keep studying and getting better at being a PM and being a technical manager. You&#x27;ll learn how to manage technical people, how to partition tasks, build features, build products and build tech companies. And no, it&#x27;s not very easy and it&#x27;s not fading away. I see you posted some stuff about machine learning in the past. If that&#x27;s your field of interest: go and get the original TensorFlow publication from Google. It&#x27;s a paper where the abstract is shorter than the list of authors. Do you think these guys would have built TensorFlow without any PM of some sort?<p>Anyway, story goes: then you will quit your top 3 high-tech firm, and start a high-tech startup, get $20M in funding. And then you&#x27;ll come back here, DM me personally and offer me 1% in your new enterprise for having a profound impact on your life and you&#x27;ll offer me a PM role in your new startup. I&#x27;ll gladly accept it for this 1+ hr of free-of-charge advising.
Ask HN: I will quit my job as a PM to join a coding bootcamp. Am I crazy?
I highly recommend you take an hour and a half to watch this Lynda.com video on Leadership Fundamentals. Its focus is business, but first and foremost we are all leaders of our own lives and the lessons apply as much to personal as to professional development. It may serve as a good refresher as to what you once learned makes for good leadership as a manager and help you identify your strengths and weaknesses as a leader of your life as well as in your organization. They have a ten-day free trial and there are many videos you can watch after this one that will help you no matter what path you choose to follow. It is well worth the 90 minutes. It helped me understand and resolve a lot about myself and my dissatisfaction with my own career, my organization, and what was wrong with both.<p><a href="https:&#x2F;&#x2F;www.lynda.com&#x2F;Business-Skills-tutorials&#x2F;Leadership-Fundamentals&#x2F;122471-2.html" rel="nofollow">https:&#x2F;&#x2F;www.lynda.com&#x2F;Business-Skills-tutorials&#x2F;Leadership-F...</a><p>Pay particular attention to the segments on emotional intelligence, motivation, engagement&#x2F;disengagement(!), and professional development. It&#x27;ll not only help you understand what is affecting your motivation and disengagement, but you may also realize your greatest strength is that you are motivated to solve these problems not just for yourself but others in your organization. Understanding the way things should be could just cause you to double down as a PM and master the challenges of the role while further developing your skills to advance your career.<p>Good leadership focuses on the health (and happiness) and growth of the individual as much as the organization. Maybe you are just in a poorly run organization or one that has poor leaders. Discontent is usually a pretty good indicator this is the case, because even if an individual is the problem, a good leader has the awareness to recognize this and have a good relationship with their staff that enables them to make the staff member aware of their strengths and weaknesses and the source of their discontent so it can be addressed and remedied.<p>It sounds to me like you want more authority and control, or at least more challenges, and your organization&#x27;s leadership is not addressing that. Without that your growth and personal development is limited. Being a coordinator is little more than being an admin that tracks status and doesn&#x27;t require much tactical or strategic decision making. Your manager should understand this and be stretching your abilities in every area little by little until they trust you to make those decisions. They should always be preparing you for the next level up. Maybe you aren&#x27;t aware of your weaknesses and they are not identifying them and working on them with you. Maybe your desire to quit and become an individual contributor is just an Escape Coping mechanism for dealing with stress instead of a Control Coping mechanism which is positive and proactive. Or maybe they just have poor leadership skills.<p>It would be terrible if you quit your job and lost the opportunity where you are to address your weaknesses and strengthen all of your skills in an attempt to start over because you feel it would give you more control over your happiness, especially given all of the capital you have built up in that role over the years. 
Besides, if the problem is that your desire to leave is just an unhealthy response to stress or a challenging situation, quitting won&#x27;t solve anything.<p>Just a thought, but one from someone who has been there.<p>What did I do in your situation? It took me a while but I watched that video, combined my awareness from it with formal knowledge of PMP (Head First PMP is a good start) as a reminder of what project management is all about and what my organization did wrong as leaders of a functional versus projectized organization (a weak matrixed organization), and then I studied stress management to help me understand my own (unhealthy) responses to my situation.<p>If I had done that while still at my organization I would have not only felt empowered by the evidence and knowledge, but challenged to work on myself and the organization at the same time, and could have been perceived as someone with greater leadership potential that would have allowed me to level up. Even if I still decided to leave, I could have improved my skills while there to prepare for a move to a healthier organization that I would have been better equipped to recognize and excel at.<p>FYI I strongly considered a Masters in Data Science which would have had a narrow focus either as a developer, data analyst, or data scientist, but at the risk of competing with people who were younger and&#x2F;or smarter, and rooted in those disciplines from an earlier age. I decided I should master my management and leadership skills instead, leveraging and building upon my knowledge and experience at a better organization. I am unemployed and working on that now. Once I get a job, even if it&#x27;s a contract position, I can still take the online courses to get that master&#x27;s degree from a top university and also take advantage of being around people in the industry that do what I aspire to do as mentors.<p>I sincerely hope this helps.
Ask HN: I will quit my job as a PM to join a coding bootcamp. Am I crazy?
Yes, you are crazy, and please don&#x27;t do it. I&#x27;ll try to make the story short, so bear with me. Almost 3 years ago I took an online bootcamp at Bloc.io (quite recommendable), and I was supremely obsessed with learning. I needed to make a fast change in my life knowing that I was going to be unemployed in about 6 months; my wife was pregnant and I had to make a quick jump not to drown. I do not regret it at all; in fact, I greatly appreciate having learned so much with my mentor. You cannot imagine how many things I did after investing 90+ hours a week for 6 months. I still had to learn on my own for another year, testing ideas, before I was able to launch a startup that has allowed me to keep a decent income higher than the one I had before. But this scenario is not what I wished for when I started 3 years ago. I did not realize that learning to code is a permanent and continuous process that needs concentration and long hours of focus instead of playing with my daughter, who is almost 3 years old now. I love writing code; words can&#x27;t describe what you feel when tests pass and you see it live in production (only to realize a few days later that you were reinventing the wheel and there was a better gem that did it faster and better). I thought this new world was my dreamland where I could build things without asking anyone for help, but that&#x27;s really the problem. That window I opened 3 years ago showed me that if I continued to develop the way that I did, I would spend my life without actually living it as I wanted to. I confess my lines are not that sophisticated, I even consider them quite amateurish, my tests are light, and I guess if I really wanted to be a pro developer I would have to invest much more, and that sucks, because you can&#x27;t imagine how many nights I&#x27;ve spent away from my family; it truly is very absorbing. I do not say that it is a bad idea to spend a life in front of a computer, but man, it is not suited for me.<p>I can see through your words that your interest is to participate in those great new ideas and to achieve greater things, but you can make that happen without that bootcamp, and especially without quitting your job. If you decided to attend a good university it was because you really wanted to leave a dent in the universe. There are cases such as the orchestra conductor: although he does not play an instrument in the performance, he must at least have learned to play one, to understand the essence of music, before being able to lead. But in our ecosystem what is rewarded, beyond our vision, is our ability to execute. I believe that you as a PM have one of the most valuable assets, because you can take any talented team and give them a vision toward execution, and they will appreciate that, because most people do not know how to execute complex things step by step the way that you do. Besides that, if you give them that strong desire to accomplish excellence that breaks the mold, and that stamina to persevere even in the darkest hours, you are set.<p>If I could go back 3 years in time, I would ask for the refund even if I had to pay something. Instead I would use that money to spend more time observing regular people, maybe participate in university activities, labs, and study groups, and invite complete strangers to grab espressos or beers with students or teachers and bond with them. Instead of being a solo learner I would encourage myself to be a part of a group. 
My job would have been to create any initiative with them on the topics I thought were worth it, and even if I knew it wouldn&#x27;t work I would have put all my experience into supporting the implementation of those ideas. I might not have known how to type a line of code, but I would have felt more alive being able to see those ideas become a reality: even if it wasn&#x27;t 1% my vision, even if it wasn&#x27;t my code, even if I was only able to sit at a table with just one talented person and keep him motivated and enthusiastic after the first prototype was made and help him with the first sale to come, that would have been enough. Instead of investing in bootcamps you could interact more with humans, learn more about human behavior, spend more time with your family or children, travel more to meet more people, and take advantage of the work you have right now to find mentors or talk with your peers about your expectations in those topics that you would like to execute, instead of being in front of your laptop fighting with lines of code that someone you work with can write much better in less time.<p>Finally, I have to say it surely was a blessing to become a developer. Right now I&#x27;m working as CTO&#x2F;CEO, but I&#x27;m about to let go, and I know it&#x27;s going to be hard. Anyway, in your current situation, where you have much more knowledge and experience, only quit to carry out your own ideas; and if you definitely want to play, don&#x27;t forget that sooner or later you&#x27;ll need to delegate and learn to orchestrate, and for that you don&#x27;t need that bootcamp.<p>P.S.: Forgive my bad English.
You Don’t Need a Master Plan, You Just Need to Start
So, the subject of the OP is doing a successful startup.<p>On that, three of my thoughts are:<p>First, nearly all first, wild guesses for a successful startup are doomed to fail.<p>Why? (A) Nearly no one wants the product&#x2F;service, say, the <i>results</i>, even for free.<p>(B) Lots of people like the results, but nearly all of those people regard the price as too high.<p>We can call getting past both (A) and (B) finding <i>product-market</i> fit.<p>(C) Okay, but, still, say, for a battery with 10 times the current best energy density in kWh per kg of weight, we don&#x27;t know how to do that. Similarly for some software that is as <i>intelligent</i> as a human in all respects.<p>Second, to have some idea that a project won&#x27;t fail due to (A)-(C) or anything else serious one can easily think of, one should have some good plans and, of course, check the plans as carefully as possible.<p>Then we can formulate a<p>Saying: If you fail to have a good plan, then you have a good plan to fail.<p>Lesson One: So, to avoid problems such as (A)-(C) and to improve chances of success, IMHO good planning is important.<p>Third, startups that are successful enough to make money for venture capitalists (VCs) and their limited partners (LPs -- the people the VCs get the money from) are rare.<p>How do we know this? A VC firm may look at 1000 unique proposals from entrepreneurs for each proposal they fund, and fund 20 proposals for each one that is very successful and actually makes money for the VCs and their LPs. So, for such a success, that VC firm has looked at 20,000 unique proposals.<p>Yes, if in some year there are a total of 20,000 unique proposals, there are 200 VCs, each proposal is sent to all the 200 VCs, and each VC funds one proposal out of each 1000 they see, then each VC funds 20 proposals and in total there are (20)(200) = 4000 proposals funded of the 20,000. But, again, only one in 20 is successful, for only 200 successes out of the 20,000, or 1 in 100.<p>So, depending on assumptions about the data, the chance of success from a proposal is 1 in 20,000 to 1 in 100.<p>So, in a word, the desired success is <i>exceptional</i>.<p>Lesson Two: To have one of the 1 in 20,000 proposals that is successful, instead of just luck, about the best approach is to have some good planning.<p>Is it possible to have some effective plans? Yes, e.g., there was the first Xerox photocopying machine, the first daisy wheel printer, the first good dot matrix printer, the first good inkjet printer, the first good laser printer, the first good program to drive such printers, the first good spreadsheet program.<p>Now, let&#x27;s look at the arguments of the OP:<p>&quot;But we have done something in the ecosystem to encourage this type of outlandish promotion ... where you feel like you need to use words like trillion.&quot;<p>Why is the planning for a trillion necessarily &quot;outlandish&quot;? We know that we are planning for something exceptional, and maybe good and careful planning, which is the kind we want, says that, really, if we do well and take all the market, then we do get a company worth $1 T. An investor would prefer the planning to be for $1 B or $1 M instead? Okay, if the company really looks like it could be worth $1 T, then it is easy enough to cut down the estimate to something much lower.<p>&quot;Reality is, that for every thoughtfully articulated and executed world domination master plan, most of the biggest and impactful companies started out with much more humble ambitions. 
Some just wanted to give students an alternative to a summer job. Others just wanted make their friends feel like pimps.&quot;<p>This situation is likely true but doesn&#x27;t say much:<p>Why? The situation says that most successes are from luck. Then the suggestion is to forget about planning and count on luck?<p>Here is an analogy that explains the situation: Go to a famous golf course and to a par 3 hole. Get the data for the past 10 years on who made a hole in one.<p>See, first, what fraction of the holes in one were made by (A) professional golfers and (B) everyone else. Will likely observe that nearly all the holes in one were made by (B), not the professional golfers but by everyone else.<p>How can this be true? Sure, there were only a few pro golfers but many more of everyone else. So, in the end, the luck of the many got more holes in one than the skill of the few.<p>See, second, what the probability of a hole in one was for (A) the professional golfers and (B) everyone else. Will likely find that the chances for the pros were at least 10 times higher than for everyone else.<p>So, we have that (i) nearly all the hole in one shots were from luck but (ii) the chances of a hole in one were much better for players with real skills.<p>So, if you were betting on a hole in one shot, then you should put your money on the pros with real skills.<p>Similarly for picking startup projects: Go with solid planning and not just with luck.<p>&quot;Most wouldn’t have cleared the hurdle of the billion dollar idea.&quot;<p>Fine: Discovering that fact is part of evaluating projects. But when you do find a project that looks like it should be worth $1 T, don&#x27;t automatically throw it away as &quot;outlandish&quot;.<p>Sure, maybe on average the $1 T projects take more risk capital than the $1 B or $1 M projects, but guessing here is foolish and not necessary. Instead, the amount of risk capital needed should be part of the planning, the good planning that is believable. As we know from many projects -- long bridges, tall buildings, deep tunnels, big dams -- it is possible to plan big projects with accurate time and cost estimates.<p>Yes, time and cost estimates can be especially difficult for software projects, but just multiply both by a factor of about 20 to account for work not directly for the project but, say, for getting around bugs in infrastructure software, bad documentation, time to learn new APIs, computer system management Excedrin headache #228,884,454, etc., and you might be closer to reality.<p>&quot;Over the years I’ve watch as that little company has grown from a couple thousand dollars a month to a couple million dollars a month. Next year, that unfunded family run business will do over $100M in revenue.&quot;<p>Terrific. But without good planning, that example represents some astounding good luck, close to winning a lottery ticket. Betting on lottery tickets is foolish for nearly everyone.<p>&quot;You may be surprised with how little ambition it really takes to eventually change the world.&quot;<p>Luck and lottery tickets are still poor bets.<p>Instead, we should have good planning. And if the plans point to a company worth $1 T, then check the plans numerous times and ways and then cheer.<p>Once a father quite successful in business told his son:<p>&quot;You have a lot of good ideas to invest $1 million and make $1 billion. 
Why not have an idea to invest $1000 and make $1 million.&quot;<p>Yes, but better still, have an idea to invest $1000 and make $1 B or $1 T.<p>It appears that here is the main reason for regarding $1 T plans as &quot;outlandish&quot; and discarding them: There are so far no $1 T companies.<p>So, in effect, this says that we can&#x27;t plan and practice to make a hole in one and instead should look like the non-pro golfers who made a hole in one with luck. So, we should not plan but copy the <i>pattern</i> of the non-pro golfers, copy their brand of clubs, shoes, hat, shirt, etc.<p>Instead, it really is possible to have an idea for something really new and terrific, to have good plans to achieve it, and to achieve it essentially on time and on budget, e.g., as a low-risk project.<p>Many of the best examples are from the all-time, unique, world-class grand champion of advanced information technology projects, the US DoD. For an example? Sure, GPS.<p>Betting on luck instead of planning? I remain surprised that people would suggest such a thing.<p>Or, when the given point does not make good sense, maybe there is a hidden point that does. Or there is the advice &quot;Always look for the hidden agenda&quot;. For VCs, one guess at a hidden agenda is publicity for more &quot;deal flow&quot;.
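<p>To make the funnel arithmetic earlier in this comment concrete, here is a tiny TypeScript sketch using the illustrative numbers above (1000 proposals read per funded deal, 20 funded deals per big success, 200 VC firms, 20,000 unique proposals shared among them). These are toy numbers for illustration, not data.<p><pre><code>&#x2F;&#x2F; Toy VC funnel arithmetic using the illustrative numbers from this comment.
const proposalsSeenPerFunded = 1000;  &#x2F;&#x2F; a firm funds 1 of every 1000 proposals it reads
const fundedPerBigSuccess = 20;       &#x2F;&#x2F; 1 of every 20 funded deals is a big success

&#x2F;&#x2F; Per-firm view: proposals one firm reads per big success.
const proposalsPerSuccessPerVC = proposalsSeenPerFunded * fundedPerBigSuccess; &#x2F;&#x2F; 20,000

&#x2F;&#x2F; Shared-pool view: 20,000 unique proposals, each sent to all 200 firms.
const uniqueProposals = 20000;
const vcFirms = 200;
const fundedTotal = vcFirms * (uniqueProposals &#x2F; proposalsSeenPerFunded); &#x2F;&#x2F; 4000 funded
const bigSuccesses = fundedTotal &#x2F; fundedPerBigSuccess;                   &#x2F;&#x2F; 200 successes

console.log(proposalsPerSuccessPerVC);        &#x2F;&#x2F; 20000
console.log(fundedTotal, bigSuccesses);       &#x2F;&#x2F; 4000 200
console.log(bigSuccesses &#x2F; uniqueProposals);  &#x2F;&#x2F; 0.01, i.e., 1 success per 100 proposals
</code></pre> Either way the chance per proposal lands somewhere between 1 in 20,000 and 1 in 100, which is the sense in which the desired success is <i>exceptional</i> and why planning beats hoping for a lucky draw.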
Shadows and Blur Effects in Modern UI Design
Blurring is a memory-bandwidth-intensive and therefore power-intensive effect, so it&#x27;s more costly in terms of battery life on mobile devices than other more local effects like darkening and desaturation. Of all the visual effects today&#x27;s gui design fads could choose to fetishize, blurring is unfortunately power hungry.<p>If &quot;Material Design&quot; was intended to conserve battery life, it shouldn&#x27;t have been so focused on looking blurry. ;) Of course you can draw simple soft shadows without actually blurring the content underneath, but Apple&#x27;s blurry iOS gui and macOS desktop windows using NSVisualEffectView&#x27;s &quot;Vibrancy&quot; and blurring effects require a lot of power.<p>3 Differences – Apple’s HIG vs Google’s Material Design Standards: <a href="http:&#x2F;&#x2F;nectardesign.com&#x2F;3-differences-apples-hig-vs-googles-material-design-standards&#x2F;" rel="nofollow">http:&#x2F;&#x2F;nectardesign.com&#x2F;3-differences-apples-hig-vs-googles-...</a><p>&quot;Apple believes that mobile devices should be seen as a window in to another world. They embrace infinite depth in their applications and use components such as their alert buttons and text messages with blurred background to create the feeling that the items are floating and exist in their own space. Another example of this would be the click wheel for their timer. See how the numbers recede in to the background? That would never happen in material design.&quot;<p>I don&#x27;t remember the exact link, but I read an article from Apple describing the GPU-accelerated blurring and &quot;vibrancy&quot; effects they rolled out in their desktop user interface, which cautioned about how expensive they were, because the more you blur, the wider the region of support of the convolution kernel, therefore the more memory accesses you have to perform per pixel. Even on the desktop, they warned that it had a considerable cost. So it&#x27;s a pity that blurring has become so trendy on battery-operated mobile devices.<p>Maybe it was this WWDC talk -- it contains lots of useful information about NSVisualEffectView performance:<p>Adopting Advanced Features of the New UI of OS X Yosemite: <a href="http:&#x2F;&#x2F;asciiwwdc.com&#x2F;2014&#x2F;sessions&#x2F;220" rel="nofollow">http:&#x2F;&#x2F;asciiwwdc.com&#x2F;2014&#x2F;sessions&#x2F;220</a><p>&quot;You want to use the active state explicitly, very sparingly.<p>It can affect performance and battery life, because if you have a lot of visual effect views around they&#x27;re always active, but you probably want to use it in places where you know that view is always going to be active and maybe it&#x27;s a panel that can&#x27;t become key for whatever reason.&quot; [...]<p>&quot;So you notice that blur we had and this may not surprise you but the blur effect isn&#x27;t exactly free. It does cost something, and that something is graphics performance and battery usage. And sometimes, though, the cost is worth the results. So, something you should be aware of here is you&#x27;re not trying to not use this effect. You want your app to look beautiful. 
You just need to pay attention to striking a balance between that appearance and the resource utilization.&quot; [...]<p>&quot;Corbin mentioned that layers are often required, especially for in window blurs, and layer usage is increasing just in general.&quot; [...]<p>&quot;If you add a lot of VisualEffectViews to your app and all of the sudden you notice that maybe your window resizing animations or your full screen transitions have become slow, you can set this [Accessibility Preference &#x2F; Display Subsection &#x2F; Reduce Transparency] to Yes, and this will avoid the cost we pay when doing that blur.<p>So if you notice when this is turned on your performance is fine, and when this is turned off your performance is kind of sluggish, it&#x27;s probably you&#x27;re using a VisualEffectView that&#x27;s too large or too many VisualEffectViews, and that&#x27;s a cue to dial down the transparency and blurring in the app.&quot; [...]<p>&quot;Something I want to point out about a lot of our drawing here is that the blur effect actually happens out of process, it happens in the Windows Server, and furthermore, it happens on the GPU, and that means that profiling your own process won&#x27;t necessarily tell you as much as you would hope.&quot; [...]<p>&quot;If you&#x27;re just doing a lighter development on your app, having activity monitor open can sometimes be useful.<p>It&#x27;s certainly not as in-depth as instruments but it will tell you how much CPU you&#x27;re using and more importantly it&#x27;ll tell you how much energy your app is taking to do what it&#x27;s doing.<p>And if you see that operating a little higher than you&#x27;re expecting that may be another cue that your VisualEffectView usage has gotten a little excessive.&quot; [...]<p>&quot;And finally, you folks remember like 40 seconds ago we talked a little bit about performance, and I do hope you&#x27;ll take some of those performance messages to heart when you leave.&quot;<p>This Ars Technica review of OS&#x2F;X 10.10 also discusses the cost of the blurring and vibrancy effects: <a href="http:&#x2F;&#x2F;arstechnica.com&#x2F;apple&#x2F;2014&#x2F;10&#x2F;os-x-10-10&#x2F;4&#x2F;" rel="nofollow">http:&#x2F;&#x2F;arstechnica.com&#x2F;apple&#x2F;2014&#x2F;10&#x2F;os-x-10-10&#x2F;4&#x2F;</a><p>&quot;These visual effects aren’t free; they cost CPU and GPU cycles each time the foreground or background changes. Behind-window blending is implemented in the window server, which is responsible for compositing all the visible windows into the final screen image. (This compositing process has been GPU-accelerated since 2002.) Moving a window that uses vibrancy in the behind-window blending mode does not require the application that owns the window to redraw any content; the window server handles the re-compositing as the background changes.<p>Within-window blending, on the other hand, is handled by each individual application. The system frameworks use Core Animation layers (also GPU-accelerated) to apply the necessary filters, blending, say, the contents of a scrolling view with the toolbar that partially covers it.<p>Given Apple’s recent focus on energy saving in Mavericks, it’s a bit strange to see Yosemite lean so heavily on a dynamic effect like vibrancy. To keep things from getting out of hand, vibrant views become opaque when inactive. 
In practice, this means only the front-most window in the currently active application—plus the Dock, menu bar, and any notification alerts that may dance across your screen—will burn cycles artistically blending foreground and background content.&quot;
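<p>To put rough numbers on the point above, that the cost of a blur grows with the width of the convolution kernel&#x27;s region of support, here is a back-of-the-envelope TypeScript sketch. The screen size, refresh rate, and bytes per pixel are assumptions for illustration, not measurements of any Apple implementation.<p><pre><code>&#x2F;&#x2F; Rough memory-traffic estimate for blurring a full screen every frame.
&#x2F;&#x2F; All numbers are illustrative assumptions, not measurements.
const width = 2048;       &#x2F;&#x2F; pixels
const height = 1536;      &#x2F;&#x2F; pixels
const bytesPerPixel = 4;  &#x2F;&#x2F; RGBA8
const fps = 60;

&#x2F;&#x2F; A naive 2D blur of radius r reads roughly (2r+1)^2 texels per output pixel;
&#x2F;&#x2F; a separable blur (horizontal pass, then vertical pass) reads roughly 2*(2r+1).
function texelReadsPerPixel(radius: number, separable: boolean): number {
  const taps = 2 * radius + 1;
  return separable ? 2 * taps : taps * taps;
}

function readTrafficGBPerSecond(radius: number, separable: boolean): number {
  const bytesPerFrame = width * height * bytesPerPixel * texelReadsPerPixel(radius, separable);
  return (bytesPerFrame * fps) &#x2F; 1e9;
}

&#x2F;&#x2F; Even the separable version scales linearly with the blur radius, which is
&#x2F;&#x2F; why wide, soft blurs are costly on battery-powered GPUs.
for (const radius of [2, 8, 24]) {
  console.log(radius,
              readTrafficGBPerSecond(radius, true).toFixed(1),   &#x2F;&#x2F; separable, GB&#x2F;s
              readTrafficGBPerSecond(radius, false).toFixed(1)); &#x2F;&#x2F; naive, GB&#x2F;s
}
</code></pre> Real implementations downsample before blurring and cache results precisely to avoid traffic like this, but the scaling with kernel width is the part that matters for the battery argument.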
An Intro to Integer Programming for Engineers: Simplified Bus Scheduling
What the OP is describing is, except for the coding, some now classic applied math, i.e., <i>operations research</i>. That applied math got used much less often than one might have expected because (A) the data gathering was too much pain, expense, and botheration, (B) there was too much software to write, and writing it was too clumsy and expensive, (C) the computing was too slow and too expensive, and (D) even when one got a good solution ready for production, the production situation commonly changed so fast that the work of (A)-(C) could not be changed and revised fast enough to keep up. And, of course, the custom, one-shot software was vulnerable to bugs. Net, in practice, a lot of projects failed. Mostly, successful projects needed big bucks and lots of unusually insightful sponsorship high in a big organization.<p>But, now (A)-(D) are no longer so difficult. This should be the beginning of a new <i>Golden Age</i> for such work.<p>Sure, since the results of such optimization can look darned <i>smart</i>, smarter than the average human, one might call the work <i>artificial intelligence</i>. Really, though, the work is mostly just some classic applied math now enabled in practice by the progress in computer hard&#x2F;software.<p>The OP is a nice introduction to vehicle routing via integer linear programming (ILP) set partitioning. For the <i>linear programming</i> and the case of ILP, I&#x27;ll give a quick view below. But, now, let&#x27;s just dig in:<p>Here is an explanation of the <i>secret</i> approach, technique, trick that can work for vehicle routing and many other problems: The real problems can have some just awful non-linear cost functions, just absurdly tricky constraints, e.g., from labor contract work rules, equipment maintenance schedules, something you can&#x27;t do near point A near lunch, even handle some random things, even responding to them in real-time (<i>dynamically</i>), etc., yet one can still have a good shot at getting a least-cost solution or nearly so. The &quot;nearly so&quot; part can mean saving a lot of money not available otherwise. When there is randomness, then try to get the least expected cost.<p>So, first, the trick is to do the work in two steps.<p>The first step we call <i>evaluation</i> and the second, <i>optimization</i>.<p>From 50,000 feet up, all the tricky, non-linear, goofy stuff gets handled by essentially enumeration in the first step, leaving some relatively simple data for the optimization in the second.<p>In practice, this first step typically needs a lot of data on, say, the streets of a city and requires writing some software unique to the specific problem. The second step, the optimization, may require deriving some math and&#x2F;or writing some unique software, but the hope is that the step can be done just by routine application of some existing optimization software.<p>The OP mentions the now famous optimization software Gurobi from R. Bixby and maybe some people from Georgia Tech, e.g., from George Nemhauser and Ellis Johnson (long at IBM Research and behind IBM&#x27;s Optimization Subroutine Library (OSL) and its application to crew scheduling at American Airlines).<p>First Step.<p>Suppose you are in Chicago and have 20,000 packages to deliver and 300 trucks. Okay, which trucks deliver which packages so as to make all the deliveries on time, not overload any trucks, and minimize the cost of driving the trucks? You do have, for each package, the GPS coordinates and street address. 
And you have a lot of data on the streets, where and when traffic is heavy during the day, etc.<p>Okay, let&#x27;s make some obvious, likely doable progress: Of those 20,000 packages, maybe you have only 15,000 unique addresses. So, for each address, <i>bundle</i> all the packages that go to that address. Then regard the problem as visiting 15,000 addresses instead of delivering 20,000 packages.<p>So, you write some software to <i>enumerate</i>. The enumeration results in a collection of candidate routes, stops, and packages to be delivered for a single truck. For each of those candidates, you adjust the order in which the stops are made to minimize cost -- so here you get some <i>early</i>, first-cut, simple <i>optimization</i>. You keep only those candidates that get the packages delivered on time, meet other criteria, etc. You may have 1 million candidate single truck routes. For each of the candidates, you find the (expected) operating cost.<p>So, suppose you have n = 1 million candidate single truck routes.<p>Also you have m = 15,000 addresses to visit.<p>So, you have a table with m = 15,000 rows and n = 1 million columns. Each column is for some one candidate route. Each row is for some one address. In each column there is a 1 in the row of each address that candidate route visits and a 0 otherwise. One more row at the top of the table is, for each column, the operating cost of that candidate route.<p>So, you have a table of 0&#x27;s and 1&#x27;s with m = 15,000 rows and n = 1 million columns. You have a row with 1 million costs, one cost for each column.<p>Again, you have 300 trucks. So, you want to pick, from the n columns, some &lt;= 300 columns so that all the m addresses get served and the total cost of the columns selected is minimized. That is, if you add the selected columns as column <i>vectors</i>, then you get all 1&#x27;s.<p>Second Step.<p>Well, consider variables x_i for i = 1 to n = 1 million. Then we want x_i = 1 if we use the route in column i and 0 otherwise. Let the cost of the route in column i be c_i. We want the total cost (TeX notation):<p>z(x) = sum_{i = 1}^n x_i c_i<p>to be minimized. So, right, we take the big table of m = 15,000 rows and n = 1 million columns and call it the m x n matrix A = [a_{ij}]. We let m x 1 column vector b have all 1&#x27;s. We regard x as n x 1 where in row j = 1 to n is x_j. Then, we get linear program<p>minimize z(x)<p>subject to<p>Ax = b<p>x &gt;= 0<p>So, this is a case of <i>linear programming</i>.<p>Except in our problem we have one more <i>constraint</i> -- each x_i is 0 or 1, and in this case our problem is 0-1 integer linear programming.<p>Linear Programming.<p>In linear programming with n variables, with the real numbers R, we go into the n-dimensional vector space R^n. The<p>Ax = b<p>x &gt;= 0<p>are the <i>constraints</i>, and the set of all x that satisfies those is the <i>feasible region</i> F, a subset of R^n.<p>In R^n, a <i>closed half space</i> is a plane and everything on some one side of it.<p>Then F can be regarded as an intersection of m closed half spaces. So, F has flat sides, straight edges, and some sharp points (<i>extreme points</i>).<p>Well, if there is an optimal solution, then there is an optimal solution at at least one of those extreme points. So, the famous Dantzig simplex algorithm looks for optimal solutions in iterations where each iteration starts at an extreme point, moves along an edge, and stops at the next extreme point. 
That&#x27;s for the geometric view; the algebraic view is a tweak of the standard Gauss elimination algorithm.<p>Linear programming and the simplex and other algorithms have a huge collection of nice properties, including some surprisingly good performance both in practice and in theory.<p>But, asking for each x_i to be an integer is in principle and usually in practice a huge difference and gives us an NP-complete problem -- at one time, this was a huge, bitter surprise.<p>Warning: At one time, the field of the applied math of optimization in operations research sometimes had an attitude, that is, placed a quasi-religious importance on <i>optimal</i> solutions and was contemptuous of any solutions even 10 cents short of optimal. Well, that attitude was costly for all concerned. Instead of all the concentration on saving the last 10 cents, consider saving the first $1 million. Commonly in practice, we can get close to optimality, may be able to show that we are within 1% of optimality, so close that the rest wouldn&#x27;t even buy a nice dinner, and see no way, in less than two more weeks of computer time, to get a provably optimal solution.<p>So, concentrate on the big, fat doughnut, not the hole.<p>How to solve ILP problems is a huge subject -- e.g., one can start with George Nemhauser -- but a major fraction of the techniques exploit some surprisingly nice properties of the simplex algorithm. Right, likely the best-known approach is the tree-search technique of <i>branch and bound</i>.
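<p>To make the two steps concrete, here is a toy TypeScript sketch of the second step: a tiny 0-1 set-partitioning instance (5 addresses, 6 candidate routes with costs) solved by depth-first search with a simple cost bound, a miniature stand-in for the branch and bound a real ILP solver such as Gurobi or OSL runs at scale. The routes and costs are made up for illustration.<p><pre><code>&#x2F;&#x2F; Toy 0-1 set partitioning: pick candidate routes so that every address is
&#x2F;&#x2F; covered exactly once at minimum total cost. The data is made up.
type Route = { cost: number; addresses: number[] };

const numAddresses = 5; &#x2F;&#x2F; addresses 0..4
const routes: Route[] = [
  { cost: 10, addresses: [0, 1] },
  { cost: 14, addresses: [2, 3, 4] },
  { cost: 9,  addresses: [0, 2] },
  { cost: 11, addresses: [1, 3] },
  { cost: 6,  addresses: [4] },
  { cost: 28, addresses: [0, 1, 2, 3, 4] },
];

let bestCost = Infinity;
let bestPick: number[] = [];

&#x2F;&#x2F; Depth-first search over &quot;use route i or not&quot;, pruning any partial selection
&#x2F;&#x2F; whose cost already matches or exceeds the best complete solution found so far.
function search(i: number, covered: Set&lt;number&gt;, cost: number, picked: number[]): void {
  if (cost &gt;= bestCost) return;          &#x2F;&#x2F; bound: this branch cannot improve
  if (covered.size === numAddresses) {   &#x2F;&#x2F; feasible: every address covered exactly once
    bestCost = cost;
    bestPick = picked.slice();
    return;
  }
  if (i === routes.length) return;
  const r = routes[i];
  &#x2F;&#x2F; Branch 1: take route i, allowed only if it overlaps nothing already covered.
  if (r.addresses.every(a =&gt; !covered.has(a))) {
    r.addresses.forEach(a =&gt; covered.add(a));
    search(i + 1, covered, cost + r.cost, picked.concat(i));
    r.addresses.forEach(a =&gt; covered.delete(a));
  }
  &#x2F;&#x2F; Branch 2: skip route i.
  search(i + 1, covered, cost, picked);
}

search(0, new Set&lt;number&gt;(), 0, []);
console.log(bestCost, bestPick); &#x2F;&#x2F; 24 [ 0, 1 ]
</code></pre> A real solver replaces the exhaustive search with an LP relaxation at each node of the tree to get much stronger bounds, which is where the nice properties of the simplex algorithm come in.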
Go 1.8 toolchain improvements
There are scores of other optimizations [0] as well:<p><pre><code> Optimizations: bytes, strings: optimize for ASCII sets (CL 31593) bytes, strings: optimize multi-byte index operations on s390x (CL 32447) bytes,strings: use IndexByte more often in Index on AMD64 (CL 31690) bytes: Use the same algorithm as strings for Index (CL 22550) bytes: improve WriteRune performance (CL 28816) bytes: improve performance for bytes.Compare on ppc64x (CL 30949) bytes: make IndexRune faster (CL 28537) cmd&#x2F;asm, go&#x2F;build: invoke cmd&#x2F;asm only once per package (CL 27636) cmd&#x2F;compile, cmd&#x2F;link: more efficient typelink generation (CL 31772) cmd&#x2F;compile, cmd&#x2F;link: stop generating unused go.string.hdr symbols. (CL 31030) cmd&#x2F;compile,runtime: redo how map assignments work (CL 30815) cmd&#x2F;compile&#x2F;internal&#x2F;obj&#x2F;x86: eliminate some function prologues (CL 24814) cmd&#x2F;compile&#x2F;internal&#x2F;ssa: generate bswap on AMD64 (CL 32222) cmd&#x2F;compile: accept literals in samesafeexpr (CL 26666) cmd&#x2F;compile: add more non-returning runtime calls (CL 28965) cmd&#x2F;compile: add size hint to map literal allocations (CL 23558) cmd&#x2F;compile: be more aggressive in tighten pass for booleans (CL 28390) cmd&#x2F;compile: directly construct Fields instead of ODCLFIELD nodes (CL 31670) cmd&#x2F;compile: don&#x27;t reserve X15 for float sub&#x2F;div any more (CL 28272) cmd&#x2F;compile: don’t generate pointless gotos during inlining (CL 27461) cmd&#x2F;compile: fold negation into comparison operators (CL 28232) cmd&#x2F;compile: generate makeslice calls with int arguments (CL 27851) cmd&#x2F;compile: handle e == T comparison more efficiently (CL 26660) cmd&#x2F;compile: improve s390x SSA rules for logical ops (CL 31754) cmd&#x2F;compile: improve s390x rules for folding ADDconst into loads&#x2F;stores (CL 30616) cmd&#x2F;compile: improve string iteration performance (CL 27853) cmd&#x2F;compile: improve tighten pass (CL 28712) cmd&#x2F;compile: inline _, ok = i.(T) (CL 26658) cmd&#x2F;compile: inline atomics from runtime&#x2F;internal&#x2F;atomic on amd64 (CL 27641, CL 27813) cmd&#x2F;compile: inline convT2{I,E} when result doesn&#x27;t escape (CL 29373) cmd&#x2F;compile: inline x, ok := y.(T) where T is a scalar (CL 26659) cmd&#x2F;compile: intrinsify atomic operations on s390x (CL 31614) cmd&#x2F;compile: intrinsify math&#x2F;big.mulWW, divWW on AMD64 (CL 30542) cmd&#x2F;compile: intrinsify runtime&#x2F;internal&#x2F;atomic.Xaddint64 (CL 29274) cmd&#x2F;compile: intrinsify slicebytetostringtmp when not instrumenting (CL 29017) cmd&#x2F;compile: intrinsify sync&#x2F;atomic for amd64 (CL 28076) cmd&#x2F;compile: make [0]T and [1]T SSAable types (CL 32416) cmd&#x2F;compile: make link register allocatable in non-leaf functions (CL 30597) cmd&#x2F;compile: missing float indexed loads&#x2F;stores on amd64 (CL 28273) cmd&#x2F;compile: move stringtoslicebytetmp to the backend (CL 32158) cmd&#x2F;compile: only generate ·f symbols when necessary (CL 31031) cmd&#x2F;compile: optimize bool to int conversion (CL 22711) cmd&#x2F;compile: optimize integer &quot;in range&quot; expressions (CL 27652) cmd&#x2F;compile: remove Zero and NilCheck for newobject (CL 27930) cmd&#x2F;compile: remove duplicate nilchecks (CL 29952) cmd&#x2F;compile: remove some write barriers for stack writes (CL 30290) cmd&#x2F;compile: simplify div&#x2F;mod on ARM (CL 29390) cmd&#x2F;compile: statically initialize some interface values (CL 26668) cmd&#x2F;compile: unroll comparisons to short constant 
strings (CL 26758) cmd&#x2F;compile: use 2-result divide op (CL 25004) cmd&#x2F;compile: use masks instead of branches for slicing (CL 32022) cmd&#x2F;compile: when inlining ==, don’t take the address of the values (CL 22277) container&#x2F;heap: remove one unnecessary comparison in Fix (CL 24273) crypto&#x2F;elliptic: add s390x assembly implementation of NIST P-256 Curve (CL 31231) crypto&#x2F;sha256: improve performance for sha256.block on ppc64le (CL 32318) crypto&#x2F;sha512: improve performance for sha512.block on ppc64le (CL 32320) crypto&#x2F;{aes,cipher}: add optimized implementation of AES-GCM for s390x (CL 30361) encoding&#x2F;asn1: reduce allocations in Marshal (CL 27030) encoding&#x2F;csv: avoid allocations when reading records (CL 24723) encoding&#x2F;hex: change lookup table from string to array (CL 27254) encoding&#x2F;json: Use a lookup table for safe characters (CL 24466) hash&#x2F;crc32: improve the AMD64 implementation using SSE4.2 (CL 24471) hash&#x2F;crc32: improve the AMD64 implementation using SSE4.2 (CL 27931) hash&#x2F;crc32: improve the processing of the last bytes in the SSE4.2 code for AMD64 (CL 24470) image&#x2F;color: improve speed of RGBA methods (CL 31773) image&#x2F;draw: optimize drawFillOver as drawFillSrc for opaque fills (CL 28790) math&#x2F;big: 10%-20% faster float-&gt;decimal conversion (CL 31250, CL 31275) math&#x2F;big: avoid allocation in float.{Add, Sub} when there&#x27;s no aliasing (CL 23568) math&#x2F;big: make division faster (CL 30613) math&#x2F;big: use array instead of slice for deBruijn lookups (CL 26663) math&#x2F;big: uses SIMD for some math big functions on s390x (CL 32211) math: speed up Gamma(+Inf) (CL 31370) math: speed up bessel functions on AMD64 (CL 28086) math: use SIMD to accelerate some scalar math functions on s390x (CL 32352) reflect: avoid zeroing memory that will be overwritten (CL 28011) regexp: avoid alloc in QuoteMeta when not quoting (CL 31395) regexp: reduce mallocs in Regexp.Find* and Regexp.ReplaceAll* (CL 23030) runtime: cgo calls are about 100ns faster (CL 29656, CL 30080) runtime: defer is now 2X faster (CL 29656) runtime: implement getcallersp in Go (CL 29655) runtime: improve memmove for amd64 (CL 22515, CL 29590) runtime: increase malloc size classes (CL 24493) runtime: large objects no longer cause significant goroutine pauses (CL 23540) runtime: make append only clear uncopied memory (CL 30192) runtime: make assists perform root jobs (CL 32432) runtime: memclr perf improvements on ppc64x (CL 30373) runtime: minor string&#x2F;rune optimizations (CL 27460) runtime: optimize defer code (CL 29656) runtime: remove a load and shift from scanobject (CL 22712) runtime: remove defer from standard cgo call (CL 30080) runtime: speed up StartTrace with lots of blocked goroutines (CL 25573) runtime: speed up non-ASCII rune decoding (CL 28490) strconv: make FormatFloat slowpath a little faster (CL 30099) strings: add special cases for Join of 2 and 3 strings (CL 25005) strings: make IndexRune faster (CL 28546) strings: use AVX2 for Index if available (CL 22551) strings: use Index in Count (CL 28586) syscall: avoid convT2I allocs for common Windows error values (CL 28484, CL 28990) text&#x2F;template: improve lexer performance in finding left delimiters (CL 24863) unicode&#x2F;utf8: optimize ValidRune (CL 32122) unicode&#x2F;utf8: reduce bounds checks in EncodeRune (CL 28492) </code></pre> [0] <a href="https:&#x2F;&#x2F;github.com&#x2F;golang&#x2F;go&#x2F;blob&#x2F;master&#x2F;doc&#x2F;go1.8.txt" 
rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;golang&#x2F;go&#x2F;blob&#x2F;master&#x2F;doc&#x2F;go1.8.txt</a>
I had a health crisis in France
Having had two melanomas and being most likely to have more, having grown up in Denmark with universal healthcare, and now living in the US with insurance-based healthcare, I have been thinking about health care in general a lot.<p>Some random thoughts:<p>You can never spend enough on healthcare. There are always new machines, new technologies, new drugs, new treatment types, and better-educated doctors we could spend our money on if we wanted to. Furthermore, we are treating people earlier and earlier and for more and more things. The old saying that if you are not sick it&#x27;s just because we haven&#x27;t found the right diagnosis for you seems to be true.<p>In effect, whether you are in a private healthcare system or a universal one, whether you pay double or you get taxed 100%, there will never be enough money for healthcare.<p>Now, depending on whether you have private healthcare or public healthcare, the way you measure it is completely opposite. In a private healthcare system everything is a potential profit center. I.e., the more people who are sick, the more money you can potentially make.<p>In a public universal healthcare system everything is a cost center. You have a budget and you have to deliver to a politically decided standard.<p>Both have pros and cons. To give you an example:<p>It took me 3 weeks to get an appointment with my dermatologist in Denmark, and when I finally got it, it was the day before I moved to the US. The Danish dermatologist found one they considered troublesome, but they couldn&#x27;t do the biopsy themselves and I had to get an appointment at a hospital to get it.<p>I decided to wait until I got to the US; ignorant as I was, I thought it was just a question of formalities. But no, I had to wait a whole month for my insurance to kick in (that is a whole other discussion for another time).<p>When I finally got it, though, I got a referral to a dermatologist the same day and they did the biopsy the same day. Today I am at Sloan Memorial with one of the best dermatologists in the world, getting checked every 3 months with a complete 3D scan of my body (in blue speedos and a white net), and hopefully we will be able to make sure that I am being managed properly.<p>What I am trying to say is that the level of expertise a private healthcare system allows for is more flexible than in a public one, because it allows for the allocation of resources. On the other hand, if you look at those less fortunate than me, with worse healthcare plans, etc., they will get less favorable treatment. I.e., the system isn&#x27;t evenly distributed.<p>What the public healthcare system secures is that it&#x27;s mostly evenly distributed, but with less flexibility to build experts, as there are budgets and a bigger need for prioritization in any publicly funded system, since it&#x27;s a cost center.<p>So you have fundamentally two systems: one covers only those with insurance but allows them to potentially pay their way to the latest treatments with the best doctors, and the other treats everyone but doesn&#x27;t have the same number of experts and potential treatments.<p>Neither system is really optimal. Do we want people to die because they can&#x27;t get healthcare coverage, or because the necessary treatment doesn&#x27;t exist in the country they live in? I know it&#x27;s more complicated than this, of course, but in broad strokes that&#x27;s at least my perspective, and it has led me to the following observations.<p>1) Both systems are fundamentally financially unsustainable in the long run. 
Whether the system succumbs to its own weight by costing the taxpayer too much to pay for everyone while only delivering average treatments, or whether it&#x27;s impossible for the insurance companies to secure a large enough part of the population without leaving too many without proper coverage, both just don&#x27;t sound &quot;right&quot; (I know Germany and Switzerland have some variations that sound more right, but I am not sure they don&#x27;t fall into the trap of either the cost-center or the insurance-cost issues).<p>2) One way to solve it is to ensure that people pay for all the normal encounters they have with doctors (sore throat, hernia, back pain, etc.) but that you insure yourself against long-term illness. In other words, we should pay for normal things, but no one should be going bankrupt because they can&#x27;t pay for long-term or serious illness.<p>3) By removing the insurance part from a lot of the normal encounters with the healthcare system and only putting it towards more serious conditions, hopefully doctors will start to compete against each other rather than spend all their time fighting with the insurance companies.<p>4) I have a naive hope that technology could somehow limit the cost of many of the more complicated treatments. Over time, many of the things that are wrong with us can hopefully be treated via gene therapy, not requiring too many people to do the actual treatments.<p>5) I think we have to come to terms with the fact that none of the systems really work and that all of them have solutions to problems in the other systems. That way perhaps we can start to break down healthcare into more discrete parts rather than the giant monster that it is today.<p>Thoughts?
There's the Wrong Way and Jacques Pépin's Way (2011)
I&#x27;ve been shaving with a straight razor for almost 20 years. I&#x27;ve been using Japanese chef knives for over 10 years.<p>Using a straight razor, I know what sharp is and what sharp means. But I&#x27;ve learned over the years that I suck at sharpening and maintaining a blade, not least because I&#x27;m too lazy. When it&#x27;s an implement you use on your face several times a week it&#x27;s nigh impossible to fool yourself. I still own a high-quality leather and canvas strop and an expensive Japanese water stone, but in retrospect those were aspirational purchases.<p>Some pro tips for people who want the performance but don&#x27;t want to or cannot invest the necessary time and effort.<p>Pro tip #1: For a straight razor, buy disposable blades. They&#x27;re actually _too_ sharp, making razor burn more likely. But a dull or deformed blade will cut you up like nothing else and becomes discouraging very quickly. It&#x27;s the rare person who will make it 20 years using a straight razor if they have to strop it every day and sharpen it every few weeks, so just use a disposable blade. I&#x27;ve been using a Feather razor and blades for years. It&#x27;s the best of both worlds; just keep a light touch when shaving to avoid razor burn, especially the first couple of shaves out of the box.<p>Pro tip #2: Buy a high-quality chef&#x27;s knife. It doesn&#x27;t need to be super expensive, just have a very hard cutting edge. I prefer the Japanese kind with a very hard (high 50s, low 60s on the Rockwell scale) carbon steel core sandwiched in softer stainless steel. Hard steel sharpens more easily and, most importantly, holds an edge better. However, it&#x27;s much more prone to microscopic breaks and cracks. Stainless steel is softer but more resilient--it deforms rather than breaks. It follows that you should never &quot;steel&quot; a hard, carbon steel edge; don&#x27;t buy a knife that is normally used with a honing steel, no matter how expensive or fancy.<p>Because we&#x27;re lazy and know we&#x27;re never going to sharpen it properly, if ever, the trick is remembering that an ounce of prevention is worth a pound of cure.<p>Use a cutting board made of _soft_ wood. (Or plastic, but I can&#x27;t speak to that.) There&#x27;s a reason the old-school hardwood boards were constructed with the end grain oriented up, but those are rare and quite expensive. Most hardwood boards will kill your knife in short order, especially those tropical ones with embedded silica particles. I got one as a gift and only pretend to use it when they&#x27;re visiting. For a long time I&#x27;ve been using a cheap wooden board I bought from Giant supermarket. The glue is failing but it treats my knife kindly.<p>Use a cleaver or a cheap knife for deboning.<p>Never let anybody else touch it, ever. They won&#x27;t respect it like you do. They won&#x27;t understand.<p>Never toss it, or even gently lay it down, in the sink or anywhere else the edge might accidentally so much as touch a hard surface. For obvious reasons, never put it in a drawer.<p>I&#x27;ve only gone through about 3 chef knives (2, really) in over 15 years, without ever so much as honing one, and they&#x27;ve always been sharper than anything I&#x27;ve ever used in any other household or kitchen. Including Redneck households where the sharpening stone is kept next to the easy chair.<p>My first one was a $50 special from an online store. It was incredibly sharp, and I used it for years. 
I was convinced it was magical, and for the first couple of years actually lamented I had no reason to use my sharpening implements. I traveled overseas with it. I made a dish at a party one time and made the mistake of leaving it lying around while I brought the dish to the table. Satan disguised as an innocent young lady used it to cut her brownies, which were in a glass baking dish. Not only did it destroy the knife, but there was a visible deep nick in the blade. No amount of sharpening would have restored it, as, given the depth of the nick, it likely would have required removing too much metal to be able to restore the original angle (Japanese knives are like that). And in any event I certainly wouldn&#x27;t have had the skill, and few &quot;professional&quot; sharpeners would have the skill, either. My other knives were never as good as that one, but they&#x27;re still better than anything else I&#x27;ve seen.<p>Note that I&#x27;m not a professional chef. If I were a professional I don&#x27;t think there&#x27;d be any way to avoid learning and exercising proper maintenance. But I do cook regularly, and use my knife for almost every aspect of preparation. I&#x27;m one of those people who avoids fancy kitchen gadgets, which is actually an easy preference to adopt when your knife works well.
Ask HN: What do you use to build micro-front ends?
Microservices and Front-End<p>Microservices are becoming more and more popular and many are choosing to transition away from monolithic architecture. However, this approach has mostly been limited to back-end services. While it made a lot of sense to split them into smaller independent pieces that can be accessed only through their APIs, the same did not apply to the front-end. Why is that? I think that the answer lies in the technologies we’re using. The way we are developing the front-end is not designed to be split into smaller pieces.<p>Server-side rendering is becoming history. While enterprise might not agree with that statement and continues pushing for server-side frameworks that “magically” transform, for example, Java objects to HTML and JavaScript, client frameworks will continue to increase in popularity, slowly sending server-side page rendering into oblivion. That leaves us with client-side frameworks. Single-Page Applications are what we tend to use today. AngularJS, React, ExtJS, ember.js and others proved to be the next step in the evolution of front-end development. However, Single-Page Applications or not, most of them promote a monolithic approach to front-end architecture.<p>With the back-end being split into microservices and the front-end being monolithic, the services we are building do not truly adhere to the idea that each should provide full functionality. We are supposed to apply vertical decomposition and build small loosely coupled applications. However, in most cases we’re missing the visual aspect inside those services. All front-end functionalities (authentication, inventory, shopping cart, etc.) are part of a single application and communicate with a back-end (most of the time through HTTP) that is split into microservices. This approach is a big advancement when compared with a single monolithic application. By keeping back-end services small, loosely coupled, designed for a single purpose, and easy to scale, some of the problems we had with monoliths are mitigated. While nothing is ideal and microservices have their own set of problems, finding production bugs, testing, understanding the code, changing frameworks or even languages, isolation, responsibility, and other things became easier to handle. The price we had to pay was deployment, but that as well was greatly improved with containers (Docker and Rocket) and the concept of immutable servers.<p>If we see the benefits microservices are providing on the back-end, wouldn&#x27;t it be a step forward if we could apply those benefits to the front-end as well and design microservices to be complete with not only back-end logic but also the visual parts of our applications? Wouldn&#x27;t it be beneficial if a developer or a team could fully develop a feature and let someone else just import it into the application? If we could do business in that way, the front-end (SPA or not) would be reduced to a scaffold that is in charge only of routing and deciding which services to import.<p>I&#x27;m not trying to say that no one is developing microservices in such a way that both front-end and back-end are part of them. I know that there are projects that do just that. However, I was not convinced that the benefits of splitting the front-end into parts and packing them together with the back-end outweigh the downsides of such an approach. That is, until I discovered web components.<p>Web Components<p>Web components are a group of standards proposed as a W3C specification. They allow the creation of reusable components that can be imported into Web applications. 
They are like widgets that can be imported into any Web page.<p>They are currently supported in browsers based on WebKit: Chrome, Opera, and Firefox (with a manual configuration change). As usual, Microsoft Internet Explorer is falling behind and does not have them implemented. In cases where a browser does not support web components natively, compatibility is accomplished using JavaScript polyfills.<p>Web components consist of 4 main elements, which can be used separately or all together: Custom Elements, Shadow DOM, HTML Imports, and HTML Templates.<p>Custom Elements<p>With Custom Elements we can create our own custom HTML tags and elements. Each element can have its own scripts and CSS styles. The question that might arise is why we need Custom Elements when the ability to create custom tags already exists. For a long time now we have been able to create our own tags, apply CSS styles, and add behaviors through scripts. If, for example, we would like to create a list of books, both with Custom Elements and custom tags we would end up with something like the following.<p>&lt;books-list&gt;&lt;&#x2F;books-list&gt;<p>What web components bring to the table are, among other things, lifecycle callbacks. They allow us to define behaviors specific to the component we&#x27;re developing.<p>We can use the following lifecycle callbacks with Custom Elements:<p>createdCallback defines behavior that occurs when the component is registered. attachedCallback defines behavior that occurs when the component is inserted into the DOM. detachedCallback defines behavior that occurs when the element is removed from the DOM. attributeChangedCallback defines behavior that occurs when an attribute of the element is added, changed, or removed.<p>Shadow DOM<p>Shadow DOM allows us to encapsulate JavaScript, CSS, and HTML inside a Web Component. When inside a component, those things are separated from the DOM of the main document. In a way, this separation is similar to the one we&#x27;re using when building API services. A consumer of an API service does not know nor care about its internals. The only things that matter for a consumer are the API requests it can make. Such a service does not have access to the &quot;outside world&quot; except to make requests to the APIs of other services. Similar features can be observed in web components. Their internal behavior cannot be accessed from outside (except when allowed by design), nor can they affect the DOM document they reside in. The main way of communication between web components is by firing events.<p>HTML Imports<p>HTML Imports are the packaging mechanism for web components. They are the way to tell the DOM the location of a Web Component. In the context of microservices, an import can be the remote location of a service that contains the component we want to use.<p>&lt;link rel=&quot;import&quot; href=&quot;&#x2F;services&#x2F;books&#x2F;books-list.html&quot;&gt;<p>HTML Templates<p>The HTML template element can be used to hold client-side content that will not be rendered when a page is loaded. It can, however, be instantiated through JavaScript. It is a fragment of code that can be used in the document.<p>Microservices With Front-End<p>Web components provide a very elegant way to create pieces of front-end that can be imported into Web applications. Those pieces can be packaged into microservices together with the back-end. That way, the services we are building can be complete, with both logic and visual representation packed together. 
If this approach is taken, front-end applications can be reduced to routing, deciding which set of components to display, and orchestrating events between different web components.<p>Now that we are equipped with (very) basic information about web components and the desire to try a new approach to developing microservices, we can start building a microservice with both front-end and back-end included.<p>In the Developing Front-End Microservices With Polymer Web Components And Test-Driven Development series we&#x27;ll explore one of the ways to put the discussion from this article into practice. We&#x27;ll use Polymer (Google&#x27;s library for creating web components), Docker, Docker Compose, and a few more tools and libraries. Development will be done using a test-driven development (TDD) approach.
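<p>As a small concrete illustration of the kind of component such a service would ship, here is a minimal sketch of a books-list element written in TypeScript against the current Custom Elements API (the v1 names connectedCallback and attributeChangedCallback, rather than the older createdCallback&#x2F;attachedCallback names listed above). The element name, attribute, and event are made up for illustration; a Polymer version would look different.<p><pre><code>&#x2F;&#x2F; Minimal custom element sketch: a books-list that renders titles from an
&#x2F;&#x2F; attribute and fires an event when a title is clicked. Names are illustrative.
class BooksList extends HTMLElement {
  static get observedAttributes(): string[] {
    return [&quot;titles&quot;];
  }

  &#x2F;&#x2F; Roughly the v1 counterpart of attachedCallback: inserted into the DOM.
  connectedCallback(): void {
    this.render();
  }

  &#x2F;&#x2F; Same role as the v0 attributeChangedCallback described above.
  attributeChangedCallback(): void {
    this.render();
  }

  private render(): void {
    &#x2F;&#x2F; Shadow DOM keeps the markup and styles encapsulated from the page.
    const shadow = this.shadowRoot ?? this.attachShadow({ mode: &quot;open&quot; });
    const titles = (this.getAttribute(&quot;titles&quot;) ?? &quot;&quot;).split(&quot;,&quot;).filter(t =&gt; t.length &gt; 0);
    shadow.innerHTML = &quot;&lt;ul&gt;&quot; + titles.map(t =&gt; &quot;&lt;li&gt;&quot; + t + &quot;&lt;&#x2F;li&gt;&quot;).join(&quot;&quot;) + &quot;&lt;&#x2F;ul&gt;&quot;;
    shadow.querySelectorAll(&quot;li&quot;).forEach(li =&gt; {
      li.addEventListener(&quot;click&quot;, () =&gt; {
        &#x2F;&#x2F; Communicate outward by firing an event, as described above.
        this.dispatchEvent(new CustomEvent(&quot;book-selected&quot;, { detail: li.textContent }));
      });
    });
  }
}

customElements.define(&quot;books-list&quot;, BooksList);
&#x2F;&#x2F; Usage in a page: &lt;books-list titles=&quot;SICP,TAOCP&quot;&gt;&lt;&#x2F;books-list&gt;
</code></pre> The host application then only needs to import the component and listen for the book-selected event, which is exactly the split between a thin routing scaffold and self-contained services described above.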
The Only Entrepreneurship Lesson You Need, with Do/Don’t Reading List
I posted this as a reply to Tucker&#x27;s post:<p>Tucker —  I’m glad to finally find a writer who calls out the uselessness of ideas and the importance of problems. Everything in this world operates by one fundamental principle plus the problems, just like in math. Answers are made by the problems&#x2F;questions, and that means it’s actually impossible to get an answer without having the problem. That makes answers without problems not only useless but harmful. They make society become darker over time in that it’s harder or impossible for people to judge right and wrong, and everything becomes ambiguous when the problems are not visible. But this essential information is totally missing from modern education.<p>If you don’t mind two pieces of minor feedback I would like to see if I can contribute to you and your audience somehow.<p>Re: start with good people. We can judge whether a thing is good or bad based on whether it has a good or bad result. But what are the concrete criteria of good and bad? How can we distinguish the good people from the bad? I have learned the answer, but I’m afraid to say it here as it’s hard for the majority to understand and to accept due to its nature. The criteria of good and bad is how true it is. If a person is more truthful, let’s say 51% truthful, then they are able to recognize 51% of cases they encounter correctly, and will be able to act knowing things correctly rather than through ignorance. I have heard the percentage of the results they come to get in life is directly and exactly proportional to and a function of the percentage of truth in their consciousness versus falsehood, i.e. their degree of truthfulness. One big problem nowadays is how we can distinguish truth from falsehood. We not only do not know our own degrees of truthfulness but humans can’t really distinguish good, true teachings from bad teachings. It requires a truly Enlightened Being (i.e. a Buddha) or an individual with an extremely high level of truthfulness to be able to tell true from false when they see the matter. That brings me to my second point.<p>You quoted D.T. Suzuki about Buddha’s teaching. The problem is that it has been thousands of years since Gautama Buddha came to this world. Have you heard of the game of telephone? Or seen what happens when a photocopy is taken of a photocopy? When Buddha appeared he told people the truth that he could see with his own eyes at the time. However, those who heard what he said had a huge gap in level of consciousness between them and him. So they couldn’t see and understand the truth precisely and couldn’t transmit it precisely. As a consequence they changed the meaning slightly and lost or deteriorated some of the truth. Over thousands of years, the Truth Buddha taught has been changed or deteriorated significantly. But one big problem is that before we learn what Buddha’s teaching actually is we don’t realize why the existing Buddhism is quite different than his real teaching. A second big problem is that it’s really quite impossible for people who believe in a lie (who have falsehood in their consciousness) to be able to understand Buddha’s teaching correctly. So this is a barrier and it is why I was initially anxious about posting this. I want to tell you that meditation, the five precepts, the eightfold path, and the four noble truths are not Buddha’s actual teaching. 
His teaching can be summarized in only two parts — very simple things actually–but throughout all of the thousands of meetings I’ve had with Buddhist monks they have never been able to answer what Buddha’s real teaching is. Firstly, Buddha never claimed he was enlightened through meditation, which makes sense because it’s not possible to produce enlightenment through meditation, itself. It would be like saying a tree could produce seeds by making leaves but no fruit. Meditation throughout all of history has never produced an enlightened being. Real enlightened beings can see and say the truth or answer any kinds of questions on the spot with totally concrete answers that can be verified. However people who rely on meditation alone can only give answers without problems. Secondly, the precepts. Buddhist monks can’t keep the precepts even from the moment they wake up in the morning. The precepts all generally have in common the theme “don’t lie”. Yet, they claim they know Buddha’s teaching and&#x2F;or they claim they are doing their diligence to find it out and to inform it correctly to society. The reality is different. They not only don’t know Buddha’s teaching but they really don’t want to know either. What is the difference between their teachings and Buddha’s real teaching? Look at the four noble truths and the eightfold “path”. The Noble Truths don’t actually have the truth in them. For example, life is not suffering, itself. Life has every way in it. If you live knowing life, life can cause itself to be pleasurable. If you live without knowing life, life exhausts itself. The noble truths also talk about the “end” of suffering. They say that’s Nirvana. However every living creature receives influence from its environment and every living creature suffers — even those who have experience Nirvana many times. Nirvana means that the individual finishes&#x2F;ends all of the agony and illusions. But when the Buddha went to a cold environment he still felt cold. When he didn’t eat, he felt hungry. When he went to a hot environment he still sweat. These things feel a certain way and constitute what he called suffering. How about the Noble Truths’ “way” to end suffering, the Eightfold Path? This one is funny. There’s literally no way in the eightfold path because they are all answers without problems. The way starts from the causes that exist in the problems and goes to the answers. But without teaching being based in problems&#x2F;questions there is literally no way of life to follow in those teachings and they can no longer be used for the purpose of a living being. They are only useful for the dead.<p>I have too much to say about the above. But I wanted to tell you the fact that Modern Buddhism is very, very different from what Buddha actually taught. Monks just put what they wanted into the scriptures. These days how can we verify the difference between monks’ words and the words of the Enlightened Being? We can’t, really, and Buddhism these days has devolved into nothing more than a religion. As a result they’re involved in causing society to deteriorate. Seeing this reality, who could possibly agree with me? And which monks would want this truth to come out? They rely on people for donations to survive. That’s one reason they recommend meditation. If someone sponsors them to meditate enough maybe they will be Enlightened? But it’s nothing more than deception.<p>If not meditation, how can people experience Nirvana and how can they attain Enlightenment? 
Gautama Buddha didn’t explain much about this. However he kept pointing out “what exists” (facts) and how the world operates and told people to learn it. That’s because he knew that “what is” is the way to make people be Enlightened. The problem is that people can’t recognize “what is” before they’re Enlightened. And without an Enlightened Being to reveal facts as they are, who or what can we learn from without pure trial and error, much like Gautama did?<p>Your article, having been able to distill success into three steps&#x2F;aspects, reminded me very strongly of the answer to that question which I got from a man who claims that he attained a Perfect, supreme Enlightenment some 30–40 years ago in 1984. He has since passed away but he left his teachings for free. I wanted to share his short explanation of the four steps by which Enlightenment can be achieved, here. I hope you can read and enjoy it and that you can get something great out of it.  — -<p>First, ”To achieve Enlightenment you must first be free from lies.” That is, truth must appear. Having to get rid of lies to achieve Enlightenment means having to open your eyes to truth.<p>Second, ”You have to see ‘what is’ ” Where is right and wrong?<p>Right and wrong don’t exist in words, they are appearing through ‘what is’. When a good thing occurs its a good thing and when a bad thing occurs it’s a bad thing. No matter how good we say something is, if we don’t know ‘what is’ it’s difficult to make something good happen. So you have to see what is. You have to know how ‘what is’ comes to be. You have to know how the law of cause and effect is making ‘what is’ better or worse. You have to know the meaning inside it. Third, ”There must be conscience and courage.”<p>I constantly emphasize that there is nothing as difficult or as lonely as revealing ‘what is’ in the world. The lives of the saints in the past was like that and we can also see that in our society there were many people with the correct way of thinking who tried to make the world better who were like that. If you want to make the world better you have to teach ‘what is’ but those who tried to teach ‘what is’ properly were all abandoned in the world. So it means that if there is no conscience and courage, no matter how much they have opened their eyes to ‘what is’, they can’t do anything about it.<p>Fourthly, ”There has to be endless love inside oneself”<p>There has to be endless love to go to others and teach them. There has to be conscience and courage to endlessly want to go to others and teach them. It won’t happen if either one of these two are absent.<p><a href="https:&#x2F;&#x2F;tathagatablog.wordpress.com&#x2F;2008&#x2F;11&#x2F;01&#x2F;the-way-of-enlightenment&#x2F;" rel="nofollow">https:&#x2F;&#x2F;tathagatablog.wordpress.com&#x2F;2008&#x2F;11&#x2F;01&#x2F;the-way-of-en...</a>
Ask HN: I am 30yrs and never had a full time job, now suicidal. Any life advice?
This won&#x27;t be popular on HN, but my job is to speak the truth and eat the downvotes if necessary. :)<p>First, the truth is that God loves you and doesn&#x27;t want you to end your life. I don&#x27;t know what you believe, but I am firmly convinced that that is the truth. And being so, it overrides everything else in life. No matter how bad things seem, no matter how bad things actually are, this life is not all there is, and if we wash our sins away and live faithfully, we will inherit eternal life. Not only does that give us hope for the distant future and in the next life, but it gives us hope here and now, today, because God loves us and wants what&#x27;s best for us. Note, this is not a prosperity gospel--what God wants most is for us to be faithful, and so he does not promise us an easy life, material wealth, or even good health. But he does promise to give us what we need.<p>Secondly, from a worldly perspective, there are people who have overcome much worse circumstances than you are in right now, to achieve their dreams, prosperity, success, or just plain happiness. So there is empirical evidence that you have hope for the future.<p>Thirdly, try to take a step back from yourself and your current feelings. Try to recognize that how you feel right now is not necessarily how things actually are. We humans are funny creatures, and our minds can run away from us, leading us down dark, hypothetical paths that may bear no relation to reality.<p>I don&#x27;t know more than what you have said, so it might be that you have felt this way every day for a long time, or it might be that you have good days and bad days. Either way, you may feel differently later today, tomorrow, next week, etc. Sometimes a good night&#x27;s sleep is all I need to snap out of a blue mood I find myself in when I get tired and stressed. It used to be that I would spiral further down and down, but sometimes now I recognize that I&#x27;m not being rational, that I am tired or hungry or stressed, and that I&#x27;ll probably feel differently tomorrow--and I usually do.<p>So it might be that, at this particular moment, you are at an acutely low place, but it might be just a few hours until you&#x27;re at a more even place. Don&#x27;t be too hard on yourself. Give yourself some time. Take care of yourself. Eat a good meal, go to bed early (like, several hours early, give yourself plenty of time to get extra sleep), and give yourself the best chance at tomorrow.<p>One of the easiest patterns to fall into when depressed is to focus on oneself. It&#x27;s really easy to do this when you&#x27;re alone. It may help to envision the future life you would like to have, the future family you would like to have, etc, and think of yourself as preparing to live that life, preparing to serve those people. That means that you need to take care of yourself now so you can take care of others in the future. You have many years ahead of you and can do much good in the world. You can make many others&#x27; lives better with your time, body, and mind. Think of yourself as a potential force for good in the world, and consider yourself in training to serve.<p>Finally, if you have gone so far as to make plans for ending your life, you are at the point that you need help immediately. I don&#x27;t know anything of Germany&#x27;s social systems, but I&#x27;m sure there is a hotline or something like it that you could call or reach out to for help. Stop what you&#x27;re doing and make that call right now. 
You owe it to yourself and those whom you will serve in the future to save your life now. It won&#x27;t be easy, but it&#x27;s the right thing to do, and you are strong enough to do it. I know you are, because you&#x27;ve already reached out here. Don&#x27;t think about it, don&#x27;t rationalize about it, just do it. All the other stuff can wait and can come in time. Take care of yourself now. Take a step back and consider yourself a friend in need of your help, and do what you would do for a beloved friend.<p>Here is some information I found for Germany. Please go here and reach out to one of them now: <a href="http:&#x2F;&#x2F;www.suicide.org&#x2F;hotlines&#x2F;international&#x2F;germany-suicide-hotlines.html" rel="nofollow">http:&#x2F;&#x2F;www.suicide.org&#x2F;hotlines&#x2F;international&#x2F;germany-suicid...</a><p>I hope some of this is encouraging to you. I will be praying for you. Let me know if you&#x27;d like to talk privately out-of-band, and I&#x27;ll be happy to correspond via email, etc.
Ask HN: How to transition from worker to manager?
You are identifying and acknowledging some limitations, which, as others have said, is a great first step. I have also followed a similar path to you, and moved into a management role a while back. I&#x27;ve put together a few thoughts and my attempts at keeping this comment short haven&#x27;t been successful; nevertheless, I hope it helps you in some way.<p>One of the things that helped me greatly was recalling behaviours of managers that I thought highly of and emulating them, as well as thinking about poor management practices I have been on the receiving end of and making sure they don&#x27;t happen again. The idea of this exercise is for you to have a clear vision of who you want to be as a manager. This can be as shallow as how you want to be perceived, or go as deep as how you want to act every day.<p>Once you understand the type of manager you want to be, you can then decide how you want to evolve that idea of yourself based on your experiences and everything you learn. How you communicate your growth as a manager is also something that you should consider. I have worked with management lecturers that advocated authentic leadership, and it is something I try to live by.<p>Being an authentic leader for me meant being honest with my team when I don&#x27;t know something, when I&#x27;m wrong, and when I&#x27;m making a serious attempt to change how I operate as a manager. As an additional consideration though, a manager I respect gave me the advice that such actions can be seen as weakness by other managers and used against me; I made a conscious decision to continue the practice, but you need to decide whether your environment will be receptive to how you want to operate.<p>This leads to one of the first points you raised, where you wrote &quot;I&#x27;m having a hard time with the manager role transition as I enjoy getting my hands dirty and diving in to solve problems.&quot; I think it&#x27;s important to understand that as a manager, you no longer have a single responsibility&#x2F;obligation to the manager you are reporting to. As a manager, your dual responsibilities are to ensure your team are working optimally whilst making sure your manager and the greater organisation know about it. I cannot stress enough that this is more than a full-time job.<p>What changed my perspective about meetings is the following thought - do I want my most productive team members to be in meetings, or do I want to be the filter that ensures only the most useful information goes through? Being a filter can take many forms. It may mean sitting in on exploratory meetings to determine how serious the organisation is about a new task&#x2F;project, and throwing in a business analyst to further test the waters. Or it may mean immediately pulling your top engineer off their current task to help the company with an incident.<p>In regards to your comment about &quot;knowing how much information to provide, how much to expect&quot;, I think it&#x27;s best to discuss this with your team. I tend to have a general rule that if I can&#x27;t imagine myself developing a solution based on the knowledge I have about the problem, then I need to continue dialogue with stakeholders before shifting the focus of my team. Although as you mentioned, being hands-off means you become detached from the realities (read: challenges) of implementing working solutions in your environment. 
This is why I would recommend identifying a technical lead that is basically your 2IC and someone that keeps you technically grounded.<p>To shorten this comment, I will just conclude by saying that my view of management is that you can still play either a developer or an administrator role, but the system you are working on is the system of organisation that makes up your company, rather than code and servers. As a developer it means you are constantly looking for ways to improve products, ensuring people with the right skills are being utilised or promoted so that they can contribute towards outcomes that benefit all. As an administrator, your role will be to ensure business continuity, by ensuring knowledge is passed on and successions can occur with minimal disruption. The principles that apply to building and maintaining scalable systems all apply to your company, such as redundancy, reliability, efficiency, etc.<p>My final bit of advice is that you should be honest with yourself about whether the role is right for you. I&#x27;ve seen some people take to management like a new lease on life, while I myself have gone back to a development role with the desire to be no more than a technical lead or a technical founder, which is a completely different goal altogether.
Reddit cracks down on abuse as CEO apologizes for trolling the trolls
Short version:<p>* The steps to allow for censorship in the software and diminish the visibility of asshattery is a necessary thing. * It is unfortunate that it wasn&#x27;t done before. * It is a pattern that has repeated itself many times over the decades. * Read A Group Is Its Own Worst Enemy. * Web 2.0 puts too much work on too few people.<p>Long version:<p>One of the talks that was passed around (I think it was Everything2 that introduced me to it, but I could be wrong) is A Group Is Its Own Worst Enemy ( <a href="http:&#x2F;&#x2F;www.shirky.com&#x2F;writings&#x2F;group_enemy.html" rel="nofollow">http:&#x2F;&#x2F;www.shirky.com&#x2F;writings&#x2F;group_enemy.html</a> ). This goes to the problems and some necessary designs for social software - reddit is one such example.<p>The story in that talk that this current episode reminds me of is that of Communitree:<p>----<p>&gt; Communitree was founded on the principles of open access and free dialogue. &quot;Communitree&quot; -- the name just says &quot;California in the Seventies.&quot; And the notion was, effectively, throw off structure and new and beautiful patterns will arise.<p>&gt; And, indeed, as anyone who has put discussion software into groups that were previously disconnected has seen, that does happen. Incredible things happen. The early days of Echo, the early days of usenet, the early days of Lucasfilms Habitat, over and over again, you see all this incredible upwelling of people who suddenly are connected in ways they weren&#x27;t before.<p>&gt; And then, as time sets in, difficulties emerge. In this case, one of the difficulties was occasioned by the fact that one of the institutions that got hold of some modems was a high school. And who, in 1978, was hanging out in the room with the computer and the modems in it, but the boys of that high school. And the boys weren&#x27;t terribly interested in sophisticated adult conversation. They were interested in fart jokes. They were interested in salacious talk. They were interested in running amok and posting four-letter words and nyah-nyah-nyah, all over the bulletin board.<p>&gt; And the adults who had set up Communitree were horrified, and overrun by these students. The place that was founded on open access had too much open access, too much openness. They couldn&#x27;t defend themselves against their own users. The place that was founded on free speech had too much freedom. They had no way of saying &quot;No, that&#x27;s not the kind of free speech we meant.&quot;<p>&gt; But that was a requirement. In order to defend themselves against being overrun, that was something that they needed to have that they didn&#x27;t have, and as a result, they simply shut the site down.<p>----<p>To me, Reddit is facing this exact same problem. It wants to be a place for free speech, but the right type of free speech. It also hasn&#x27;t designed the necessary infrastructure of code to allow the community of not t_d to defend itself and maintain the type of content that that core community wants.<p>And thus, backchannel slack channels to try to get people to tone it down a bit - because the software didn&#x27;t support the necessary structures to prevent it from happening.<p>That passage quoted above goes on:<p>&gt; Now you could ask whether or not the founders&#x27; inability to defend themselves from this onslaught, from being overrun, was a technical or a social problem. Did the software not allow the problem to be solved? 
Or was it the social configuration of the group that founded it, where they simply couldn&#x27;t stomach the idea of adding censorship to protect their system. But in a way, it doesn&#x27;t matter, because technical and social issues are deeply intertwined. There&#x27;s no way to completely separate them.<p>&gt; What matters is, a group designed this and then was unable, in the context they&#x27;d set up, partly a technical and partly a social context, to save it from this attack from within. And attack from within is what matters. Communitree wasn&#x27;t shut down by people trying to crash or syn-flood the server. It was shut down by people logging in and posting, which is what the system was designed to allow. The technological pattern of normal use and attack were identical at the machine level, so there was no way to specify technologically what should and shouldn&#x27;t happen. Some of the users wanted the system to continue to exist and to provide a forum for discussion. And other of the users, the high school boys, either didn&#x27;t care or were actively inimical. And the system provided no way for the former group to defend itself from the latter.<p>&gt; Now, this story has been written many times. It&#x27;s actually frustrating to see how many times it&#x27;s been written. You&#x27;d hope that at some point that someone would write it down, and they often do, but what then doesn&#x27;t happen is other people don&#x27;t read it.<p>----<p>I believe that the failing of Web 2.0 is that most people don&#x27;t care. User moderated content is a great thing - when its moderated. Without that moderation (which often falls disproportionately on a very, very, small group) you end up with doing tech support for people who are either asses to the world or intentionally trying to make your job suck in a very hostile way.<p>Back channels and trying to appeal to individuals doesn&#x27;t scale. The software needs to support the necessary tools of moderation (which include censorship and banning).
Magic mushroom chemical psilocybin could be key to treating depression – studies
All the papers in the issue in question:<p>&quot;Psilocybin for anxiety and depression in cancer care? Lessons from the past and prospects for the future&quot;, Nutt 2016 <a href="https:&#x2F;&#x2F;www.dropbox.com&#x2F;s&#x2F;s2l739f0gstsyzx&#x2F;2016-nutt.pdf" rel="nofollow">https:&#x2F;&#x2F;www.dropbox.com&#x2F;s&#x2F;s2l739f0gstsyzx&#x2F;2016-nutt.pdf</a><p>&quot;Psilocybin produces substantial and sustained decreases in depression and anxiety in patients with life-threatening cancer: A randomized double-blind trial&quot;, Griffiths et al 2016: <a href="https:&#x2F;&#x2F;www.dropbox.com&#x2F;s&#x2F;nxi3ix88jo67y88&#x2F;2016-griffiths.pdf" rel="nofollow">https:&#x2F;&#x2F;www.dropbox.com&#x2F;s&#x2F;nxi3ix88jo67y88&#x2F;2016-griffiths.pdf</a><p>&quot;Cancer patients often develop chronic, clinically significant symptoms of depression and anxiety. Previous studies suggest that psilocybin may decrease depression and anxiety in cancer patients. The effects of psilocybin were studied in 51 cancer patients with life-threatening diagnoses and symptoms of depression and&#x2F;or anxiety. This randomized, double-blind, cross-over trial investigated the effects of a very low (placebo-like) dose (1 or 3 mg&#x2F;70 kg) vs. a high dose (22 or 30 mg&#x2F;70 kg) of psilocybin administered in counterbalanced sequence with 5 weeks between sessions and a 6-month follow-up. Instructions to participants and staff minimized expectancy effects. Participants, staff, and community observers rated participant moods, attitudes, and behaviors throughout the study. High-dose psilocybin produced large decreases in clinician- and self-rated measures of depressed mood and anxiety, along with increases in quality of life, life meaning, and optimism, and decreases in death anxiety. At 6-month follow-up, these changes were sustained, with about 80% of participants continuing to show clinically significant decreases in depressed mood and anxiety. Participants attributed improvements in attitudes about life&#x2F;self, mood, relationships, and spirituality to the high-dose experience, with &gt;80% endorsing moderately or greater increased well-being&#x2F;life satisfaction. Community observer ratings showed corresponding changes. Mystical-type psilocybin experience on session day mediated the effect of psilocybin dose on therapeuti&quot;<p>&quot;Rapid and sustained symptom reduction following psilocybin treatment for anxiety and depression in patients with life-threatening cancer: a randomized controlled trial&quot;, Ross et al 2016: <a href="https:&#x2F;&#x2F;www.dropbox.com&#x2F;s&#x2F;eol3lp85pgbe8m3&#x2F;2016-ross.pdf" rel="nofollow">https:&#x2F;&#x2F;www.dropbox.com&#x2F;s&#x2F;eol3lp85pgbe8m3&#x2F;2016-ross.pdf</a><p>&quot;_Background_: Clinically significant anxiety and depression are common in patients with cancer, and are associated with poor psychiatric and medical outcomes. Historical and recent research suggests a role for psilocybin to treat cancer-related anxiety and depression. Methods: In this double-blind, placebo-controlled, crossover trial, 29 patients with cancer-related anxiety and depression were randomly assigned and received treatment with single-dose psilocybin (0.3 mg&#x2F;kg) or niacin, both in conjunction with psychotherapy. The primary outcomes were anxiety and depression assessed between groups prior to the crossover at 7 weeks. 
Results: Prior to the crossover, psilocybin produced immediate, substantial, and sustained improvements in anxiety and depression and led to decreases in cancer-related demoralization and hopelessness, improved spiritual wellbeing, and increased quality of life. At the 6.5-month followup, psilocybin was associated with enduring anxiolytic and anti-depressant effects (approximately 60–80% of participants continued with clinically significant reductions in depression or anxiety), sustained benefits in existential distress and quality of life, as well as improved attitudes towards death. The psilocybin-induced mystical experience mediated the therapeutic effect of psilocybin on anxiety and depression. Conclusions: In conjunction with psychotherapy, single moderate-dose psilocybin produced rapid, robust and enduring anxiolytic and anti-depressant effects in patients with cancer-related psychological distress.&quot;<p>&quot;Survey study of challenging experiences after ingesting psilocybin mushrooms: Acute and enduring positive and negative consequences&quot;, Carbonaro et al 2016 <a href="https:&#x2F;&#x2F;www.dropbox.com&#x2F;s&#x2F;z18mzgyke0j35b4&#x2F;2016-carbonaro.pdf" rel="nofollow">https:&#x2F;&#x2F;www.dropbox.com&#x2F;s&#x2F;z18mzgyke0j35b4&#x2F;2016-carbonaro.pdf</a><p>&quot;Acute and enduring adverse effects of psilocybin have been reported anecdotally, but have not been well characterized. For this study, 1993 individuals (mean age 30 yrs; 78% male) completed an online survey about their single most psychologically difficult or challenging experience (worst “bad trip”) after consuming psilocybin mushrooms. Thirty-nine percent rated it among the top five most challenging experiences of his&#x2F;her lifetime. Eleven percent put self or others at risk of physical harm; factors increasing the likelihood of risk included estimated dose, duration and difficulty of the experience, and absence of physical comfort and social support. Of the respondents, 2.6% behaved in a physically aggressive or violent manner and 2.7% received medical help. Of those whose experience occurred &gt;1 year before, 7.6% sought treatment for enduring psychological symptoms. Three cases appeared associated with onset of enduring psychotic symptoms and three cases with attempted suicide. Multiple regression analysis showed degree of difficulty was positively associated, and duration was negatively associated, with enduring increases in well-being. Difficulty of experience was positively associated with dose. Despite difficulties, 84% endorsed benefiting from the experience. The incidence of risky behavior or enduring psychological distress is extremely low when psilocybin is given in laboratory studies to screened, prepared, and supported participants.&quot;<p>&quot;The Challenging Experience Questionnaire: Characterization of challenging experiences with psilocybin mushrooms&quot;, Barrett et al 2016 <a href="https:&#x2F;&#x2F;www.dropbox.com&#x2F;s&#x2F;ndp11g9a038ofti&#x2F;2016-barrett.pdf" rel="nofollow">https:&#x2F;&#x2F;www.dropbox.com&#x2F;s&#x2F;ndp11g9a038ofti&#x2F;2016-barrett.pdf</a><p>&quot;Acute adverse psychological reactions to classic hallucinogens (“bad trips” or “challenging experiences”), while usually benign with proper screening, preparation, and support in controlled settings, remain a safety concern in uncontrolled settings (such as illicit use contexts). 
Anecdotal and case reports suggest potential adverse acute symptoms including affective (panic, depressed mood), cognitive (confusion, feelings of losing sanity), and somatic (nausea, heart palpitation) symptoms. Responses to items from several hallucinogen-sensitive questionnaires (Hallucinogen Rating Scale, the States of Consciousness Questionnaire, and the Five-Dimensional Altered States of Consciousness questionnaire) in an Internet survey of challenging experiences with the classic hallucinogen psilocybin were used to construct and validate a Challenging Experience Questionnaire. The stand-alone Challenging Experience Questionnaire was then validated in a separate sample. Seven Challenging Experience Questionnaire factors (grief, fear, death, insanity, isolation, physical distress, and paranoia) provide a phenomenological profile of challenging aspects of experiences with psilocybin. Factor scores were associated with difficulty, meaningfulness, spiritual significance, and change in well-being attributed to the challenging experiences. The factor structure did not differ based on gender or prior struggle with anxiety or depression. The Challenging Experience Questionnaire provides a basis for future investigation of predictors and outcomes of challenging experiences with classic hallucinogens.&quot;
Eaten by a Grue: podcast on Infocom games, text adventures and interactive fiction
The original Zork source code in MDL which is available here: <a href="http:&#x2F;&#x2F;retro.co.za&#x2F;adventure&#x2F;zork-mdl&#x2F;" rel="nofollow">http:&#x2F;&#x2F;retro.co.za&#x2F;adventure&#x2F;zork-mdl&#x2F;</a><p>It&#x27;s also here on github: <a href="https:&#x2F;&#x2F;github.com&#x2F;itafroma&#x2F;zork-mdl" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;itafroma&#x2F;zork-mdl</a><p>It is fascinating to read, and really beautiful code, quite understandable even if you don&#x27;t know MDL, and practically a form of literature.<p>I played the original Zork on MIT-DM and also the Infocom versions of course. Reading the source code is like seeing the behind-the-scenes underground rooms and passages at Disneyland!<p>While I was playing Zork, I found a bug. First some context: when you&#x27;re battling the troll, you can give things to him, and he eats them! Sometimes he drops his axe, and you can pick it up and kill him with it. He blocks the exits until you kill him.<p>So I tried &quot;give axe to troll,&quot; and he ate his own axe, then cowered in terror: &quot;The troll, disarmed, cowers in terror, pleading for his life in the guttural tongue of the trolls.&quot; Not satisfied with that, I tried &quot;give troll to troll&quot;, and he devoured himself: &quot;The troll, who is remarkably coordinated, catches the troll and not having the most discriminating tastes, gleefully eats it.&quot;<p>...Except that I still could not get out of the exit, because every time I tried, it said &quot;The troll fends you off with a menacing gesture.&quot;<p>I figured there must be a troll flag that wasn&#x27;t getting cleared when the troll devoured itself. And sure enough, I found it in the code, and it&#x27;s called &quot;TROLL-FLAG!-FLAG&quot;!<p>Here is an excerpt of the MDL troll code, where you can see the bug, where it should clear the troll flag when the troll devours itself, but doesn&#x27;t (well that&#x27;s how I would fix it!):<p><a href="https:&#x2F;&#x2F;github.com&#x2F;itafroma&#x2F;zork-mdl&#x2F;blob&#x2F;be079a4ed234071222e991d0da0f9e8c7f96d125&#x2F;act1.mud#L1337" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;itafroma&#x2F;zork-mdl&#x2F;blob&#x2F;be079a4ed234071222...</a><p><pre><code> &lt;COND (&lt;VERB? &quot;THROW&quot; &quot;GIVE&quot;&gt; &lt;COND (&lt;VERB? &quot;THROW&quot;&gt; &lt;TELL &quot;The troll, who is remarkably coordinated, catches the &quot; 1 &lt;ODESC2 &lt;PRSO&gt;&gt;&gt;) (&lt;TELL &quot;The troll, who is not overly proud, graciously accepts the gift&quot;&gt;)&gt; &lt;COND (&lt;==? &lt;PRSO&gt; &lt;SFIND-OBJ &quot;KNIFE&quot;&gt;&gt; &lt;TELL &quot;and being for the moment sated, throws it back. Fortunately, the troll has poor control, and the knife falls to the floor. He does not look pleased.&quot; ,LONG-TELL1&gt; &lt;TRO .T ,FIGHTBIT&gt;) (&lt;TELL &quot;and not having the most discriminating tastes, gleefully eats it.&quot;&gt; &lt;REMOVE-OBJECT &lt;PRSO&gt;&gt;)&gt;) (&lt;VERB? &quot;TAKE&quot; &quot;MOVE&quot;&gt; &lt;TELL &quot;The troll spits in your face, saying \&quot;Better luck next time.\&quot;&quot;&gt;) (&lt;VERB? &quot;MUNG&quot;&gt; &lt;TELL &quot;The troll laughs at your puny gesture.&quot;&gt;)&gt;) (&lt;AND ,TROLL-FLAG!-FLAG &lt;VERB? &quot;HELLO&quot;&gt;&gt; &lt;TELL &quot;Unfortunately, the troll can&#x27;t hear you.&quot;&gt;)&gt;&gt; </code></pre> The troll bugs are also in Zork I! 
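For readers who would rather not parse MDL, here is a rough reconstruction of the shape of that bug in TypeScript (the names and structure are hypothetical, not the actual Zork logic): the give-troll-to-troll branch removes the troll object but never clears the flag that the exit check reads, so the exits stay blocked.<p><pre><code>
// Hypothetical reconstruction of the bug's shape -- not the real Zork source.
let trollPresent = true; // the troll object still exists in the room
let trollFlag = true;    // the equivalent of TROLL-FLAG: exits blocked while true

function giveTrollToTroll(): void {
  // The troll gleefully eats itself: the object is removed...
  trollPresent = false;
  // ...but nothing here clears trollFlag, which is the bug described above.
  // The fix would be the equivalent of: trollFlag = false;
}

function roomDescription(): string {
  return trollPresent
    ? 'A nasty-looking troll brandishes a bloody axe.'
    : 'There is no troll here.';
}

function tryExit(): string {
  // The exit check only consults the flag, so with the troll gone and the
  // flag still set, the player is fended off forever.
  return trollFlag
    ? 'The troll fends you off with a menacing gesture.'
    : 'You leave the room.';
}
</code></pre>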
Here&#x27;s a newspaper article from January 18, 1985 about &quot;Infocom Glitches Bug Game Players&quot;!<p><a href="http:&#x2F;&#x2F;articles.sun-sentinel.com&#x2F;1985-01-18&#x2F;features&#x2F;8501020995_1_typing-troll-adventure-games" rel="nofollow">http:&#x2F;&#x2F;articles.sun-sentinel.com&#x2F;1985-01-18&#x2F;features&#x2F;8501020...</a><p>And here&#x27;s another article from March 29, 1985: &quot;Zork I Fuddles Mental Health With Choices&quot;:<p><a href="http:&#x2F;&#x2F;articles.sun-sentinel.com&#x2F;1985-03-29&#x2F;features&#x2F;8501120597_1_trap-door-computer-games-troll" rel="nofollow">http:&#x2F;&#x2F;articles.sun-sentinel.com&#x2F;1985-03-29&#x2F;features&#x2F;8501120...</a>
Ask HN: I'm depressed, what should I do?
As a guy who has also fallen from grace, several times, with heavy reality checks, I would say that this is a normal coping reaction in the face of what is your first big reality check:<p>- The end of school for a guy who was &quot;one of the best&quot; and with it: the loss of an academic status (grades) that has little to no impact on the real world (&quot;genius&quot; excepted), the loss of a clear path to success (study hard and it is going to be fine), and the loss of a set of guidelines (everybody is equal and graded on their knowledge).<p>- The first time somebody told you that not everyone will want to work with you. And with it the realisation that not everything works as you were led to believe: you are good at what you do, why would they let you go? If you were really that good, why wouldn&#x27;t they keep you and do everything they could to keep you?<p>- Going back to father and mother, bearing this first failure, and feeling they do not completely understand.<p>Of course you might not think of it in those terms, but it is the reality most people, even very bright people, face after college. And in my opinion there is a great deal to be grieving about. Those are just the first realisations that will come to pass in your life about how the world is brutal and does not follow any rules you think it should follow.<p>You are also at a disadvantage: because you did well in school, you never had to defend yourself in spite of your own self-image, to fend for yourself and see the good in yourself despite the flaws. Less successful students will have learned this by now, and they are prepared to fight for their share, to promote their knowledge, to put forth their projects with affected self-confidence. That is a skillset you need to learn right now. In my opinion there is a positive way of learning it, and a very bad way of learning it. And it all comes down to how you will frame this reality: the negative way will frame this need to fight and sell yourself as a cheat, a crack in the logic, something that would make sense only for less gifted people. This model, in my opinion, couldn&#x27;t be farther from the truth; it is the way of the academic and technocratic ego, and it would only work for you if you were truly narcissistic. You need to let go of what was given to you in terms of status and make it your duty to build it for yourself. You should be the one creating your own status, not others.<p>As a Psychiatrist I would say: Since depression should occur without a clear external factor, this is not a major depressive disorder, nor a minor episode; in my opinion it is a form of grief, a syndrome now called bereavement.<p>Of course you will see grief generally associated with the death of someone, but rarely in modern psychiatry will you see it come up when it is something inside you that died. You will see that kind of concept, such as narcissistic collapse, in psychoanalysis, psychodynamics, or other more holistic psychopathological frameworks. But maybe HN is not the place to go on about those.<p>When I am reading you, and I am not the only one to notice here in the comments, I can see a certain dynamism hidden below the apparent depression: you seek and imagine a future. A positive future in accord with your previous expectations. This is largely in favor of grief. Indeed, the main difference between grief and major depressive disorder is this fluidity, this motion of affects and thoughts that are able to go beyond the current state into a more positive future. 
You are also actively trying to find solutions for yourself, trying to find limits to your symptoms; they are not submerging you, you can think about them, and think about yourself relatively clearly. That is another strong cue that you will get over this, and end up stronger than you were before.<p>The 5 stages of grief are something we use in practice, but they rarely take this exact form of denial, anger, bargaining, depression, acceptance. Usually you go through fast cycles of anger, despair, fatigue, hope, discouragement. Mostly discouragement and fatigue.<p>My advice will echo other people&#x27;s comments: you should set up a healthy daily routine, which seems fairly simple but always holds in itself the secret to a fast recovery. Set even the smallest of positive, healthy goals. That simple act is in itself the fastest path to resilience, by changing how you frame your relationship with your mental state.<p>- Most important is sleep. Do sleep at regular hours, before midnight, without your computer in bed, and wake up early after your natural number of hours of sleep. If you think you need more than 8 hours, try sleeping at 11pm with blinds slightly open to let the morning light wake you before your alarm.<p>- Do sports, even if only a little yogic sun salute or a couple of pushups in the morning (cardio would be best but hard, especially in the winter!)<p>- Schedule todos and positive habits: download an app like Habitica or something, and plan your days.<p>- If you are not doing anything with your days, either start a side project, enroll in a class, or do something creative like cooking, for example, or music.<p>- Read a minimum of 15 pages per day of a non-tech-related book, maybe philosophy, or an author you like.<p>- And set a mental date as to when you will stop looking and accept one of the offers. It is fine to settle for an offer that you are not 100% sure about, but do not settle for one that would make your daily life a nightmare, like going someplace where you know nobody and&#x2F;or commuting 2 hours a day. Find someplace where life is easy, so you can enjoy your daily life.<p>- Do see your friends, be honest with them, don&#x27;t hide, have a drink, play some videogames, catch a movie.<p>- Take real good care of your environment, make it something sweet, kind, light, fun. Cherish friends and family when they are a positive force in your life; do not linger too long otherwise. A positive environment is 90% of the work for a happy life.<p>On a more general note, there is usually no cure for mental illness, but the realisation that there is no cure except your own will to understand yourself, cope, and do the best with what you have. Then the disease usually slowly disappears, controlled out of sight. I do not mean it in a bad way: taking back the reins of your own sanity is the real cure. This realisation does not come easy, nor is it possible for every type of illness, or every environment. This is my opinion, from having worked in the field for some years now, having my own issues as well and close friends suffering from all kinds of mental problems.<p>That&#x27;s it for my two cents!<p>I might be mistaken, but I honestly believe you&#x27;ll be fine, and I do not think you will drag this with you all your life. This will make you stronger, without any doubt in my mind.
Why I'm Making Python 2.8
Just a couple of things I want to point out here.<p>1. Calling it Python 2.8 is a really bad idea. If you want to fork Python in this way, great. I&#x27;m sure very few people in the community would have a problem with it if you called it Brothon or P8thon or Hackthon (could run into trouble with that one, but who knows?) or IHate3.xThon. You can call it LifeOfBrython or Snake, depending on how much you care about where the name came from.<p>Guido van Rossum is one of the least litigious people you can find in the open source community. When he wanted people to stop naming packages after PEPs (pep8, pep257), he didn&#x27;t try to go to court and figure it out later. He talked to the maintainers of the packages and explained why he thought it was problematic [0]. I wouldn&#x27;t expect a cease and desist showing up anytime soon, but it&#x27;s in poor taste, and I hope you reconsider it. Python is not yours just because you are free to use it and do as you please with it. It&#x27;s really stretching the concept of open source for you to unilaterally declare a collection of hacks to be a point release, something that&#x27;s been specifically addressed and decided against by the community as a whole.<p>2. I have so much sympathy for people who must maintain Python 2.x applications and cannot make the business case for putting the time into porting them to Python 3. I&#x27;m in that situation myself right now, and I have been for years. I can&#x27;t find a compelling argument to warrant the time so long as the 2.7.x branch is getting security updates. When the security updates stop, the move will finish.<p>The reality is that those of us who have been working with Python for a long time now (10+ years) have already figured out reliable workflows to get around the most lacking aspects of the language in our domains of expertise. Or we have swapped out subsystems in other languages for cases where we absolutely cannot find workarounds.<p>I get it. I&#x27;m in that boat. I understand that boat because I live in it.<p>I can even understand why this project happened. Because hacking things you love to make them better for you as an individual is the heart and soul of what makes software development so great.<p>What I absolutely cannot understand is why anyone (absent certain needed libraries) would start a new project on a deprecated branch of a language. And I don&#x27;t understand why this is such a contentious issue specifically with Python. I don&#x27;t hear any of my friends who work with C# or Go getting into arguments about how the latest version has failed or shouldn&#x27;t have been rolled out. As much as Swift 3 has caused problems, I don&#x27;t hear any of my iOS developer friends complaining about how that shouldn&#x27;t have happened and trying to backport the good stuff into the 2.x branch. I genuinely don&#x27;t understand this behavior in the Python community.<p>* * *<p>I do, however, suspect that it has something to do with the overall age of the community.<p>Using Python has always been a matter of taste and style. I get that. Python is a language that makes explicit tradeoffs between performance and style. It is no shock to me that people who learned to love the language in its older form want it to stay that way. 
And I say this as someone who is closer to 40 than I am to 30: we older folks in the industry need to fight against the stereotype that we can&#x27;t or won&#x27;t learn new tricks. Yes, there&#x27;s a place for battle scars and pushing back against every new flavor-of-the-month stack that rehashes old problems in new ways. But obstinately sticking with stuff simply because it&#x27;s what we know does all of us who are getting on in years a huge disservice.<p>Getting a good gig as a software engineer in your 20s is not that difficult. Getting the same gig when you&#x27;re pushing 40 and the hiring managers are still in their 20s is a lot harder. If you&#x27;re making decisions to start new projects on old versions of <i>any</i> language, please, please, please, do all of us a favor: make sure that you are doing it for solid reasons and not simply because it&#x27;s what you already know and are already comfortable with.<p>* * *<p>Because this post isn&#x27;t long enough already, I have a second pet theory about the reluctant adopters.<p>Python 3 is a _great_ language. It&#x27;s not the same as Python 2. It makes different tradeoffs about styles and choices of expression from what Python 2 does because these things can and should change over time.<p>I think that the expressive power of Python, the closeness to human language, and the ease of reading well-formed code have a lot to do with the resistance to Python 3. There are people (I&#x27;m one of them) who get really really picky about language in general. I think the Oxford comma is a necessity in almost every case. And I think people who casually omit it are stupid, shallow, non-thinking drones who don&#x27;t care about the history of language and don&#x27;t care about precise meaning of words, and therefore don&#x27;t care about me because they can&#x27;t be arsed to toss a comma in a place that greatly clarifies meaning.<p>I don&#x27;t actually think all of that, but I&#x27;m closer to that than I am to not caring at all.<p>And there are many like me. When you&#x27;re dealing with a programming language with intent at its heart, people are going to take that in different ways. When you&#x27;re dealing with a language that cares about whitespace and eschews braces and wants to limit brackets and just expose the pure logic of the program, people really are going to get fired up about print vs. print().<p>That&#x27;s expected. And beautiful that so many people care that much. But as much as I lean towards prescriptivism and the idea that rules are good for a language, even I have to admit that language evolves. But it usually evolves to be more inclusive and more expressive, not less. The really bad fights about languages in general revolve around what to include, not what to exclude. Because languages are exclusive by default.<p>A counterpoint to my idea above that we are all just old and lazy is this: that we really do genuinely care, and that we care for good reasons. But something has to give. We cannot refute the evolutionary pressure. 
Python 3 is as necessary for modern speakers of programs as a recent edition of a dictionary is in your preferred language.<p>Yes, you can get by with something older, and you can even make yourself understood by most people who speak the language.<p>But you are limiting your ability to express your intent when you make an intentional choice to refuse to adopt what the rest of the culture around you is doing.<p>Aaaaaaand I&#x27;m done. Sorry for how long that was. I didn&#x27;t intend that. It just sort of happened. If you read all the way to the end, let me know, and I&#x27;ll upvote you.<p>[0]: <a href="https:&#x2F;&#x2F;github.com&#x2F;PyCQA&#x2F;pycodestyle&#x2F;issues&#x2F;466" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;PyCQA&#x2F;pycodestyle&#x2F;issues&#x2F;466</a>
Original Spec for Lotus Notes (1984) [pdf]
Oh boy, Notes. I worked at IBM last year. I had, in turns, a frustration, a revulsion, and finally a great admiration for Lotus Notes. Not for any practical reasons—the implementation leaves much to be desired. But in the abstract? It&#x27;s kind of cool.<p>See, Lotus Notes is essentially (the modern conception of a) web browser: an MDI navigation chrome, plus rendering engine, plus VM runtime. Each time you open a Notes &quot;document&quot; (like an HTML file), the code embedded in it is run through the VM to create a DOM, which is rendered by the rendering engine and displayed in the window&#x2F;tab representing the document. One of the common DOM element types is a text link, and Notes documents frequently link to other .nsf files, which then are downloaded and which then open in Notes in new windows&#x2F;tabs.<p>The one crucial thing that Notes has over web-browsers, though, that made all the difference in how the two ended up evolving, is that in Notes, each Notes document has a <i>remote</i> database associated with it, that <i>the document</i> can read from and write to.<p>It&#x27;s a bit like a site or mobile app that uses Parse or Firebase: there&#x27;s no need to write backend logic; you just write your client, and then point it at a <i>generic</i> app backing-store server. In this case, the backing-store server is called Lotus Domino.<p>Like Parse or Firebase, Domino handles authenticating clients. For each Domino server a Notes client is signed into, they have an &quot;identity file&quot; (PGP keypair) the server recognizes as &quot;them&quot;, and which their Notes client uses to sign any documents it authors (where an update sent by a document to a Domino server is sent as a small signed document.)<p>This is in a lot of ways similar to how browsers use TLS client certificates, but lighter-weight, in a similar way to how e.g. an Apt repository&#x27;s pre-signed packages are lighter-weight than HTTPS. In TLS, a piece of data will lose its origin once it passes out the other end of a TLS tunnel, and so the server must make a metadata record of who it was talking to, and vouch for the data <i>itself</i> (with another TLS tunnel) when someone else requests it. With pre-signed documents, the server can be a dumb store-and-forward server, and the documents will always just &quot;stay&quot; signed by whoever first created them, without needing un-wrapping and re-wrapping on every transmission.<p>And <i>that</i> means that, unlike Parse or Firebase where linearization is done on the server, a Notes document can just download all the store-and-fowarded update message-documents representing its database, and then linearize them itself, using the Notes client&#x27;s own configured trust settings to decide what operations in the event stream were authorized changes, and then the document&#x27;s own merge policies to decide how to linearize the data (i.e. what fields are CRDTs, what fields are last-write-wins, etc.)<p>Then, finally, you can understand what is going on when Lotus Notes opens up and shows you what looks to be an email client: it&#x27;s a Notes document (like a web-app) syncing down a Domino database full of signed update-messages from other Notes clients (one of which can be an SMTP gateway server, allowing messages to get pushed in from outside the Notes system.) The email client document <i>chooses</i> to represent its (considered-authorized) update-messages as individual email messages in a list—but some of them are also other things, like e.g. 
edits to previously-sent messages.<p>Each signed update-message might just contain a plaintext message, or it might effectively be a publish-event pointing the Notes client at a Notes document. The email-client document is responsible for deciding how to render the plaintext messages when you focus those; but if you focus a message representing a reference to a Notes document, it just downloads&#x2F;syncs that Notes document into your local database and then the preview pane displays it in the Notes equivalent of an &lt;iframe&gt;, allowing it to run all its own code.<p>So, you could picture the default Lotus Notes email client as being less like Gmail, and more like Slack: it syncs a history of update-messages, some of which are modification-events for real &quot;messages&quot;. Like Slack, you can upload files &quot;to the service&quot; and then just send references to them to other users in your team. And some of these files could be small, self-contained HTML5 apps, talking to a service like Firebase.<p>The key differences, then, are that 1. you actually interact with such documents <i>within</i> the client (so, picture if you could post HTML documents into Slack that would be displayed <i>as an &lt;iframe&gt;</i>); and 2. the messaging service itself is hosting the Firebase-alike functionality, such that every &quot;app&quot; built with the Firebase-alike functionality gets an implicit User model mapped to the user of the messaging app.<p>When described like that, it actually sounds a bit <i>less</i> braindead than Google Wave, doesn&#x27;t it?
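A rough sketch in TypeScript of the client-side linearization idea described above (the types, field names, and merge policies here are assumptions for illustration, not actual Notes/Domino structures): the client pulls signed update-messages from a dumb store-and-forward server, drops the ones its own trust settings reject, and folds the rest into a document state using per-field merge policies.<p><pre><code>
// Hypothetical illustration of client-side linearization -- not Notes/Domino code.
interface UpdateMessage {
  author: string;          // who signed the update
  timestamp: number;       // when it was authored
  field: string;           // which field of the document it touches
  value: string;           // the new value
  signatureValid: boolean; // assume signature verification happened on download
}

type MergePolicy = 'last-write-wins' | 'append';

function linearize(
  updates: UpdateMessage[],
  trusted: (author: string) => boolean,  // the client's own trust settings
  policies: Record<string, MergePolicy>, // the document's own merge rules
): Record<string, string> {
  const doc: Record<string, string> = {};
  // The server is just store-and-forward; ordering and trust are decided here.
  const accepted = updates
    .filter((u) => u.signatureValid && trusted(u.author))
    .sort((a, b) => a.timestamp - b.timestamp);
  for (const u of accepted) {
    if (policies[u.field] === 'append') {
      doc[u.field] = (doc[u.field] ?? '') + u.value; // e.g. a running thread
    } else {
      doc[u.field] = u.value; // last-write-wins
    }
  }
  return doc;
}
</code></pre>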
How to run a meeting (1976)
interesting that this is from 1976 and still applies to 2016, but does it apply to startup companies?<p>here are my takes on meetings in general, some correlated to the article&#x27;s sentiments:<p>1. avoid meetings when possible, avoid them like the plague. as a meeting organizer, you need to be very clear about your objective&#x2F;goal for the meeting, so don&#x27;t be too liberal with other people&#x27;s time, but this also means when being asked to attend a meeting, cancelling or rejecting the ones you deem to be useless. learn to just say &quot;no&quot;. as an engineer i feel this sometimes lowers your favorability in the eyes of managers or peers, but this sets the right culture and tone, if you&#x27;re in a company where you attend a lot of useless meetings, i feel for you.<p>2. avoid meetings when possible<p>3. avoid meetings when possible<p>4. for decision making meetings, limit the number, preferably &lt;= 3, the more people you add, the more opinions you have to filter which sets things back. lots of my key decisions are done in private with one person. ever go into that meeting with 11 people in the room, all ready to say something for the sake of saying something? run, run as fast as you can!<p>5. always list actions triggered from the meeting and follow up on them adamantly with owners assigned before leaving the room. the meetings where there&#x27;s a lot of talk, but then everyone leaves without clear ownership are a waste of time.<p>6. for developers, it&#x27;s important to recognize that they maybe in &quot;the zone&quot;, so if i must have a meeting with developers, i try to organize this during the beginning or end of the day, meetings during the middle of day tend to break them out of the zone, then they have to context switch back to that deep problem that they were thinking about which would be a huge productivity fail on everyone&#x27;s part.<p>7. keep things concise, we&#x27;re not here to small talk about families or the warriors, do that at the water cooler. some people use this as an ice breaker to relax the mood, but that is just potentially a cover up for some big shit storm about to happen.<p>8. for the meetings where you&#x27;re trying to pass down information, keep it concise again, ok to reiterate key messaging, believe the shit that you&#x27;re saying, have conviction.<p>9. keep track of time, i hate all the assholes that overrun meeting times, i tend to attend these meetings less and less, if they can&#x27;t prove that they can hold effective meetings then you lose my time.<p>10. know thy audience, what messaging do you want to give, what messaging do you hope they digest, and tailor it, don&#x27;t talk about stuff that 80% of the people don&#x27;t care about, you&#x27;re wasting people&#x27;s time.<p>11. the good ol&#x27; status meeting, everyone and their mother attends to get a feel for what others are working on, but has absolutely no pertinence to what i need to get done or have done. really keep things high level, this is not your chance to voice your opinion, or give people the illusion that you&#x27;re busy. just talk about the high level points, if you have stuff to resolve, don&#x27;t do it in the meeting, do it offline, ahead of the meeting.<p>12. prepare well for meetings, i used to think that i could just waltz in and improvise, no, you need to prepare well, if you have a 1h meeting with 5 people, that&#x27;s 6h of company time being spent, almost a full person day spent. you better be ready and you better get to the point.<p>13. 
be on time, every minute wasted is amplified by the number of people waiting. i usually issue punishment for the ones that come in late, sometimes just the latest, sometimes everyone who&#x27;s late, buy coffee, do pushups, whatever it is.<p>14. i have a no phone and laptop policy in my meetings, sure you could be one of those new fangled flower power children that like to take notes on ipad&#x2F;surface, or evernote on your laptop, but don&#x27;t do that. you should, however, bring in a paper notebook. i know you have photographic memory, but bring that notebook, means you&#x27;re well prepared and expecting something out of the meeting. i had a friend that brought his laptop to play nba live to his harvard law school class. i also had this senior director during a 3 on 1 interview doodle penises on his laptop while the candidate was talking. there&#x27;s potential for a lot of mistrust in these circumstances. i think for the meeting owner to project onto a screen is obviously fine, but there&#x27;s nothing concealed. assholes that answer phone calls or email during my meetings, unless you&#x27;re sre&#x2F;devops, should be banned from meetings. the goal should be to get out of the meeting as quickly as possible, everyone focused, if you cannot focus then things will drag on.<p>15. avoid meetings if possible...
Brain, Mind, Body and the Disease of Addiction
&gt; The slogan that addiction is a chronic disease of the brain is meant to put the addict beyond the reach of moral reproach — and addiction itself squarely into the domain of the medical rather than the moral.<p>&gt; Now, I agree that addicts should be treated with love and compassion instead of judgment and punishment. But what does it say about us if the only way we can muster compassion and love for those among us with substance abuse problems is by suggesting that they are solely bystanders unjustly afflicted by mechanisms in their brains?<p>This is a recurring problem that I think is only going to get worse once we get more knowledgeable about many things. Genetics, mental processes, ethics.<p>I believe it&#x27;s tightly coupled to the idea of total free will, and as long as that idea is the dominant philosophy of a society, it cannot have compassion for these people as it perceives them as acting freely. This leads to the just world hypothesis. We feel compassionate about injustice; negative outcomes coming from free will are, by definition, not unjust. Among those whose desire to have compassion is great enough, they have to say that the people in question are dealing with a physical problem, as physical causes are the only things that are perceived as capable of encroaching on total free will.<p>In reality, there is no problem. A person may have a mental model that leads them to addiction. That mental model, most likely, makes sense to them and in their circumstance. Note that mental problems are not fully logical - we&#x27;re not solving the framing problem. Genetics and environment are pieces here. Given this model, they choose addiction, because in their model, it makes sense. It turns out (we think, md224 has some interesting comments on that part of the equation) that their model is somewhat faulty, and now in addition to having a faulty model and knowing that it&#x27;s faulty somewhere, they have the expected effects of physical addiction to deal with. Just reading this paragraph shouldn&#x27;t lead anyone to believe that it will be straightforward for them to suddenly develop a perfect mental model to deal with the addiction from this situation, even if such a thing is possible. The person has made a choice, but it doesn&#x27;t really matter that they did, and the choices they continue to make, may, again, not matter. There&#x27;s nothing magical about choices, they&#x27;re still limited by one&#x27;s mental model and therefore will never be truly free. They may find a way through to a good model. Or maybe some good people will help them make their way there. Or perhaps a physical intervention will accomplish the same even faster.<p>Feeling compassion towards someone trying to navigate this complex world? Easy. Just get rid of the idea that we&#x27;re all sitting in front of two nicely presented plates where one clearly has good choices on it and the other clearly has bad choices on it. That&#x27;s not how it works.<p>But we can&#x27;t think like this. The philosophy of total free will does not allow it. A person either has free will or they don&#x27;t. Therefore the plates must clearly be there, and any good person will choose things from the right plate. Therefore, any person choosing addiction is fully aware of the outcome and expects exactly the same thing that we expect, so if it happens to them, we get to blame them and put the <i>full</i> weight of the responsibility that free will implies upon them. 
And from there comes pain.<p>The terms we use when we talk about agency: attitude, personal responsibility, consequences. These terms became steeped in guilt and shame and authority. Because that is the price we put on agency: if you want to have it, you must feel constant guilt and shame, over any decision you make that <i>we</i> didn&#x27;t want you to make that has a negative outcome. These are not light emotions, and especially those raised to be good and conscientious will be rather vulnerable to them. These inform of one&#x27;s status and worthiness and acceptance in the tribe. To feel constant guilt is to not just make mistakes, but to also be wrong fundamentally, as we know one must not feel it constantly. What is one to think when they are convinced by others that they, completely freely, constantly make bad choices? That&#x27;s a hell of a cross to bear.<p>Little surprise is it, then, that many people decide that they don&#x27;t want any of that agency if it comes at the price of constant pain and living life at the lead of someone else. We want to avoid pain, not experience it. They either deny the choices themselves, or deny that they made them, or deny that the choices existed. Anything else leads to pain. They are not allowed to merely banish guilt and shame and reclaim their agency due to the philosophy of total free will, so they just banish all of their agency all together.
Uber's predatory pricing is undermining public transit and density
In my city, rents have skyrocketed near transit stops since it&#x27;s so essential. When you&#x27;re a 20-25min walk from a transit stop, prices are a lot cheaper, but you become hard to get to and hard to get to work. New transit isn&#x27;t really an option because of density and cost (the cost to build per rider is astronomical). But the city is also dead set against increasing the density near transit stops to decrease rent&#x2F;buy prices (and increase transit ridership due to it being convenient).<p>Public transit in many cities creates hot-spots in the real estate market that isn&#x27;t good public policy either.<p>Of course, the article is right that people shifting into less dense transit will have bad environmental and congestion problems.<p>But I don&#x27;t think that traditional public transit will be the way of the future. Rather, I think that self-driving, reasonably high-density vehicles will be the future. Imagine a nice bus that seats 15-20 picking people up along an ad-hoc route in the morning determined as riders hail the bus and are instructed to an ad-hoc stop within a block and dropping them off within a block of their destination. That&#x27;s a lot more convenient than most public transit systems where you have to travel to stops, maybe change lines, not getting exactly where you want to go, etc. It could also cut down on vehicle miles travelled by creating optimized routes.<p>If Uber Pool can do what bus service can do for barely more money, a self-driving bus will be <i>way</i> better than a standard public transit experience and as efficient or more efficient environmentally.<p>In fact, I think the self-driving future in cities will be determined by good incentives. During peak periods, charge for congestion. Not broad-based attacks on vehicles, but an incentive for people to commute in higher-density vehicles where the charge can be spread among more people. It would be easy for a city to incentive Uber, Lyft, and others to offer higher-density options for commuters via congestion charging. Likewise, environmental incentives could be offered to push customers and companies toward more economical vehicles and routes. I think it&#x27;s reasonable to assume that in a self-driving future, companies like Lyft and Uber would want a lot of economical vehicles like Priuses getting 50MPG in the city. For higher-density vehicles, 10% fuel savings could push margins up a couple points - especially if environmental fuel taxes are put on top of the price of fuel. Similarly, better routing can lead to fewer miles travelled leading to savings.<p>For those that want the privacy of single-person travel, they can be charged an appropriate amount to compensate society.<p>Uber can&#x27;t do a lot of high-density vehicles currently because it relies on vehicles owned by random people. But when self-driving vehicles truly become mainstream, there&#x27;s no reason Uber wouldn&#x27;t want to expand into company-owned, higher-density vehicles. They could run these at a fraction of the cost that most public transit systems are running at. In lower density areas, maybe medium-density vehicles and in even lower density areas, single-person rides in small vehicles may remain common. When Uber can control its vehicle stock with self-driving vehicles, there&#x27;s a lot of options for them to optimize in ways that will boost their profits while also helping the environment and congestion.<p>Maybe you think Uber isn&#x27;t interested in a low-rent, non-premium service. 
That may be, but so many are interested in transit and it would be reasonably easy for a competitor to put together such a service and undercut Uber on price for so many riders. Uber would want to respond.<p>Ultimately, the article talks about bus routes doing 10 boardings per hour and how that&#x27;s more than an Uber will do. That&#x27;s probably true, but an Uber-bus would likely do more boardings due to better ad-hoc routes and more convenience. In my city, fares only cover a quarter of bus operating costs (never mind capital costs) and two-thirds of subway costs. Part of the problem is that a lot of transit systems work off the principle that they need to serve off-peak and lower utility uses in order to hit that critical mass that would make them a good choice for users. Ad-hoc, self-driving routes could relieve transit systems of their bigger loss-leaders using vehicles optimized for those areas. Similarly, off-peak service that often sees low ridership and loses money could be off-loaded. This is also an environmental win - subways are environmentally friendly when there&#x27;s a lot of riders, not when they&#x27;re mostly empty. A bus route that&#x27;s losing over $10 per rider is bad for a public transit system and also bad for the environment since the bus probably doesn&#x27;t have enough people on it to make it fuel efficient on a per-passenger basis.<p>I think there&#x27;s a genuine opportunity to do a lot better than current public transit with self-driving vehicles. Something that&#x27;s a lot more environmentally friendly and a lot more convenient.
Management theory is becoming a compendium of dead ideas
As someone who is a respected manager, according to anonymous surveys my managed peers take every year (and they are truly anonymous), as well as what I learned from the manager I came to most respect in my life and still go to to this day even though he isn&#x27;t my manager any more, I think what makes good management particularly effective is a few things. Before I list them out and explain though, I just want to add one caveat: this isn&#x27;t applicable for everyone, and certainly not every industry, and everyone&#x27;s situation is different, and it can be hard to relay some ideas perfectly in text, so I&#x27;m going to do my best.<p>With all that said, here&#x27;s a little background as well. I manage a team of folks who are in charge of IT infrastructure and deployment. Mobile devices, servers, VPNs, virtualization setups, as well as database administration and some non-customer-facing coding to keep everything going, all the way down to the desktop setups for employees. I work with maybe 30 people underneath me and there can&#x27;t be more than 100 of us all together. Each of my 30 team members is tasked with different things in accordance with the rough outline I describe above.<p>Now, to the good bits:<p>1) If I learned anything, from before I was a manager through being one, it&#x27;s a really, really simple thing. Don&#x27;t ever forget where you came from. Ever. I often will think about making a decision - some big, some small - and I remind myself &#x27;What would my reaction be if I wasn&#x27;t a manager, but underneath me? How would I react to this? Positive? Negative? Why?&#x27; I find that my best manager did this all the time, and it really showed, because he was one of us before he was a manager (for my company this is typical, we don&#x27;t get a lot of outside management for our group that hasn&#x27;t at least had some experience on the basics of what we do). The reason for this is, to me, obvious: you will better understand the actual effects of your decisions this way. It&#x27;s easy to forget all this in the day to day, but it&#x27;s a huge one.<p>Specifically, it has benefited us in that some changes that came down from those who manage me were immediately rejected because I was able to articulate, in a way they could understand, why the new change would be bad. For instance, they wanted to take away our hands-on lab for testing new technologies (this is a good part of what we do, to keep up on things) and go to a more virtual one. In theory this seems okay, there&#x27;s a lot of companies that do this (Cisco, itpro.tv, VMware, all have these kinds of things), but I made the case that keeping it hands-on, with good training attached to the hands-on labs, was more effective on the whole and cost less, because we could dedicate a rotation of people to learn something, teach it to the team, and then the team gets to test it, and they rotate into teaching it back, until we feel the technology is well covered to at least an &#x27;intermediate&#x27; level for all members on our team. With the virtual option, this would have been lost, and they would have had to pay more for the licenses. It ended up being better for us because we could just get labs set up, get documentation&#x2F;official manuals&#x2F;training material, learn it, and teach it over the course of x weeks to everyone else, and they could then come into the lab when there was allotted time, break stuff, fix stuff, etc., 
and round it went.<p>2) One of the best things I ever learned as well is that if someone (or some group) is designated as a point of contact, they should be treated as such for a project. For instance, if I designated Steve the networking guy as my point of contact for network-related projects that I need someone to oversee, I essentially report to him, with some exceptions, instead of him reporting to me about the project. This gives them freedom, and gets them into a position where they can also learn good managing skills, or at least focus on big-picture things for a while. I don&#x27;t micromanage; I philosophically meet my team halfway. If the things that we have as objectives are being met, and are exceeding expectations, then I give more leeway. I will always stick up for them if they in turn do good work, and the more good work that is produced, the more freedom I&#x27;m willing to allow. This has created a huge win for the company as well, as our team is small but incredibly productive.<p>3) Don&#x27;t try to sanitize feedback. This isn&#x27;t a &#x27;be a jerk&#x27; card, but team members who want to improve honestly like feedback, so give it to them honestly. Good and bad. One thing old managers I know used to do is never talk about the things I did well - reinforcing those things is the purpose - but only talk about how we could improve. Something my best manager did, and a skill I keep as well, is that whenever I do check in with my team in a 1-on-1 way, we talk about what they&#x27;re doing that is amazing and great, and then we talk about areas of improvement, and relate it back to &#x27;so these are your core strengths, how can you apply what makes these your core strengths to these problems?&#x27; That really gives people a lot of motivation, in my experience.<p>4) If you make a commitment to someone, meet it - obviously, huge unexpected things that you can&#x27;t plan for aside - this generally is huge. That means not forgetting, say, to ask upper management for x thing on their behalf (our company works this way a lot, though we&#x27;re trying to change that), or even just following up when you say you will. If you do miss it, explain why, honestly, and then go from there.<p>5) Training is important. Invest in your team, your team invests in you, and that makes for a great company to work for, to have as a client, and essentially helps you grow. It took our company some time to realize this, but now they&#x27;re fully on board.<p>6) Rotate their roles, but don&#x27;t push overly hard for people to do what they don&#x27;t like unless there really is a specific reason. And I&#x27;m not talking about &#x27;I don&#x27;t like putting notes together detailing these implementations&#x27;. There are musts in every job, and enforce those across the board. I&#x27;m talking more &#x27;okay, so, you&#x27;ve been doing the networking for like a year, year and a half, are you okay, burned out, want to try something new?&#x27; This lets people pick up new skills, maintain those skills, and apply their previous work experience on your team in different ways. We&#x27;re team &#x27;swiss army&#x27; internally for this reason :) and my headcount is smaller than the next guy&#x27;s, but we&#x27;re always rated one of the top teams. I think this has a lot to do with it. (Out of 4 teams, granted.)<p>7) You can be of great value as a manager if you learn to filter what is actually necessary and what isn&#x27;t from upper management before talking to your team about their goals&#x2F;expectations. See my example on the virtual labs. 
My team didn&#x27;t know that was even a consideration until another manager in the same meeting mentioned it to their team. By that time, it was dead, and my team was grateful they didn&#x27;t have to waste time thinking about it.<p>8) If someone is a bad hire, and your team isn&#x27;t working because of this, don&#x27;t be afraid to do something about it. You&#x27;re a manager; sometimes doing hard things like firing someone, or moving them into a different position they&#x27;re more skilled for, is something you need to recognize early. This goes along with my next point...<p>9) If your team members come to you saying something about how they are being treated by others, take it at face value, look into it, and have their back. Always. Always. Always. Never dismiss this. I had an incident where someone was being discriminated against based on their sex by an older teammate who, I think, just had it in their head from a different time what the role of that person was and wasn&#x27;t, and I had recently assigned them as a project lead. I didn&#x27;t tell them to deal with it; I looked into it, asked a trusted colleague to talk to some folks about it (not revealing the nature, just said &#x27;Hey man, could you do me a favor? Could you ask around and get a feel for what vibe people are getting about person x? I want some impartiality on this&#x27;). I don&#x27;t know if that violates HR policy or not, but for me it worked well. I squashed it quickly and made it VERY clear, in a short amount of time, what isn&#x27;t tolerated on my team, but it also came down to the fact that the person who was saying things that were considered sexist didn&#x27;t realize that what they had going on was just a reflection of insecurities. Sometimes you&#x27;re playing the psychologist&#x2F;therapist and working them through those feelings so they can understand what happened. I didn&#x27;t end up having to separate them or fire anyone in this case, because that person came to an understanding, and the project lead was separately okay working from a clean slate as long as it didn&#x27;t continue, once they understood what triggered what. Now they both work together extremely well and are both happy.<p>Anyway, that&#x27;s my experience. I&#x27;m no lawyer, no MBA, and your mileage may vary, especially on that last bit, but these are some representations and guidelines I follow often.
Facebook – Private Image – No Authentication Required to View
To be honest with you, I&#x27;m not even sure where to start with your rant. I&#x27;m normally pretty open minded on HN but this is just frustrating to read.<p>They refused to acknowledge this as a security risk because it&#x27;s not a security risk. In fact, they specifically call this particular topic out on their Whitehat page under the <i>Ineligible Reports and False Positives</i> heading. They have called attention to this topic as ineligible for a bounty for <i>at least</i> the last two years on that page, because 1) it&#x27;s not indicative of an application security failure and 2) they receive a vast number of invalid reports about it.<p>Let&#x27;s talk about &quot;security through obscurity&quot;, a term that has become a hand-wavy way of saying, &quot;I don&#x27;t really agree with how they&#x27;re doing this because it seems theoretically imperfect, but I can&#x27;t really explain why.&quot; A series of strings in a URL that are securely generated and infeasible to brute-force do not constitute actual security through obscurity. Security through obscurity is setting SSH to a port other than 22 (and let&#x27;s be clear - security through obscurity is not a bad thing when it&#x27;s coupled with best practices). A vast foundation of information security as a discipline relies upon the security of things that are infeasible for a computer to brute-force. Cryptography is a science predicated on the idea that making things extremely work-intensive is an acceptable substitute for information-theoretic security, which is generally unreasonable or redundant in practice.<p>Each portion of a Facebook CDN image URL is designed with an access and authorization policy in mind. The fbid URL parameter <i>alone</i> is a 64-bit number, and that is only <i>one</i> of the parameters you would need to successfully brute-force in order to correctly identify a valid image URL. How long do you suppose it would take your computer (hell, let&#x27;s give you 100, each with different IP addresses) to process 9,223,372,036,854,775,807 requests for a particular image URL on the Facebook CDN? But wait, let&#x27;s not forget, this is not offline cracking - you are making requests to Facebook&#x27;s servers, where they are free to rate-limit you at the slightest sign of suspicious or abusive activity.<p>Now let&#x27;s talk about this model of security at a higher level. Facebook uses a very robust ACL to discriminate between users that are and are not authorized to view a specific resource on facebook.com, before the user can be sent the CDN link. This ACL guarantees that parameters such as the fbid need to be correct before the CDN link will be granted, at which point those parameters are &quot;inherited&quot; by the CDN URL. The CDN uses a capability model, not unlike Imgur&#x27;s &quot;Private&quot; albums or Google&#x27;s published, non-public Docs links. If someone has a direct link to the resource, the URL parameters inherent to that link almost certainly suggest that the user is authorized to view it, because it acts as a gate before they could have been redirected to the CDN in the first place. The only other options are 1) a user brute-forced an image URL (good luck!) or 2) someone who was authorized passed the link on to someone who was not authorized.<p>Seeing that case #2 is the only realistic risk for an image URL compromise, let&#x27;s continue with that. Is case #2 a problem? From an application security perspective, no, because it doesn&#x27;t constitute a technical software vulnerability. 
From a risk assessment perspective, maybe? I agree with Facebook in that if you trust someone with a URL, you implicitly trust them to be prudent in passing that along (at least, in the context of Facebook). But this is a good juncture to talk about risk as a dimension of information security&#x27;s business process. The context of this discussion is not a military staging server, nor is it even a HIPAA compliant database. This is a CDN for people to store photos on that are <i>inherently designed to be shared with the world.</i> From a risk assessment standpoint, these photos are only designed to be shared with friends. Could your friends do something vindictive with the links? Sure, but in the context of Facebook it&#x27;s (reasonably!) assumed that you trust your friends enough to not use that access against you. From that perspective, the security measures are optimized for ensuring that only a subset of people (based on the ACL policy) are authorized to view them, not &quot;no one but me.&quot; Let&#x27;s also not forget that even if Facebook designed a much more stringent ACL, your vindictive friends could simply download the image after they are able to view it, and distribute it that way.<p>Security vulnerabilities do not exist in isolation. They are logical failures between a software&#x27;s design and implementation, or they are risk failures in a software&#x27;s design. And even if this presented an issue from a risk assessment perspective, this still does not constitute an application security vulnerability, which means it&#x27;s not eligible for a bug bounty anyway. In short, you have quite a few bars to pass in order to demonstrate that this is an issue at all.<p>Finally, I&#x27;d like to hear from you what you would do differently. How would you design a CDN for Facebook with a superior security model that matches the same scale and usability requirements?
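To make the brute-force numbers above concrete, here is a quick back-of-envelope sketch. The machine count and request rate are invented for illustration (and are wildly generous, since they ignore rate limiting entirely); the only point is the order of magnitude.<p><pre><code>public class BruteForceEstimate {
    public static void main(String[] args) {
        &#x2F;&#x2F; Hypothetical attacker: 100 machines, each somehow sustaining
        &#x2F;&#x2F; 10,000 requests per second against the CDN with no rate limiting.
        double requestsPerSecond = 100 * 10_000.0;
        double keyspace = Math.pow(2, 63);            &#x2F;&#x2F; ~9.2e18 possible fbid values
        double secondsPerYear = 365.25 * 24 * 3600;
        double years = keyspace &#x2F; requestsPerSecond &#x2F; secondsPerYear;
        &#x2F;&#x2F; Prints roughly 292,000 years - and the fbid is only one of the
        &#x2F;&#x2F; parameters that must be guessed, so the real search space is the
        &#x2F;&#x2F; product of all of them.
        System.out.printf(&quot;About %.0f years to enumerate one parameter%n&quot;, years);
    }
}</code></pre>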
Why Java Sucks – Jonathan Gardner
Yes, Java sucks. But in many real and very important ways it sucks less than any of the alternatives, especially considering the age of the language and how backwards compatible it has been over the last 12 years.<p>Sure, the ecosystem went all XML, but to be honest, is the current JSON craze so much better? Yet this XML heritage is not something one must suffer through these days.<p>GC pauses: know your tools. Azul Vega was a thing in 2005, which means GC pauses are an option, not a requirement. Of course there is the whole family of JavaRT as well, and things like IBM Metronome. If you have a big project burning engineering hours on Java GC, they should look further afield. Sure, most of these tools cost money. You think the extra hardware needed to run most of this stuff in Perl is cheap?<p>Considering where Perl and Python went in their 5-&gt;6 and 2-&gt;3 migrations, even Java 1.4 to 1.5 was smooth. Heck, even the annoying introduction of assert as a keyword in 1.4 was painless compared to those. I know of 2 projects that, even with much more manpower thrown at them, have difficulty moving from Perl 5.8 to 5.12 and beyond. Gosh, it&#x27;s like WebSphere migrations from 3 to 4 were unique in that regard, not...<p>Look, I understand, we all hate that crap Java code some consultants and solution experts wrote and that we need to deal with. But to be honest, equivalent consultants and solution experts with Perl, C, Python or JS don&#x27;t create any better solutions. In some ways Java pays the price of being practically so much better that these teams, within their enterprise constraints, actually get something out of the door. Unlike many other teams. Crap that ships is much better than the stuff that never got anywhere.<p>The hasNext and next methods are very nice. The author assumes all calls to next must be consumed. Well, that is not true, and having hasNext return a boolean instead of an expensive object (e.g. due to remote calls) allows for lots of nice optimisations (see the iterator sketch after the code block below). Throwing exceptions for normal results is a horror that is too common in Python, e.g. urllib2 and 302 redirects.<p>This actually is a rant by someone who does not really seem to know better. Yes, there are some good points in it regarding generic erasure. And it is a pain, but on the other hand it made the migration possible.<p>Hey, if you don&#x27;t like longs and int casting rules, just use BigInteger everywhere. Sure it&#x27;s slow, but so are the CPython and MRI Ruby implementations. 32 bits is not the same as 64, and the Java Memory Model can be a rude awakening if you use longs where you should have used ints.<p>The camelCase thing: use a better tool - ctrl-alt-R and it&#x27;s done consistently and correctly.<p>Things that would be nice to have in Java do exist. Getting Function etc. in Java 8 would have been great a decade ago. Rust borrow checking would be nice, as well as a change in how locks operate (i.e. the lock consuming a function that is called on the data) instead of the other way round (i.e. like Perl 6 locks).<p>Then there is the advice to use vanilla C as an option. A language where the equivalent of the following is undefined and can do anything? Including rm -rf!!! Sure, no implementation calls rm -rf on undefined behaviour - except every hacker coming in via a memory corruption bug. 
A class of bugs that is not present in other languages, and a reason C should not be your first port of call in normal circumstances.<p><pre><code>public class WhyIsThisUndefinedInC {
    int[] a = new int[]{Integer.MAX_VALUE};

    public void test() {
        new Thread(() -&gt; a[0]++).start();
        new Thread(() -&gt; a[0]++).start();
        System.out.println(a[0]);
    }
}</code></pre> In any well-defined language you have 3 possible outcomes here (the print can observe Integer.MAX_VALUE, Integer.MIN_VALUE, or Integer.MIN_VALUE + 1, depending on the interleaving), considering the total lack of locking etc.; C gives you infinitely many! Heck, moving even a small C code base to a new compiler&#x2F;libc combo is a massive undertaking in technical debt reduction. All of the other languages, except Haskell, lack a memory model that is sane and usable (or any at all). When you can easily have 144 cores in a single box, that is not a great place to be!<p>Sure, we all hate Oracle and their sales teams. But on the other hand, most of the time the customers deserve the sales team. Oh yeah, I know it&#x27;s dirty to actually make money on software - it needs to be sponsored by privacy-invading ad companies, or written by postdocs with a lack of job stability. The JVM is solid, and WORA is true for certified, TCK-passing Java implementations. Android is not one of those.
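The iterator sketch referred to above: a toy example (not from the article) of iterating over a paged remote source whose responses carry a cheap &quot;more pages follow&quot; flag. hasNext() is answered from local state only, so a caller that merely wants to know whether more data exists never pays for a fetch; the expensive call is deferred to next(). The class name and the fake paging logic are made up for illustration.<p><pre><code>import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.NoSuchElementException;

&#x2F;&#x2F; Toy sketch: hasNext() never touches the network; only next() does.
class RemotePageIterator implements Iterator&lt;String&gt; {
    private final Deque&lt;String&gt; buffer = new ArrayDeque&lt;&gt;();
    private boolean morePages = true;   &#x2F;&#x2F; reported by the previous response
    private int nextPage = 0;

    @Override
    public boolean hasNext() {
        return !buffer.isEmpty() || morePages;   &#x2F;&#x2F; cheap boolean, no remote call
    }

    @Override
    public String next() {
        if (buffer.isEmpty()) {
            if (!morePages) throw new NoSuchElementException();
            fetchPage(nextPage++);               &#x2F;&#x2F; the expensive call happens only here
        }
        return buffer.pop();
    }

    private void fetchPage(int page) {
        &#x2F;&#x2F; Stand-in for a real network call; pretend there are exactly three pages.
        buffer.add(&quot;item-&quot; + page);
        morePages = page &lt; 2;
    }
}</code></pre>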
LaunchDarkly gets $8.7M to put the right features in front of right users
Feature flags certainly aren&#x27;t new -- anyone who has been into SaaS and staged rollouts of production changes likely has built their own or leveraged libraries that implement feature flags. However, the downside of homegrown or library-based models soon becomes visible once you have a more focused organization where each group focuses on what it does best (product vs engineering vs customer success). As Joel Spolsky asserts in defense of NIH syndrome (<a href="https:&#x2F;&#x2F;www.joelonsoftware.com&#x2F;2001&#x2F;10&#x2F;14&#x2F;in-defense-of-not-invented-here-syndrome&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.joelonsoftware.com&#x2F;2001&#x2F;10&#x2F;14&#x2F;in-defense-of-not-...</a>), <i>do</i> build what is critical to your business. For a business that revolves around user modeling&#x2F;behavior&#x2F;marketing, it would make some sense to build your own. For example, Mixpanel or Totango could probably use their own internal segmentation engines to target users for new features, thereby &#x27;eating their own dogfood&#x27;. I work at Upserve, where we provide software to help restaurants and bars run their businesses better. We are obviously not directly in the business of producing feature flags for our customers, but we leverage the hell out of them.<p>Here are a few key benefits I&#x27;ve seen as an avid user of LaunchDarkly, that are end up costing a lot to build on your own:<p>- ability to have people outside of engineering control who gets access to what features. Specifically, product management should have control over what features are available, and they probably don&#x27;t want to go into a dev console to make the changes. Sure, you could build a UI on top of home grown FFs, but why spend valuable time on this?<p>- rules-based user targeting -- rather than embedding ugly logic into your code about what type of user should be served a feature, you can handle most of that within LaunchDarkly&#x27;s UI. For example, we have some restaurant groups that have 100&#x27;s of locations. Rather than setting values at the location or &lt;gasp&gt; user-level, I can simply set a flag at the restaurant group level and that automatically propagates to the user level. Sure, you could build this hierarchy into your homegrown tool, but I guarantee doing that in a very flexible manner won&#x27;t be a slam dunk, and you&#x27;ll soon find that you will want to change it and have to invest more time on something that may not be core to your business.<p>- custom rollout -- you can roll out by % of users or user properties. For instance I can roll out to 10% of users that are in Rhode Island, US. Did you really add this to your homegrown FF service? I hope you didn&#x27;t tell your VCs that if so...<p>- centralized feature flag service -- we leverage across dozens of components, and use feature flags centrally on our cloud servers and on 1000&#x27;s of agents installed inside our customers&#x27; locations (and on hardware that we don&#x27;t own). Since LaunchDarkly uses server-sent events (SSE) this introduces no additional latency in making network calls to a central server. Changes to users and flags are sent streaming, seamlessly, behind the scenes, with minimal network impact. This key infrastructure decision that the team at LaunchDarkly made has given us confidence of using feature flags wherever we could feasibly imagine them without having any adverse impact to us or our customers. 
Oh, did you remember to build that into your homegrown FF solution?<p>- multivariate flags -- LaunchDarkly supports multivariate flags. For example, we throttle data transport for some of our remote agents. We can control this centrally so that certain agents only send back certain amounts of data (e.g. 1MB, 5MB, 10MB per batch of data sent). This actually extends the concept of FeatureFlags -- there are discrete variations within a single &#x27;flag&#x27;. Don&#x27;t forget to build that into your homegrown solution too.<p>- feature flag chaining -- you can specify a prerequisite for a feature flag. For example, for a rebranding of an entire website, you could have individual feature flags for each page, and then have a &#x27;big bang&#x27; flag that does the entire thing at once. This allows you to test and manage &#x27;micro features&#x27; but then turn on the &#x27;major feature&#x27; easily.<p>- they have thought a lot about the technical debt that feature flags can introduce. How often have you added a feature flag only to not use it again or forget to remove it? It can make your code base ugly in a hurry. LaunchDarkly has analytics that help you identify usage paths as well as &#x27;flag the flag&#x27; so you can be more conscious about what is a temporary vs permanent flag. This might not sound like much, but it prevents questions later about &#x27;can we remove this flag&#x27; without causing impact. You&#x27;ve future proofed your homegrown solution to grow into this too, no doubt.<p>- organizational visibility and integrations with third party services -- LaunchDarkly has many integrations already built into it. For example, we use it to post to our #product channel in slack, so we can see when a new feature becomes available and when someone changes an existing flag&#x2F;adds users to it. This has made communication about features much less time intensive because anyone can see the current state of a feature. You didn&#x27;t really build a Slack integration or create a bot just for your homegrown feature flag service, did you?<p>If you think you&#x27;ll eventually be in the place where what I&#x27;ve spoken about would be helpful, I would strongly suggest you consider a service like LaunchDarkly early in your development so that you can focus your energy on the right things and take advantage of the hard work that LD is putting into their service.
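For anyone who has never built the percent-rollout piece described above: at its simplest it is just deterministic bucketing. Below is a minimal homegrown sketch - it is not LaunchDarkly&#x27;s SDK, and the flag and user keys are made up - showing why the same user always gets the same answer for a given flag, so a 10% rollout hits a stable 10% slice.<p><pre><code>import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

&#x2F;&#x2F; Minimal homegrown percent-rollout sketch (illustrative only).
&#x2F;&#x2F; Each user lands deterministically in a bucket in [0, 100) based on a hash
&#x2F;&#x2F; of flag key + user key, so rollout decisions are stable across requests.
public class PercentRollout {
    static boolean isEnabled(String flagKey, String userKey, int rolloutPercent) {
        return bucket(flagKey + &quot;:&quot; + userKey) &lt; rolloutPercent;
    }

    static int bucket(String key) {
        try {
            byte[] digest = MessageDigest.getInstance(&quot;SHA-256&quot;)
                    .digest(key.getBytes(StandardCharsets.UTF_8));
            &#x2F;&#x2F; Use the first four bytes as a non-negative int, then map to [0, 100).
            int value = ((digest[0] &amp; 0x7f) &lt;&lt; 24) | ((digest[1] &amp; 0xff) &lt;&lt; 16)
                      | ((digest[2] &amp; 0xff) &lt;&lt; 8) | (digest[3] &amp; 0xff);
            return value % 100;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        &#x2F;&#x2F; Roll a hypothetical &quot;new-checkout&quot; flag out to 10% of users.
        System.out.println(isEnabled(&quot;new-checkout&quot;, &quot;user-42&quot;, 10));
    }
}</code></pre> Everything else in the list above - targeting rules, multivariate values, prerequisites, streaming updates, the UI for non-engineers - is where a homegrown version quietly turns into a product of its own, which is exactly the point being made above.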
The Disadvantages of an Elite Education (2008)
As someone whose parents never went to college, who went halfway through high school from one of the nation&#x27;s top 50 worst high schools (Time magazine did an article on that) to a boarding school on scholarship, and then got a scholarship to a private top-ranked institution (not Ivy, so &quot;second tier&quot; because top 20 universities instead of just Ivy) - and while I wouldn&#x27;t redo the past for a second - to be catapulted into a completely different world like this literally overnight, while driving home from boarding school to my poor parents fighting and my mom dealing with alcoholism and stressing about how to pay the $50 application fee just to apply to college, and to pivot between that every day, really does make you see how disparate the world is.<p>I loved my boarding school because above all the normalcy led to emotional stability, so kids could focus on themselves instead of drama. Very little drama, very little dating. But everyone was actually very nice and happy. At the same time I felt like an alien even if no one intended it, because it was very obvious 98% of the students there had no clue what an elite system they were in. It also made it difficult to deal with conversations outside of school, like, say, going to my parents&#x27; friends&#x27; house for dinner. &quot;Oh, you go to THAT school, good for you for being on scholarship. I bet the kids there are spoiled brats. How do you DEAL WITH THEM!?&quot; And I would awkwardly have to explain that they were the nicest people I had ever met, and people went out of their way to make me feel included.<p>It was like switching between two alternate realities every day, when the entire population of each world you pivoted between had no idea the other existed, and if it was mentioned, each world mentioned the other in immediate disdain, while perhaps the boarding school world had the more appropriate and politically polished &quot;oh&quot; to anyone who was not at our school. It was not even disdain, just kind of a blankness in being able to relate to the outside world, but it was not because they were horrible or mean, just that we were all very busy, and to be clear, I much preferred the elite world I lived in, because everyone was so nice and I did feel more at home there than anywhere else in the world.<p>The gated campus of my own elite, beautiful boarding school had by far the nicest buildings I had ever been in, and it made me feel safe and loved and protected - a world that took care of me and cared about me more than anyone else ever had.<p>This has been an advantage for me going forward. I got a degree in Electrical Engineering and Computer Science, and I also happen to be female, so being an alien&#x2F;minority in almost every aspect has become normal for me.<p>But I can say I&#x27;ve adjusted so much better to the &quot;real world&quot;, where competition is high in engineering and software dev and no one is coddling you. A lot of my friends who took the same majors as me and went to get a job after their Bachelor&#x27;s, like I did, had severe adaptation problems, and it almost seemed childish to me, but then I realized: I had lived my whole life in a lower-middle-class world, I could carry on a conversation with anyone, and I was not afraid of venturing outside of my comfort zone. 
That may be a little bit gratuitous - obviously I don&#x27;t speak every language, things do scare me, and I deal with imposter syndrome almost daily - but in general my ability and willingness to talk to anyone I meet while I&#x27;m out, to be creative in the jobs I want, and to understand the complex economic and social impacts of the industries I work in are much more developed.<p>Ultimately the key to evolution is adaptation, and I would say I have more emotional adaptability to people and a more intuitive, more human understanding of the economic impacts that have caused so much strife between upper and lower class and the disappearing middle class.<p>Because while each of these worlds seems to be (in my opinion) more curious and therefore scared or clueless about the other world, which they don&#x27;t understand, and inherently more judgmental without meaning to be, how I see each one is with personal memories of friends, real people: Christmas dinners with the poor neighbors, and also the incredibly rich girl I knew in boarding school who now interns at Louis Vuitton. When other people assume the worst about her, I remember how her family let me live at their house when my parents were going through rough times, and her parents treated me as one of their own, and never made me feel inadequate, took my mind off horrible situations at home by bringing me along with their families to fundraisers and elaborate parties where I had a never-ending choice of designer gowns to choose from (&quot;my closet is your closet&quot;) so in no way did I feel left out. I remember her generosity more than her evasive richness, I remember her heart to give and her inclusiveness more than I remember her designer clothes, I remember her sense of humour more than I remember her aloofness to the daily grind of the outside world.<p>And as to the real world I grew up in, I remember the extreme struggles of my parents as children and have grace for their inadequacies, emotional and financial, in raising me. I remember the frustration of the neighbor who had been laid off due to a series of globalizing optimizations in his industry that were beyond his time and energy and education to understand, and how it culminated in judgement and frustration in an assumption at dinner about the kids at my school.<p>I try to remember the humanity in these people, and it is much easier to do when you have personal experiences to associate with both ends of the spectrum.<p>I definitely think everyone could stand to put themselves outside of their comfort zone more, and Habitat for Humanity on a Saturday for a resume booster is not exactly going to provide the intimate and emotional experiences needed to understand how complex and divided we are as a nation, and just how deeply cultural influences play into our prejudices against the unknown.
Compile-Time and Runtime-Safe Replacement for “printf” (2015)
GCC&#x27;s printf annotation can give you compile-time safety.<p>You can get run-time type checking in C using C99 variadic macros and C11 _Generic expressions. Here&#x27;s a proof-of-concept I threw together earlier this year in a moment of boredom. I only tested it with a few versions of clang and GCC.<p><pre><code> #include &lt;assert.h&gt; #include &lt;errno.h&gt; #include &lt;stdarg.h&gt; &#x2F;* va_list va_arg va_end va_start *&#x2F; #include &lt;stdio.h&gt; #include &lt;string.h&gt; #include &lt;netinet&#x2F;in.h&gt; #define STRFMT_CALL(F, ...) F(__VA_ARGS__) #define STRFMT_NARG_(a, b, c, d, e, f, g, h, i, j, N,...) N #define STRFMT_NARG(...) STRFMT_NARG_(__VA_ARGS__, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0) #define STRFMT_PASTE(x, y) x##y #define STRFMT_XPASTE(x, y) STRFMT_PASTE(x, y) #define STRFMT_STRINGIFY_(s) #s #define STRFMT_STRINGIFY(s) STRFMT_STRINGIFY_(s) enum strfmt_type { STRFMT_NONE = 0, STRFMT_INT, STRFMT_UINT, STRFMT_LONG, STRFMT_ULONG, STRFMT_LLONG, STRFMT_ULLONG, STRFMT_FLOAT, STRFMT_DOUBLE, STRFMT_LDOUBLE, STRFMT_CHAR_P, STRFMT_SCHAR_P, STRFMT_UCHAR_P, STRFMT_VOID_P, STRFMT_STRUCT_IN_ADDR_P, STRFMT_STRUCT_IN6_ADDR_P, STRFMT_END_P, }; #define STRFMT_END (&amp;(struct strfmt_end){ 0 }) struct strfmt_end { int _; }; extern int strfmt_unknown_expression_type; #define strfmt_badtype(X) strfmt_unknown_expression_type #define strfmt_promote(X) ((0)? 0 : (X)) #define strfmt_typeof_p_g(X, Q, def) \ _Generic(strfmt_promote(X), \ Q char *: STRFMT_CHAR_P, \ Q signed char *: STRFMT_SCHAR_P, \ Q unsigned char *: STRFMT_UCHAR_P, \ Q void *: STRFMT_VOID_P, \ Q struct in_addr *: STRFMT_STRUCT_IN_ADDR_P, \ Q struct in6_addr *: STRFMT_STRUCT_IN6_ADDR_P, \ Q struct strfmt_end *: STRFMT_END_P, \ default: (def)) #define strfmt_typeof_p(X) \ strfmt_typeof_p_g((X), const, \ strfmt_typeof_p_g((X), , strfmt_badtype(X))) #define strfmt_typeof(X) \ _Generic(strfmt_promote(X), \ int: STRFMT_INT, \ unsigned: STRFMT_UINT, \ long: STRFMT_LONG, \ unsigned long: STRFMT_ULONG, \ long long: STRFMT_LLONG, \ unsigned long long: STRFMT_ULLONG, \ float: STRFMT_FLOAT, \ double: STRFMT_DOUBLE, \ long double: STRFMT_LDOUBLE, \ default: strfmt_typeof_p(X)) #define STRFMT_EXP1(X) strfmt_typeof(X), STRFMT_STRINGIFY(X), (X) #define STRFMT_EXP2(X, ...) STRFMT_EXP1(X), STRFMT_EXP1(__VA_ARGS__) #define STRFMT_EXP3(X, ...) STRFMT_EXP1(X), STRFMT_EXP2(__VA_ARGS__) #define STRFMT_EXP4(X, ...) STRFMT_EXP1(X), STRFMT_EXP3(__VA_ARGS__) #define STRFMT_EXP5(X, ...) STRFMT_EXP1(X), STRFMT_EXP4(__VA_ARGS__) #define STRFMT_EXP6(X, ...) STRFMT_EXP1(X), STRFMT_EXP5(__VA_ARGS__) #define STRFMT_EXP7(X, ...) STRFMT_EXP1(X), STRFMT_EXP6(__VA_ARGS__) #define STRFMT_EXP8(X, ...) STRFMT_EXP1(X), STRFMT_EXP7(__VA_ARGS__) #define STRFMT_EXP9(X, ...) STRFMT_EXP1(X), STRFMT_EXP8(__VA_ARGS__) #define STRFMT_EXPAND(...) STRFMT_CALL(STRFMT_XPASTE(STRFMT_EXP, STRFMT_NARG(__VA_ARGS__)), __VA_ARGS__) #define strfmt_(dst, lim, error, fmt, ...) strfmt((dst), (lim), (error), (fmt), STRFMT_EXPAND(__VA_ARGS__)) #define strfmt(...) strfmt_(__VA_ARGS__, STRFMT_END) size_t (strfmt)(void *dst, size_t lim, int *_error, const char *fmt, ...) 
{ enum strfmt_type type; const char *exp; va_list ap; int error; *_error = 0; strlcpy(dst, &quot;&quot;, lim); va_start(ap, fmt); do { type = va_arg(ap, enum strfmt_type); exp = va_arg(ap, char *); switch (type) { case STRFMT_INT: va_arg(ap, int); strlcat(dst, &quot;(int)&quot;, lim); break; case STRFMT_UINT: va_arg(ap, unsigned); strlcat(dst, &quot;(unsigned)&quot;, lim); break; case STRFMT_LONG: va_arg(ap, long); strlcat(dst, &quot;(long)&quot;, lim); break; case STRFMT_ULONG: va_arg(ap, unsigned long); strlcat(dst, &quot;(unsigned long)&quot;, lim); break; case STRFMT_LLONG: va_arg(ap, long long); strlcat(dst, &quot;(long long)&quot;, lim); break; case STRFMT_ULLONG: va_arg(ap, unsigned long long); strlcat(dst, &quot;(unsigned long long)&quot;, lim); break; case STRFMT_FLOAT: va_arg(ap, double); strlcat(dst, &quot;(float)&quot;, lim); break; case STRFMT_DOUBLE: va_arg(ap, double); strlcat(dst, &quot;(double)&quot;, lim); break; case STRFMT_LDOUBLE: va_arg(ap, long double); strlcat(dst, &quot;(long double)&quot;, lim); break; case STRFMT_CHAR_P: va_arg(ap, char *); strlcat(dst, &quot;(char *)&quot;, lim); break; case STRFMT_SCHAR_P: va_arg(ap, signed char *); strlcat(dst, &quot;(signed char *)&quot;, lim); break; case STRFMT_UCHAR_P: va_arg(ap, unsigned char *); strlcat(dst, &quot;(unsigned char *)&quot;, lim); break; case STRFMT_VOID_P: va_arg(ap, void *); strlcat(dst, &quot;(void *)&quot;, lim); break; case STRFMT_STRUCT_IN_ADDR_P: va_arg(ap, struct in_addr *); strlcat(dst, &quot;(struct in_addr *)&quot;, lim); break; case STRFMT_STRUCT_IN6_ADDR_P: va_arg(ap, struct in6_addr *); strlcat(dst, &quot;(struct in6_addr *)&quot;, lim); break; case STRFMT_END_P: va_arg(ap, struct strfmt_end *); break; default: fprintf(stderr, &quot;unknown type:%d\n&quot;, (int)type); error = EINVAL; goto error; } if (type != STRFMT_END_P) { strlcat(dst, exp, lim); strlcat(dst, &quot;, &quot;, lim); } } while (type != STRFMT_END_P); va_end(ap); return strlen(dst); error: va_end(ap); return (*_error = error), 0; } int main(void) { char buf[1024]; const char *const const_buf = buf; int error; strfmt(buf, sizeof buf, &amp;error, &quot;(fmt)&quot;, &quot;string literal&quot;, buf, const_buf, (long double)0.0, (&amp;(struct in_addr){ INADDR_LOOPBACK, }), (uint64_t)64, __builtin_nan(&quot;&quot;) ); puts(buf); return 0; }</code></pre>
How I Write Tests
Having responded to several comments here, I am concerned that most of the discourse here seems to fail to completely understand what the goals of unit testing are - and, worse, many comments, despite this omission, seem to be made with an air of confidence which I could see myself, when I was a junior developer, accepting as reliable because of that tone. As of this writing, I feel that anyone new to unit testing who comes across this overall discussion will be sent down the wrong path and may not realize it for a very long time, and so I feel that it is important to outline what I feel are the most serious misconceptions about unit testing I see here.<p>* Code coverage&#x27;s value: code coverage is not a goal in and of itself. Seeing 100% code coverage should not make you feel comfortable, as a statistic, that there is adequate testing. If you have 100% coverage of branching, you might indeed have verified that the written code functions as intended in response to at least some possible inputs, but you have not verified that all necessary tests have been written - indeed, you cannot know this from this simple metric. To give a concrete example: if I write one test that tests only a good input to a single function in which I have forgotten a necessary null check, I will have 100% code coverage of that function, but I will not have 100% behavioral coverage (a small sketch of exactly this case appears at the end of this comment) - which brings me to the following point.<p>* What to think about when unit testing a function, or how to conceptualize the purpose of a unit test: unit tests should test the behavior of code, so simply writing a unit test that calls a function with good input and verifies that no error occurs is not in the correct spirit of testing. Several unit tests should call the same function, each with various cases of good and bad input - null pointer, empty list, list of bogus values, list of good values, and so on. Some sets of similar inputs can reasonably be grouped into one bigger unit test, provided that their assert statements are each on their own line so as to be easily identifiable from error output, but there should nevertheless be a set of unit tests that cover all possible inputs and desired behaviors.<p>* Unit test scope: a commenter I responded to in another thread had given criticism along the lines that, by making two unit tests which test cases A and B entirely independent, you fail to test the case &quot;A and B&quot;. This is a misunderstanding of what the scope of a unit test should be in order to be a good unit test - which, incidentally, goes along with misunderstanding the intent of a unit test. A unit test, conceptually, should check that the behavior of one piece of functional code under one specific condition is as intended or expected. The scope of a unit test should be the smallest scope a test can have without being trivial; we write unit tests this way so that a code change later that introduces a bug will hopefully not only be caught, but be caught with the most specificity possible - test failures should tell the engineer a story along the lines of &quot;_this_ code path behaved incorrectly when called with _this_ input, and the error occurs on _this_ line&quot;. More complex behavior, of the sort &quot;if A and B&quot;, belongs in an integration test; integration tests are the tool that has been developed to verify more complex behavior. 
If you find yourself writing a unit test that is testing the interaction of multiple variables, you should pause to consider whether you should not move the code you are writing into an integration test, and write two new, smaller unit tests, each of which verifies the behavior of each input independently of the other.<p>* Applying DRY to test setup: if you abstract away test setups, you are working against the express intention of each unit test being able to catch one specific failure case, independently of other tests. Furthermore, you are introducing the possibility of systematic errors in your application in the _very possible_ case of inserting an error into the abstractions you have identified in your test setup! Furthermore, if you find yourself setting up the same test data in many places, that should not suggest to you that you should abstract away the test setup - rather, it should hint at what is likely a poor separation of concerns and&#x2F;or insufficient decoupling in your software&#x27;s design. If you are duplicating test code, check whether you have failed to apply the DRY principle in your application&#x27;s code - don&#x27;t try to apply it to the test code.<p>And, in my opinion, the most important and common misconception I see here, and one I really feel should be more widely understood - and, in fact, many problems with legacy code will likely largely stop occurring if this mindset becomes widespread:<p>* Why do we write unit tests?<p>We write unit tests to verify the behavior of written code with respect to various inputs, yes. But that is only the mechanics of writing unit tests, and I fear that that is what most people think is the sole function of unit tests; behind the mechanics of a method there should be a philosophy, and there is.<p>Unit tests actually serve a potentially (subjectively, I would say &quot;perhaps almost always&quot;) far more vital purpose in the long term: when an engineer writes unit tests to verify the behavior of the code he has written, he is, in fact, writing down an explicit demonstration of what he intended the program to _do_; that is, he is, in a way, leaving a record of the design goals and considerations of the software.<p>(Slight aside: in my opinion, being a good software engineer does _not_ mean you write a clever solution to a problem and move on forever; rather, it means that you decompose the problem into its simplest useful components and then use those components to implement a solution to the problem at hand whose structure is clear by design and is easy for others to read and understand. It further means (or should mean) that you then implement not only verification of the functionality you had in mind and its robustness to invalid inputs which you cannot guarantee will never arrive, but also implement in such a way that it indicates what your design considerations were and serves as a guard against a change that unknowingly contradicts these considerations as a result of a change made by someone else (or yourself!) 
at a later time.<p>Later, when the code must be revisited, altered, or fixed, such unit tests, if well-written, immediately communicate what the intended behavior of the code is, in a way that cannot be as clearly (or even necessarily, almost definitely not immediately) inferred from reading the source code.<p>In summary, these are the main points that stuck out to me in the conversations here; I do want to emphasize that the last point above is, in my opinion, the most glaring omission here, because it is an overall mindset rather than a particular consideration.
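Here is the sketch of the coverage-versus-behavior example referred to above, with hypothetical names and bare asserts rather than any particular test framework.<p><pre><code>import java.util.List;

&#x2F;&#x2F; Hypothetical example: 100% line coverage without behavioral coverage.
class PriceCalculator {
    &#x2F;&#x2F; A necessary null check has been forgotten here.
    static int total(List&lt;Integer&gt; prices) {
        int sum = 0;
        for (int p : prices) {
            sum += p;
        }
        return sum;
    }
}

class PriceCalculatorTest {
    &#x2F;&#x2F; This single happy-path test executes every line of total(), so a coverage
    &#x2F;&#x2F; tool reports 100% - yet the behaviors &quot;null input&quot; and &quot;empty list&quot; are
    &#x2F;&#x2F; never specified or verified anywhere.
    static void totalOfGoodInput() {
        assert PriceCalculator.total(List.of(2, 3, 5)) == 10;
    }

    &#x2F;&#x2F; Behavioral coverage only improves with additional cases, one per behavior:
    static void totalOfEmptyListIsZero() {
        assert PriceCalculator.total(List.of()) == 0;
    }

    static void totalOfNullIsRejected() {
        try {
            PriceCalculator.total(null);
            assert false : &quot;expected a failure on null input&quot;;
        } catch (NullPointerException expected) {
            &#x2F;&#x2F; the currently implicit behavior we would want to pin down explicitly
        }
    }
}</code></pre>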
Ask HN: How to learn new things better?
With any new skill you need to be willing to sink a lot of time into it. And you need to be fine with being <i>absolutely terrible</i> at it in the beginning.<p>I usually tell people who want to learn to draw to go to <a href="http:&#x2F;&#x2F;johnkcurriculum.blogspot.com&#x2F;2009&#x2F;12&#x2F;preston-blair-lessons-fundamentals-of.html" rel="nofollow">http:&#x2F;&#x2F;johnkcurriculum.blogspot.com&#x2F;2009&#x2F;12&#x2F;preston-blair-le...</a>, get the Preston Blair book, and start doing these exercises by master animator John K (creator of &#x27;Ren &amp; Stimpy&#x27;). You will get a lot better, a lot faster. These exercises focus on simple cartoon characters who wear a lot of their construction on the outside; once you can draw cartoon characters, you can keep drawing more of them if that&#x27;s your thing, or you can build on top of that and start learning anatomy and drawing more complicated characters. (Or do both.)<p>There&#x27;s other well-regarded drawing courses on the internet and someday I should probably pick a new one to send noobs to, what with John K kind of being an asshole - but I learnt a hell of a lot when I worked under him, and he is really good at teaching this stuff.<p>Most of what I know about drawing more complicated figures came from a combination of Bridgeman&#x27;s &quot;Constructive Anatomy&quot; and Loomis&#x27; &quot;Figure Drawing for All It&#x27;s Worth&quot;, and a life drawing teacher who hewed very closely to Glen Vilppu&#x27;s drawing manual. If you can fit some life drawing classes into your life then TAKE them, you will learn a ton.<p>Also: Make a space in your life to do this. I ride the bus a lot, and before the advent of smartphones, I&#x27;d have little to do to amuse myself besides stare out of the window, read a book, or pull my sketchbook out and draw. Maybe draw some idea floating around my head, maybe draw something I glimpsed out the bus window, maybe something based on my fellow passengers, maybe just some cubes, or the hand I wasn&#x27;t drawing with. I got a lot of practice in without feeling like I was making myself &quot;practice&quot;. Whatever you may be learning, if you regularly drop yourself into a time and place with nothing much to do besides the thing you wanna learn, then you&#x27;ll do it more often.<p>Don&#x27;t blow several hundred bucks on a ton of paints, or on pro software and a Wacom tablet. Just start with a few hardback sketchbooks and some pens and pencils. Oh, and not mechanical pencils. Just grab like a pack of Ticonderoga 2.5Bs, they&#x27;re cheap and pretty good. And try holding them so that the <i>side</i> of the point addresses the paper for a lot of the beginning of your drawing; this will do several things for you:<p>* it will train you to keep your wrist fairly steady, and to draw more with your entire arm; keeping your wrist straight and steady will help keep the Carpal Tunnel Fairy away. * it will make your initial lines light, and prone to fade away as your hand brushes the paper; this keeps you from bearing down to gouge an impossible-to-erase line in the paper, and gives you more room to make mistakes before having a dark, illegible mess of lines you can&#x27;t draw over.<p>Don&#x27;t get lost in trying to save a drawing, either. Paper&#x27;s cheap, turn the page and try the same subject again, or a new one.<p>When you make a picture you like, hang it over your drawing board, turn it into your computer&#x27;s backdrop, and keep trying to draw something better than it. 
You may find yourself hating it because you start seeing all the mistakes. That&#x27;s great - go draw something new that doesn&#x27;t make those! (This may take many tries, some mistakes are harder to stop making than others.)<p>Don&#x27;t worry about &quot;your style&quot;. If someone points out a mistake in your drawing and you find yourself wanting to say &quot;but that&#x27;s my style!&quot;, then you are just covering up your weaknesses unless you can actually sit down and bust out a version of the drawing that Does It Right. When you can do that you can legitimately say &quot;dis mah style&quot;. Steal stylizations from artists you love (you&#x27;re looking at other people&#x27;s art, right? A lot?), make your own based on reality.<p>You will find a lot of people declaring &quot;rules&quot; of drawing. Always do this, never do that. The truth of the matter (IMHO) is that <i>all rules of art are actually just warnings:</i> &quot;never do this&quot; really means &quot;if you do this without thinking about what you&#x27;re doing it&#x27;ll probably turn out badly&quot;. Know the rules, know which ones you&#x27;re breaking, and break the <i>fuck</i> out of them while staying well within the boundaries of the other rules you know.<p>(I spent a decade in the LA animation scene, then burnt out and draw comics now. If you wanna look at my work to decide if I&#x27;m someone who you should listen to in this, it&#x27;s all at <a href="http:&#x2F;&#x2F;egypt.urnash.com" rel="nofollow">http:&#x2F;&#x2F;egypt.urnash.com</a>)
Why Many Cities Have No Money
Good piece, but there is another major factor that is missing entirely - the impact of debt financing and continuously rolling bonds. In addition to what the article mentions, this is the other factor contributing to the slow death of our cities. What nearly every municipality in the US has done (all municipalities BTW, not just the &quot;cities&quot;) is to take on a debt load to finance their desired expenditure, whether it be for infrastructure or otherwise - very similar to what our Federal government has done. And at first glance, it makes sense. It&#x27;s a huge expenditure, so let&#x27;s finance it and pay it off over time in accordance with tax revenues. The problem is that they almost never pay it off. I know that statement sounds crazy at first, but it&#x27;s not. Sure, they pay off bonds as they come due. But they pay them off and roll them into new bonds - that&#x27;s the problem. This has been fueled by steadily decreasing bond rates over the past 30-33 years. This allows the municipalities to roll over their debt at a lower rate when it comes due, which reduces their interest payment and also allows them to borrow more at the same time.<p>Here&#x27;s an example - It&#x27;s 1&#x2F;1&#x2F;1985 and the rate on the 10-year treasury note is 11.65% (yes, that was the real rate), so our city was able to get a rate of, say, 12%. They issue a 10-year bond for $5 million to build a school and fund some road maintenance. They make the required coupon payments (usually semi-annually) using tax revenues - and that amounts to $600,000 each year. They keep doing that until just before the principal comes due in full on 1&#x2F;1&#x2F;1995. The idiots running the city haven&#x27;t saved the $5 million required to pay off the principal, so what do they do? Well, they take a look at the market and see that the 10-year treasury note is now 7.19%, so they can get around 7.5%. So they issue a new bond for $5 million for another 10 years, and now only have to pay $375,000 in interest coupons every year. But they are still collecting 600k at the given tax rates, which leaves them 225k in the green. So they can either spend that every year or they can use that as the coupon payment on an additional $3 million bond - and they get it right now! Even if some scrupulous treasurer or township board member were to say that they should just spend the 225k and not take on any additional debt, someone will point out to them that over 10 years that&#x27;s $2.25 million, and here they instead get to use that money to spend a total of $3 million right away - these guys feel like they are making money taking on debt. And as long as interest rates keep going down and they don&#x27;t need to repay principal, it all works, so they agree. Now 1&#x2F;1&#x2F;2005 rolls around - they owe $8 million, but rates are 4.5%, so they can get 4.75%. So they roll over the $8 million for a yearly coupon of 380k. They also use the remaining 220k to open up a new bond for ~$4.6 million ($4,631,578.95 to be precise) - and now they can still pay the same 600k in interest that they have been paying for decades, but they now have an outstanding debt issuance of $12.6 million.<p>Now, if you are still reading this far and fully grasp the horror of the above, you can begin to understand one of the many reasons rates can&#x27;t go too far up anytime soon. We have been Japan&#x27;ed, and this is just part of how it happened.
Also, I want to point out that in the above example no increase in the tax rate is needed to pay for any of this, even with zero population growth. Tack on population growth, productivity and technological progress, as well as the expansion of the money supply by the Federal Reserve over that time period, and in real terms the burden of that constant 600k in coupon payments shrinks even further. Now think about how much your taxes have increased on a percentage basis over the same period at the state and municipal levels, and it should become clear just how horribly mismanaged everything has been for quite some time. Rates can&#x27;t really go much further south, so even if they hold constant and never increase...all of our broke-ass cities and towns will only be able to refinance at the same rate (best-case scenario). This means they can&#x27;t increase expenditure at all for anything unless they make taxes sky-high to support the spending. The free money game is over.
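To make the rollover arithmetic above easy to check, here is a minimal sketch (Swift, purely for illustration) that just reproduces the numbers from the example; the rates, dates and principals are the hypothetical ones from the comment, not real market data.<p><pre><code>&#x2F;&#x2F; A sketch of the rollover arithmetic from the example above.
&#x2F;&#x2F; All figures are the hypothetical ones used in the comment, not real market data.

&#x2F;&#x2F; 1985: issue $5M at 12%
let coupon1985 = 5_000_000.0 * 0.12          &#x2F;&#x2F; $600,000 per year

&#x2F;&#x2F; 1995: roll the same $5M at 7.5%, but keep collecting taxes sized for the old coupon
let coupon1995 = 5_000_000.0 * 0.075         &#x2F;&#x2F; $375,000 per year
let slack1995  = coupon1985 - coupon1995     &#x2F;&#x2F; $225,000 per year freed up
let extra1995  = slack1995 &#x2F; 0.075           &#x2F;&#x2F; supports $3,000,000 of new bonds

&#x2F;&#x2F; 2005: roll the $8M ($5M + $3M) at 4.75%
let coupon2005 = 8_000_000.0 * 0.0475        &#x2F;&#x2F; $380,000 per year
let slack2005  = coupon1985 - coupon2005     &#x2F;&#x2F; $220,000 per year freed up
let extra2005  = slack2005 &#x2F; 0.0475          &#x2F;&#x2F; supports ~$4,631,579 of new bonds

let outstanding2005 = 8_000_000.0 + extra2005
print(extra1995, extra2005)                  &#x2F;&#x2F; 3000000.0 4631578.947...
print(coupon1985, outstanding2005)           &#x2F;&#x2F; 600000.0 12631578.94... (same coupon, 2.5x the debt)
</code></pre>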
The Dark Path
The main argument here apparently is &quot;let&#x27;s not add safety features because we have a limited pool of features before the language becomes too complex&quot;. I don&#x27;t think that&#x27;s a convincing argument. It&#x27;s a question of which features you value. For example, I&#x27;d totally give up `public, private and protected` to be able to get rid of exception and `null` pointer problems forever.<p>&gt; It is programmers who create defects – not languages.<p>In the case of nulls in typed languages, it&#x27;s the language&#x27;s fault. The language is literally lying to you.<p>Let&#x27;s say that Java is claiming, for example, that a method always returns a `string`. What is a string? It&#x27;s a value of a certain class that supports certain methods that return other values. In structurally typed (&quot;duck typed&quot;) languages, the type can be defined by the methods you can call on the value (the interface it supports).<p>But this claim is just not true. The method may return null, and if you call the string methods on that value, your code may die a violent death. It&#x27;s clear that the value `null` doesn&#x27;t belong to the type `string`, since it doesn&#x27;t support the contract that a `string` provides. So why is it the programmer&#x27;s fault to expect that the type system doesn&#x27;t lie to them? I think the programmer has the right to expect an actual string - not a number, not a `StringLike`, and definitely not null, none of which fully satisfies the `string` contract.<p>Unfortunately, the language doesn&#x27;t give you the ability to distinguish between a string that cannot be null and a string-or-null (could be a string, but could also be null). Because of this, you need to track all the places that might be null in your head to make sure you check for them, or look up the test code to remind yourself all the time.<p>It&#x27;s a similar problem with exceptions. The language is claiming that functions that don&#x27;t cause exceptions act the same way as those that do. But that&#x27;s painfully untrue, and you would find that out the moment you try to `fopen` a file, read something from it, and then `fclose` it.<p><pre><code> f = fopen(&#x27;file.txt&#x27;) data = fread(f) fclose(f) </code></pre> Oops, but the `fread` call may throw, and you now have a dangling file handle. Good thing you knew that about `fread`! Now how about the other thousand methods in your code? Do you know whether any of those throw?<p>What&#x27;s that? You should be using `try-with-resources`, I hear? Okay, then what do I do with this code:<p><pre><code> atomicIncrement(inFlightRequests) result = doRequest() atomicDecrement(inFlightRequests) </code></pre> I should use try-finally. That&#x27;s true, but what if doRequest() doesn&#x27;t throw? Then I don&#x27;t need to. Why do those two look the same? Why is the language lying that they are the same?<p>Now let&#x27;s see what&#x27;s available in Swift:<p>* `try!` means - I don&#x27;t expect this piece of code to throw, even though it &quot;might&quot;. If it does, crash horribly.<p>* `try` means - I know that this piece of code throws, but I can&#x27;t handle that exception here, so propagate it.<p>* no keyword - This code should not throw. If it returns a Result that may be an error, I want to handle it right here.<p>* `try?` means - similar to the above, but it discards the specific error and returns an Optional.<p>That&#x27;s it. You manage the risk; the language only makes that explicit.
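To make that list concrete, here is a minimal sketch, assuming a hypothetical `doRequest()` declared as throwing; it shows how each call site has to spell out what it does with the risk.<p><pre><code>&#x2F;&#x2F; A sketch with a hypothetical fallible call, declared as throwing.
func doRequest() throws -&gt; String {
    return &quot;ok&quot;                           &#x2F;&#x2F; placeholder body
}

func examples() {
    &#x2F;&#x2F; try!  - this should never actually throw; if it does, crash.
    let a = try! doRequest()

    &#x2F;&#x2F; try?  - discard the specific error, get an Optional back.
    let b = try? doRequest()              &#x2F;&#x2F; b has type String?

    &#x2F;&#x2F; do&#x2F;try&#x2F;catch - handle the error right here.
    do {
        let c = try doRequest()
        print(a, b ?? &quot;none&quot;, c)
    } catch {
        print(&quot;request failed:&quot;, error)
    }
}

&#x2F;&#x2F; Inside a function that is itself declared throws,
&#x2F;&#x2F; a bare try simply propagates the error to the caller.
func caller() throws -&gt; String {
    return try doRequest()
}
</code></pre>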
It also makes it really clear what&#x27;s going on during code review.<p>Here is how the problematic piece of code looks when `doRequest` might throw:<p><pre><code> atomicIncrement(inFlightRequests) result = try doRequest() atomicDecrement(inFlightRequests) </code></pre> The possible bug is now very easy to spot. We don&#x27;t need to remember whether `doRequest` throws or to look that up. It&#x27;s clear that a `finally` or `defer` block is necessary here.<p>What&#x27;s even better is how the code looks when `doRequest` doesn&#x27;t throw!<p><pre><code> atomicIncrement(inFlightRequests) result = doRequest() atomicDecrement(inFlightRequests) </code></pre> Since we know the compiler would force a `try` keyword there if there were any possibility of an exception, we know this code is correct just by glancing at it. No need for `defer` or `finally`.<p>So are these two features not worth the cost? Maybe, if you are arguing that we don&#x27;t need types as a tool to prevent errors. In that case, it would indeed be the programmer&#x27;s fault for not writing a test for it.<p>But if you are going to have types, you might as well have types that tell the truth. Otherwise they&#x27;re useless.<p>Finally, IMO this is not a good reason to hate on types. Stop complaining that the compiler won&#x27;t accept your buggy code and take the time to make the changes. I&#x27;m happy when the compiler points out all the places that will be affected by the change - that&#x27;s something that unit tests with mocks will <i>never</i> catch!<p>Imagine that a change in `doRequest` caused it to throw, where previously it did not. The compiler will now tell you about ALL the places that may be affected. What would happen if it didn&#x27;t? The `inFlightRequests` counter will suddenly start behaving strangely and increasing indefinitely. What would happen if, above a certain value, new requests don&#x27;t get queued? A bug that leads to denial of service.<p>Is it worth the hassle of changing all the affected code? I don&#x27;t know, what do you think?<p>There are much better complaints against type systems, like the fact that most are not able to understand certain advanced styles of metaprogramming yet. Or that many don&#x27;t do control flow analysis, e.g. if I type<p><pre><code> if (x != null) { doCode(x); } </code></pre> then `x` should have its nullability removed in the block<p>or<p><pre><code> if (x == null) { x = nonNullValue; } </code></pre> then below this line, `x` should not be considered nullable. TypeScript, for example, does both of the above.
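And for completeness, a sketch of the fix hinted at above, reusing the hypothetical names from the snippets: `defer` plays the role of `finally`, and optional binding is Swift&#x27;s version of the control-flow narrowing that TypeScript does.<p><pre><code>&#x2F;&#x2F; Hypothetical stand-ins for the names used in the snippets above.
var inFlightRequests = 0
func atomicIncrement(_ counter: inout Int) { counter += 1 }
func atomicDecrement(_ counter: inout Int) { counter -= 1 }
func doRequest() throws -&gt; String { return &quot;ok&quot; }

func handle() throws -&gt; String {
    atomicIncrement(&amp;inFlightRequests)
    defer { atomicDecrement(&amp;inFlightRequests) }   &#x2F;&#x2F; runs on return AND on throw
    return try doRequest()
}

&#x2F;&#x2F; Optional binding: inside the if-let, name is a plain String, not String?
func greet(_ name: String?) -&gt; String {
    if let name = name {
        return &quot;hello, &quot; + name
    }
    return &quot;hello, nobody&quot;
}
</code></pre>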
Is Semantic Versioning an Anti-Pattern?
I started using semver about two years ago in two of my projects---one a library and another an application. For my library, the versions run (output from &quot;git tag -n&quot;):<p><pre><code> 6.3.0 Bump version number to 6.3.0. 6.3.1 The &quot;Remake the Makefile&quot; Version 6.3.2 Bug fix---use $(RM) instead of &#x2F;bin&#x2F;rm 6.3.3 Bug fix---the &quot;all&quot; target does not depend upon &quot;depend&quot;. 6.3.4 Bug fix---add restrict to some parameters. 6.3.5 Bug fix---fix compiler warnings from CLang. 6.3.6 Bug fix---guard against some possibly undefined defines. 6.3.7 Bug fix---update dependencies in Makefile 6.4.0 The &quot;Trees For The Nodes&quot; Version 6.5.0 The &quot;PairListCreate()&quot; Version 6.6.0 The &quot;Go for Gopher URLs&quot; Version 6.6.1 Bug fix---use c99 instead of gcc 6.6.2 Bug fix---use $(DESTDIR) when installing 6.6.3 Bug fix---replace malloc()&#x2F;memset() pair with calloc() 6.7.0 The &quot;Christmas Cleanup&quot; Version 6.8.0 The &quot;Breaking Up Is Hard To Do&quot; Version 6.8.1 Bug fix---potential buffer overwrite. 6.8.2 Bug fix---add missing headerfile. </code></pre> No APIs have changed, but each X.Y.0 has added new functions (with the exception of 6.8.0, which changed the source layout but not the API one bit), and each .n release has been a bug fix. I don&#x27;t have much of an issue with semver for library code.<p>For the application, I&#x27;ve found semver not to be much of a win. The versions:<p><pre><code> v4.6.0 The &#x27;XXXX FaceGoogleMyTwitterPlusSpaceBook&#x27; Version v4.6.1 Bug fix---double free v4.6.2 Bug fix---if not using email notification, code doesn&#x27;t compile v4.6.3 Bug fix---don&#x27;t use _IO_cookie_io_functions_t v4.6.4 Bug fix---potential double free (yet again). v4.6.5 Bug fix---encoded entries via email mess things up. v4.6.6 Bug fix---unauthorized person posting via email leads to double fclose() v4.6.7 Bug fix---a NULL tumbler crashes the program. v4.7.0 The &#x27;Tumblers Reloaded&#x27; Version v4.7.1 Bug fix---date checking on exiting tumbler_new() was borked. v4.7.2 Bug fix---previous and last calculations were borked. v4.7.3 Bug fix---check tumbler date(s) against last entry, not the current time v4.7.4 Bug fix---current link was wrong v4.7.5 Bug fix---the assert() when comparing dates was wrong v4.8.0 The &#x27;Constant Upgrade&#x27; Version v4.9.0 The &#x27;Unused API Paramters&#x27; Version v4.9.1 Bug fix---getline() called with incorrect parameters. v4.9.2 Bug fix---dependencies got out of whack. v4.9.3 Bug fix---used the wrong name when generating the tarball. v4.9.4 Bug fix---removed compiler warnings under different compiler options. v4.9.5 Bug fix---assert() was too assertive. v4.9.6 Bug fix---I was a bit too assertive in the callback code. v4.9.7 Bug fix---fix header guards. v4.10.0 The &#x27;Spiffier Email&#x27; Version v4.11.0 The &#x27;Go For Gopher&#x27; Version v4.11.1 Bug fix---potential memory leaks fixed v4.11.2 Bug fix---notify_emaillist() was borked v4.11.3 Bug fix---memory corruption v4.12.0 The &quot;Somewhat Arbitrary Christmas Release&quot; Version v4.13.0 The &quot;Target Advertisers&quot; Version </code></pre> In actual use, the &quot;version numbers&quot; could very well be 6.0, 6.1, 6.2, 11.1, 11.2, etc. for all the meaning of &quot;4&quot; has (largly---this is the codebase after the 4th major reworking of the code---it&#x27;s a 17 year old code base). I could see a separate semver standard for applications---basically an X.P model---version, bug fix. 
Or perhaps a D.X.P model---data format, version, bug fix. If the saved data format changes, change the D number.
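For concreteness, here is a tiny sketch of one possible reading of that D.X.P idea (the type and the bump rules are my own assumption, not an existing standard):<p><pre><code>&#x2F;&#x2F; One possible reading of the proposed D.X.P scheme: data format, version, bug fix.
struct DXPVersion {
    var dataFormat: Int   &#x2F;&#x2F; D: bump when the saved data format changes
    var version: Int      &#x2F;&#x2F; X: bump for a new feature release
    var patch: Int        &#x2F;&#x2F; P: bump for a bug fix

    mutating func bumpDataFormat() { dataFormat += 1; version = 0; patch = 0 }
    mutating func bumpVersion()    { version += 1; patch = 0 }
    mutating func bumpPatch()      { patch += 1 }
}

var v = DXPVersion(dataFormat: 4, version: 13, patch: 0)
v.bumpPatch()        &#x2F;&#x2F; 4.13.1: a bug fix, nothing else changed
v.bumpVersion()      &#x2F;&#x2F; 4.14.0: new features, same on-disk data
v.bumpDataFormat()   &#x2F;&#x2F; 5.0.0:  the saved data format changed
print(v.dataFormat, v.version, v.patch)   &#x2F;&#x2F; 5 0 0
</code></pre>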