Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link]

1. this WP article was the 5th in a series of articles following the security of the internet from its beginnings to related topics of today. discussing the security of linux (or lack thereof) fits nicely in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim yourself for your recent pieces on the topic. you don't like the facts? then say so. or even better, do something constructive about them like Kees and others have been trying. however, silly comparisons to ancient crap like the Mindcraft studies and fueling conspiracies don't exactly help your case.

2. "We do a reasonable job of finding and fixing bugs." let's start here. is this statement based on wishful thinking or cold hard facts you're going to share in your response? according to Kees, the lifetime of security bugs is measured in years. that's longer than the lifetime of many devices people buy, use, and ditch in that period.

3. "Issues, whether they are security-related or not, are patched quickly," some are, some aren't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels, and we also have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream, imagine the shitstorm if bug reports will be treated with this attitude; let's hope the btrfs guys are an exception, not the rule). anyway, two examples are not statistics, so once again: do you have numbers, or is it all wishful thinking? (it's partly a trick question because you'll also have to explain how something gets decided to be security related, which as we all know is a messy business in the linux world)

4.
"and the stable-update mechanism makes those patches available to kernel users." except when it doesn't. and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree.

5. "In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream." you don't have to be shy about naming us; after all, you did so elsewhere already. and we also explained the reasons why we haven't pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend thousands of hours of our time to upstream our code, you'll have to pay for it. no ifs, no buts; that's how the world works, that's how >90% of linux code gets in too. i personally find it pretty hypocritical that well-paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before someone brings up the CII: go check their mail archives; after some initial exploratory discussions i explicitly asked them about supporting this long drawn-out upstreaming work and got no answers.

Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link]

Money (aha) quote: > I propose you spend none of your free time on this. Zero. I propose you get paid to do this. And well. Nobody expects you to serve your code on a silver platter for free. The Linux Foundation and the big companies using Linux (Google, Red Hat, Oracle, Samsung, etc.) should pay security specialists like you to upstream your patches.

Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link]

I'd just like to point out that the way you phrased this makes your comment a tone argument[1][2]; you've (probably unintentionally) dismissed all of the parent's arguments by pointing at their presentation.
The tone of PaXTeam's comment shows the frustration built up over the years with the way things work, which I believe should be taken at face value, empathized with, and understood rather than simply dismissed. 1. http://rationalwiki.org/wiki/Tone_argument 2. http://geekfeminism.wikia.com/wiki/Tone_argument Cheers,

Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link]

Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link]

why, is upstream known for its general civility and decency? have you even read the WP post under discussion, never mind past lkml traffic?

Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]

Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link]

No Argument

Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link]

Please don't; it doesn't belong there either, and it especially doesn't need the kind of cheering section that the tech press (LWN generally excepted) tends to provide.

Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link]

OK, but I was thinking of Linus Torvalds

Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]

Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (subscriber, #24648) [Link]

Why do you assume only money will fix this problem? Yes, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume someone giving an organization (ahem, PaXTeam) money is the only solution. (Not meant to impugn PaXTeam's security efforts.) The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (either real or perceived), but simply throwing money at the problem won't fix this. And yes, I do realize the commercial Linux distros do lots (most?)
of the kernel development these days, and that implies indirect monetary transactions, but it's much more involved than just that.

Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link]

Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link]

I think you definitely agree with the gist of Jon's argument... not enough attention has been given to security in the Linux kernel... the article gets that part right... money hasn't been going towards security... and now it needs to. Aren't you glad?

Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]

they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who were interviewed. the same way Linus is a good representative of, well, his own pet project called linux. > And if Jon had only talked to you, his would have been too. given that i'm the author of PaX (part of grsec), yes, talking to me about grsec matters makes it one of the best ways to research it. but if you know of someone else, be my guest and name them; i'm quite sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with thousands of hours of free time on their hands). > [...]it also contained quite a few groan-worthy statements. nothing is perfect, but considering the audience of the WP, this is one of the better journalistic pieces on the topic, regardless of how you and others dislike the sorry state of linux security exposed in there. if you want to discuss more technical details, nothing stops you from talking to us ;).
speaking of your complaints about journalistic qualities: since a previous LWN article saw fit to include a number of typical dismissive claims by Linus about the quality of unspecified grsec features, with no evidence of what experience he had with the code and how recent it was, how come we didn't see you or anyone else complaining about the quality of that article? > Aren't you glad? no, or not yet anyway. i've heard a lot of empty words over the years and nothing ever materialized or, worse, all the money has gone to the pointless exercise of fixing individual bugs and the related circus (which Linus rightfully despises, FWIW).

Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link]

Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link]

Right now we've got developers from big names saying that doing all that the Linux ecosystem does *safely* is an itch that they have. Sadly, the surrounding cultural attitude of developers is to hit functional goals, and occasionally performance goals. Security goals are often missed. Ideally, the culture would shift so that we make it difficult to follow insecure habits, patterns or paradigms -- that's a task that will take a sustained effort, not merely the upstreaming of patches. Whatever the culture, these patches will go upstream eventually anyway because the ideas they embody are now timely. I can see a way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'this is a set of improvements, we're already using them to solve this kind of problem, here is how everything will keep working because $proof; note carefully that you are staring down the barrels of a fork because your tree is now evolutionarily disadvantaged'.
It's a game and can be gamed; I'd prefer that the community shepherd users to follow the pattern of declaring problem + solution + functional test evidence + performance test evidence + security test evidence. K3n.

Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

And about that fork barrel: I'd argue it's the other way around. Google forked and lost already.

Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link]

Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link]

So I have to admit to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?

Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (subscriber, #85566) [Link]

I personally think you and Nick Krause share opposite sides of the same coin. Programming ability and basic civility.

Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link]

Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]

I hope I'm wrong, but a hostile attitude isn't going to help anybody get paid. A time like this -- when there is something you appear to be an "expert" at and there is demand for that expertise -- is exactly when you show cooperation and willingness to participate, because it's an opportunity. I'm rather surprised that someone doesn't get that, but I'm older and have seen a few of these opportunities in my career and exploited the hell out of them. You only get a few of them in the average career, a handful at the most. Sometimes you have to invest in proving your skills, and this is one of those moments.
It seems the kernel community may finally take this security lesson to heart and embrace it, described in the article as a "mindcraft moment". This is an opportunity for developers who may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end, the developers that exploit the opportunity will prosper from it. I feel old even having to write that.

Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link]

Perhaps there's a chicken-and-egg problem here, but when seeking out and funding people to get code upstream, it helps to pick people and groups with a history of being able to get code upstream. It's perfectly reasonable to prefer working out of tree, providing the ability to develop impressive and important security advances unconstrained by upstream requirements. That's work somebody might also want to fund, if it meets their needs.

Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link]

Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link]

You make this argument (implying you do research and Josh doesn't) and then fail to support it with any cite. It would be far more convincing if you gave up on the onus probandi rhetorical fallacy and actually cited facts. > case in point, it was *them* who suggested that they wouldn't fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence. For those following along at home, this is the relevant set of threads: http://lists.coreinfrastructure.org/pipermail/cii-discuss... A quick precis is that they told you your project was unviable because the code was never going upstream. You told them it was due to kernel developers' attitude, so they should fund you anyway.
They told you to submit a grant proposal, you complained more about the kernel attitudes, and eventually even your apologist told you that submitting a proposal would probably be the best thing to do. At that point you went silent -- not vice versa, as you suggest above. > obviously i won't spend time writing up a begging proposal just to be told 'no sorry, we don't fund multi-year projects at all'. that's something one should be told upfront (or heck, be part of some public rules so that others will know them too). You seem to have a fatally flawed grasp of how public funding works. If you don't tell people why you want the money and how you'll spend it, they're unlikely to disburse. Saying "I'm good and I know the problem, now hand over the money" doesn't even work for most academics who have a solid reputation in the field; which is why most of them spend >30% of their time writing grant proposals. > as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)? jejb@jarvis> git log|grep -i 'Author: pax.*team'|wc -l 1 Stellar, I have to say. And before you light off on those who have misappropriated your credit, please remember that getting code upstream on behalf of reluctant or incapable actors is a hugely valuable and time-consuming skill, and one of the reasons teams like Linaro exist and are well funded. If more of your stuff does go upstream, it will be due to the not inconsiderable efforts of other people in this area. You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that; it's a fairly traditional first-stage business model, but it does rather depend on the patches not being upstream in the first place, calling into question the earnestness of your attempt to put them there.
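The one-liner above only counts `Author:` lines; the "not properly credited" caveat is about mentions buried in commit message bodies. A rough sketch of the broader count being argued over -- the sample log lines below are made up for illustration, not real kernel history:

```python
import re

# Hypothetical sample of `git log --format='%an|%s %b'` output;
# a real run would pipe in the actual kernel history instead.
log = """\
Linus Torvalds|Merge branch 'x86-urgent'
Kees Cook|gcc-plugins: port overflow plugin (based on the PaX/grsecurity plugin)
Some Dev|mm: harden mmap (inspired by PaX ASLR)
PaX Team|x86: fix 32-bit mmap randomization
"""

# Count commits whose author *or* message credits PaX/grsecurity,
# not just those whose Author: line matches.
pattern = re.compile(r'pax|grsecurity', re.IGNORECASE)
credited = [line for line in log.splitlines() if pattern.search(line)]
print(len(credited))
```

Whether such body-text mentions constitute proper credit is, of course, exactly what the two sides are disputing.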
Now here's some free advice in my field, which is helping companies align their businesses with open source: the selling-out-of-tree-patches route is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in spite of you, leaving you with nothing to sell. If your plan B is selling expertise, you have to bear in mind that it will be a tough sell when you have no out-of-tree differentiator left and the git history denies that you had anything to do with the in-tree patches. In effect, "crazy security person" will become a self-fulfilling prophecy. The advice? It was obvious to everyone else who read this, but for you: do the upstreaming yourself before it gets done for you. That way you have a legitimate historical claim to plan B, and you might also have a plan A selling a rollup of upstream-tracking patches, integrated and delivered before the distributions get around to it. Even your application to the CII couldn't then be dismissed on the grounds that your work wasn't going anywhere. Your alternative is to continue playing the role of Cassandra and probably suffer her eventual fate.

Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link]

> Second, for the potentially viable pieces this would be a multi-year > full time job. Is the CII willing to fund projects at that level? If not > we would all end up with a lot of unfinished and partially broken features. please show me the answer to that question. without a definitive 'yes' there is no point in submitting a proposal, because that is the time frame that in my opinion the job will take, and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple basic requirements should be public information. > Stellar, I must say. "Lies, damned lies, and statistics".
you do know there's more than one way to get code into the kernel? how about you use your git-fu to find all the bug reports/suggested fixes that went in thanks to us? as for me specifically, Greg explicitly banned me from future contributions via af45f32d25cc1, so it's no wonder i don't send patches directly (and that one commit you found that went in despite said ban is actually a very bad example, because it's also the one that Linus censored for no good reason and that made me decide never to send security fixes upstream until that practice changes). > You now have a business model selling non-upstream security patches to customers. now? we've had paid sponsorship for our various stable kernel series for 7 years. i wouldn't call it a business model though, as it hasn't paid anyone's bills. > [...]calling into question the earnestness of your attempt to put them there. i must be missing something here, but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory, to see how serious that whole organization is about actually securing core infrastructure. in a way i've got my answers; there's nothing more to the story. as for your free advice, let me reciprocate: complex problems don't solve themselves. code solving complex problems doesn't write itself. people writing code that solves complex problems are few and far between, as you'll find out in short order. such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in hunger. PS: since you're so sure about kernel developers' ability to reimplement our code, maybe look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation, and try to understand the reason.
or just look at all the CVEs that affected, say, vanilla's ASLR but didn't affect mine. PPS: Cassandra never wrote code; i do. criticizing the sorry state of kernel security is a side project for when i'm bored or just waiting for the next kernel to compile (i wish LTO was more efficient).

Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link]

In other words, you tried to define their process for them ... I can't think why that wouldn't work. > "Lies, damned lies, and statistics". The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. I posted a one-line command anyone could run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like better? > i've never in my life tried to submit PaX upstream (for all the reasons discussed already). So the master plan is to demonstrate your expertise via the number of patches you haven't submitted? Great plan, world domination beckons; sorry that one got away from you, but I'm sure you won't let it happen again.

Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link]

what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that webform (never mind the mailing lists) for as well? in other words, you admit that my question was not actually answered. > The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. you didn't have an argument to begin with; that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your patently idiotic attempts at proving whatever it is you're trying to prove; as they say even in kernel circles, code talks, bullshit walks.
you can look at mine and decide what i can or can't do (not that you have the knowledge to understand most of it, mind you). that said, there are obviously other, more capable people who have done so and decided that my/our work was worth something, else nobody would have been feeding off of it for the past 15 years and still counting. and as incredible as it may seem to you, life doesn't revolve around the vanilla kernel; not everyone's dying to get their code in there, especially when it means putting up with the kind of silly hostility on lkml that you have now also demonstrated here (it's ironic how you came to the defense of josh, who specifically asked people not to bring that infamous lkml style here. nice job there, James.). as for world domination, there are many ways to achieve it, and something tells me that you're clearly out of your league here, since PaX has already achieved it: you're running code that implements PaX features as we speak.

Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link]

I posted the one-line git script giving your authored patches in response to this original request by you (this one, just in case you've forgotten: http://lwn.net/Articles/663591/): > as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)? I take it, by the way you've shifted ground in the previous threads, that you wish to withdraw that request?

Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link]

Please provide one that isn't wrong, or is at least less wrong.
It will take far less time than you've already wasted here.

Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link]

anyway, since it's you guys who have a bee in your bonnet, let's test your level of intelligence too. first figure out my email address and project name, then try to find the commits that say they come from there (it brought back some memories from 2004 already -- how time flies! i'm surprised i actually managed to accomplish this much while explicitly not trying; imagine if i did :). it's an extremely complex task, so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).

Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]

*shrug* Or don't; you're only sullying your own reputation.

Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]

I wouldn't bother

Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]

Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]

Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]

Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link]

Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to . PaXTeam is not averse to outright lying if it means he gets to appear right, I see. Maybe PaXTeam's memory is failing, and this apparent contradiction is not a brazen lie, but given that the two posts were made within a day of each other I doubt it. (PaXTeam's complete unwillingness to assume good faith in others deserves some reflection. Yes, I *do* think he's lying by implication here, and doing so when there's almost nothing at stake. God alone knows what he's willing to stoop to when something *is* at stake.
Gosh, I wonder why his fixes aren't going upstream very fast.)

Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]

> and that one commit you found that went in despite said ban also, someone's ban doesn't mean it'll translate into someone else's enforcement of that ban, as is clear from the commit in question. it's somewhat sad that it takes a security fix to expose the fallacy of this policy though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).

Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]

Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]

I don't see this message in my mailbox, so presumably it got swallowed.

Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]

You are aware that it's entirely possible that everyone is wrong here, right? That the kernel maintainers need to focus more on security, that the article was biased, that you are irresponsible to decry the state of security and do nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right, it doesn't mean you are?

Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (guest, #5770) [Link]

I think you have him backwards there. Jon is comparing this to Mindcraft because he thinks that, despite being unpalatable to a lot of the community, the article may actually contain a lot of truth.

Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link]

Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]

"There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch.
All of this might well be true" Just as you criticized the article for mentioning Ashley Madison even though the very first sentence of the next paragraph notes it didn't involve the Linux kernel, you can't give credence to conspiracy theories without incurring the same criticism (in other words, you can't play the Glenn Beck "I'm just asking the questions here!" game, whose "questions" fuel the conspiracy theories of others). Much like mentioning Ashley Madison as an example for non-technical readers of the prevalence of Linux in the world: if you're criticizing that mention, then shouldn't likening a non-FUD article to a FUD article also deserve criticism, especially given the rosy, self-congratulatory picture you painted of upstream Linux security? As the PaX Team pointed out in the initial post, the motivations aren't hard to understand -- you made no mention at all of it being the fifth in a long-running series following a fairly predictable time trajectory. No, we didn't miss the overall analogy you were trying to make; we just don't think you can have your cake and eat it too. -Brad

Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]

Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]

It's gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are! :-) K3n.

Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link]

Sadly, I understand neither the "security" folks (PaXTeam/spender), nor the mainstream kernel folks, in terms of their attitude.
I confess I have absolutely no technical capability on any of these topics, but if they all decided to work together instead of having endless and pointless flame wars and blame-game exchanges, much of the stuff would have been done already. And all the while, everybody involved could have made another big pile of money on the stuff. They all seem to want a better Linux kernel, so I've got no idea what the problem is. It seems that nobody is willing to yield any of their positions even a little bit. Instead, both sides seem to be bent on trying to insult their way into forcing the other side to give up. Which, of course, never works -- it just causes more pushback. Perplexing stuff...

Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link]

Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]

Take a scientific computational cluster with an "air gap", for instance. You'd probably want most of the security stuff turned off on it to achieve maximum performance, because you can trust all the users. Now take a few billion mobile phones that may be difficult or slow to patch. You'd probably want to kill most of the exploit classes there, if those devices can still run reasonably well with most security features turned on. So it's not either/or. It's probably "it depends". However, if the stuff isn't there for everyone to compile/use in the vanilla kernel, it will be harder to make it part of everyday decisions for distributors and users.

Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link]

How sad.
This Dijkstra quote comes to mind immediately: "Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter 'How to program if you cannot.'"

Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link]

I guess that truth was too unpleasant to fit into Dijkstra's world view.

Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link]

Indeed. And the interesting thing to me is that once I reach that point, tests are not sufficient -- model checking at a minimum, and really proofs are the only way forwards. I'm no security expert; my field is all distributed systems. I understand and have implemented Paxos, and I believe I can explain how and why it works to anyone. But I'm currently doing some algorithms combining Paxos with a bunch of variations on VectorClocks, and reasoning about causality and consensus. No test is sufficient because there are infinite interleavings of events, and my head just couldn't cope with working on this either at the computer or on paper -- I found I couldn't intuitively reason about this stuff at all. So I started defining the properties needed and gradually proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anyone else, why this thing works. I find this both completely obvious that this could happen, and completely terrifying -- the maintenance cost of those algorithms is now an order of magnitude higher.

Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link]

> Indeed. And the interesting thing to me is that once I reach that point, tests are not sufficient -- model checking at a minimum, and really proofs are the only way forwards. Or are you just using the wrong maths?
Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and wonder how I can easily hold the entire schema for a Pick database of the same or greater complexity in my head". But it's easy - by training I'm a Chemist, by inclination a Physical Chemist (and by profession an unemployed programmer :-). And when I'm thinking about chemistry, I can ask myself "what is an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions happen. Then I think about how molecules stick together to make materials, and think about metals, and/or Van der Waals, and stuff. Point is, you have to *layer* stuff, and look at things, and say "how can I split bits off into 'black boxes' so at any one level I can assume the other levels 'just work'". For example, with Pick a FILE (table to you) stores a class - a set of similar objects. One object per Record (row). And, same as relational, one attribute per Field (column). Can you map your relational tables to reality that easily? :-) Going back THIRTY years, I remember a story about a guy who built little computer crabs, that would quite happily scuttle around in the surf zone. Because he didn't try to work out how to solve all the problems at once - each of his (incredibly puny by today's standards - this is the 8080/Z80 era!) processors was set to just handle a little bit of the problem and there was no central "brain". But it worked ... Maybe you should just write a bunch of small modules to solve each individual problem, and let the final answer "just happen".
Cheers, Wol

Posted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (guest, #60862) [Link]

To my understanding, this is exactly what a mathematical abstraction does. For instance in Z notation we might construct schemas for the various modifying ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the result, and transitivity of the operation when chained with itself, or with the preceding aggregate schema composed of schemas A through O (for which these have already been argued). The end result is a set of operations that, executed in arbitrary order, yield a set of properties holding for the result and outputs. Thus proving the formal design correct (w/ caveat lectors concerning scope, correspondence with its implementation [though that can be proven as well], and read-only ["xi"] operations).

Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link]

Looking through the history of computing (and probably a lot of other fields too), you'll probably find that people "can't see the wood for the trees" more often than not. They dive into the detail and completely miss the big picture. (Medicine, an interest of mine, suffers from that too - I remember somebody talking about the consultant wanting to amputate a gangrenous leg to save someone's life - oblivious to the fact that the patient was dying of cancer.)

Cheers, Wol

Posted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link]

https://www.youtube.com/watch?v=VpuVDfSXs-g (LCA 2015 - "Programming Considered Harmful")

FWIW, I think that this talk is very relevant to why writing secure software is so hard..

-Dave.

Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link]

While we're spending millions on a multitude of security problems, kernel issues are not on our top-priority list.
Honestly I remember only once having discussed a kernel vulnerability. The result of the analysis was that all our systems were running kernels older than the kernel that had the vulnerability. But "patch management" is a real issue for us. Software must continue to work when we install security patches or update to new releases because of the end-of-life policy of a vendor. The revenue of the company depends on the IT systems running. So "not breaking user space" is a security feature for us, because a breakage of one component of our several tens of thousands of Linux systems will stop the roll-out of the security update. Another problem is embedded software or firmware. Today almost all hardware systems include an operating system, often some Linux version, providing a full network stack embedded to support remote management. Frequently these systems don't survive our mandatory security scan, because vendors still haven't updated the embedded openssl. The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet while maintaining full system integrity for ten years or even longer without any customer maintenance. The current state of software engineering will require support for an automated update process, but vendors must understand that their business model has to be able to finance the resources providing the updates. Overall I'm optimistic; networked software is not the first technology used by mankind to cause problems that were addressed later.
Steam engine use could lead to boiler explosions, but the "engineers" were able to reduce this risk significantly over a few decades.

Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link]

The following is all guesswork; I'd be keen to know if others have evidence one way or another on this: The people who learn how to hack into these systems through kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed in general where data has been stolen in order to release and embarrass people, it _seems_ as though these hacks are through much simpler vectors. I.e. lesser-skilled hackers find there's a whole load of low-hanging fruit which they can get at. They're not being paid up front for the data, so they turn to extortion instead. They don't cover their tracks, and they can often be found and charged with criminal offences. So if your security meets a certain basic level of proficiency and/or your company isn't doing anything that puts it near the top of "companies we'd like to embarrass" (I suspect the latter is far more effective at keeping systems "secure" than the former), then the hackers that get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor / state. So that doesn't hurt your bottom line - at least not in a way which your shareholders will be aware of. So why fund security?

Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link]

On the other hand, some effective mitigation at the kernel level would be very helpful in crushing cybercriminal/skiddie attempts. Say one of your customers running a futures trading platform exposes some open API to their clients, and the server has some memory corruption bugs that can be exploited remotely.
Then you know there are known attack techniques (such as offset2lib) that can help the attacker make the weaponized exploit much easier. Will you explain the failosophy "A bug is a bug" to your customer and tell them it would be ok? Btw, offset2lib is useless against PaX/Grsecurity's ASLR implementation. For most commercial uses, more security mitigation within the software won't cost you more budget. You will still have to do the regression tests for each upgrade.

Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link]

Keep in mind that I focus on external web-based penetration-tests and that in-house tests (local LAN) will likely yield different results.

Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link]

I keep reading this headline as "a new Minecraft moment", and thinking that maybe they've decided to follow up the .Net thing by open-sourcing Minecraft. Oh well. I mean, security is good too, I guess.

Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]

Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_every (subscriber, #28989) [Link]

Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link]

Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Posted Nov 9, 2015 15:53 UTC (Mon) by neiljerram (subscriber, #12005) [Link]

(Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.)

Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (guest, #4654) [Link]

I would just like to add that in my opinion, there is a general problem with the economics of computer security, which is especially visible currently. Two problems, perhaps, even.
First, the money spent on computer security is often diverted towards the so-called security "circus": quick, easy solutions that are mainly chosen just in order to "do something" and get better press. It took me a long time - maybe decades - to be able to say that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this attitude and would rather take the risk knowingly (provided that I can save the money/resources for myself) than take a bad approach at fixing it (and have no money/resources left when I realize I should have done something else). And I find there are lots of bad or incomplete approaches currently available in the computer security field. Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve. And we really need to enlighten the press on that, because it is not so easy to understand the effectiveness of protection mechanisms (which, by definition, should prevent things from happening). Second, and this may be more recent and more worrying: the flow of money/resources is oriented towards attack tools and vulnerability discovery much more than towards new protection mechanisms. This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness. Nevertheless, all the resources go to these adult teenagers playing at white hat hackers with not-so-difficult programming tricks or network monitoring or WWI-level cryptanalysis.
And now also to the cyberwarriors and cyberspies that have yet to prove their usefulness at all (especially for peace protection...). Personally, I'd gladly leave them all the hype; but I'll forcefully claim that they have no right whatsoever to any of the budget allocation decisions. Only those working on protection should. And yes, it means we should decide where to put those resources. We have to claim the exclusive lock for ourselves this time. (And I guess the PaX team could be among the first to benefit from such a change.) While thinking about it, I would not even leave the white-hat or cyber-guys any hype in the end. That's more publicity than they deserve. I crave the day I will read in the newspaper that: "Another of these ill-advised debutant programmer hooligans who pretend to be cyber-pirates/warriors modified some well-known virus program code exploiting a programmer mistake and still managed to bring one of those unfinished and poor-quality programs, X, that we are all obliged to use, to its knees, annoying millions of regular users with his unfortunate cyber-vandalism. All the protection experts unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least leveled off, in order to bring more security engineer positions into the academic domain or civilian industry. And that X's producer, XY Inc., be held liable for the potential losses if proved to be unprofessional in this affair." Hmmm - cyber-hooligans - I like the label. Though it does not apply well to the battlefield-oriented variant.

Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link]

The state of the 'software security industry' is an f-ing disaster. Failure of the highest order. There are massive amounts of money going into 'cyber security', but it is usually spent on government compliance and audit efforts.
This means that instead of actually putting effort into correcting problems and mitigating future ones, the vast majority of the effort goes into taking existing applications and making them conform to committee-driven guidelines with the minimal amount of effort and changes. Some level of regulation and standardization is absolutely needed, but lay people are clueless and are completely unable to tell the difference between somebody who has valuable experience and some firm that has spent millions on slick marketing and 'native advertising' on large websites and in computer magazines. The people with the money unfortunately only have their own judgment to rely on when buying into 'cyber security'.

> Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve.

There is no such thing as 'our rare money/resources'. You have your money, I have mine. Money being spent by some company like Red Hat is their money. Money being spent by governments is the government's money. (You, actually, have far more control over how Walmart spends its money than over what your government does with theirs.)

> This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness.

Having secure software with strong encryption mechanisms in the hands of the public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily interested in self-preservation.
Money spent on drone projects or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls. Especially when those secure mechanisms interfere with data collection efforts. Unfortunately you/I/we can't depend on some magical benefactor with deep pockets to sweep in and make Linux better. It's just not going to happen. Companies like Red Hat have been hugely beneficial in spending resources to make the Linux kernel more capable.. but they are driven by the need to turn a profit, which means they must cater directly to the sort of requirements established by their customer base. Customers for EL tend to be much more focused on reducing costs associated with management and software development than on security at the low-level OS. Enterprise Linux customers tend to rely on physical, human-policy, and network security to protect their 'soft' interiors from being exposed to external threats.. assuming (rightly) that there is very little they can do to actually harden their systems. In fact, when the choice comes down to security vs convenience I'm sure that most customers will happily defeat or strip out any security mechanisms introduced into Linux. On top of that, most Enterprise software is extremely bad. So much so that 10 hours spent on improving a web front-end will yield more real-world security benefits than 1000 hours spent on Linux kernel bugs for most businesses. Even for 'normal' Linux users, a security bug in their Firefox NPAPI flash plugin is much more devastating and poses a massively higher threat than an obscure Linux kernel buffer overflow problem. It's just not really necessary for attackers to get 'root' to gain access to the important data... often all of which is contained in a single user account.
In the end it's up to people like you and me to put the effort and money into improving Linux security. For both ourselves and other people.

Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link]

Spilling has always been the case, but now, to me and in computer security, most of the money seems spilled due to bad faith. And this is mostly your money or mine: either tax-fueled governmental resources or corporate costs that are directly reimputed onto the prices of the goods/software we are told we are *obliged* to buy. (Look at the marketing discourse for corporate firewalls, home alarms or antivirus software.) I think it's time to point out that there are several "malicious malefactors" around and that there is a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits, by the way. But I think he may be among those hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage to counteract them or oblige them to reveal themselves than many of us. I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation). In the end, I think you are right to say that currently it's only up to us individuals to try honestly to do something to improve Linux or computer security. But I still think that I'm right to say that this isn't normal; especially while some very serious people get very serious salaries to distribute, more or less at random, some difficult-to-evaluate budgets.
[1] A paradoxical situation when you think about it: in a domain where you are first and foremost preoccupied by malicious people, everybody should have factual, transparent and honest behavior as the first priority in their minds.

Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link]

It even has a nice, seven-line BASIC-pseudo-code that describes the current situation and clearly shows that we are stuck in an endless loop. It does not answer the big question, though: how to write better software. The sad thing is that this is from 2005 and all the things that were clearly stupid ideas 10 years ago have proliferated even more.

Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (guest, #4654) [Link]

Note IMHO, we should investigate further why these dumb things proliferate and get so much support. If it's only human psychology, well, let's fight it: e.g. Mozilla has shown us that they can do great things given the right message. If we are facing active people exploiting public credulity: let's identify and fight them. But, more importantly, let's capitalize on this knowledge and secure *our* systems, to demonstrate at a minimum (and more later on, of course). Your reference's conclusion is very appealing to me. "Challenge [...] the conventional wisdom and the status quo": that job I would gladly accept.

Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link]

That rant is itself a bunch of "empty calories". The converse of the items it rants about, which it is suggesting at some level, would be as bad or worse, and indicative of the worst kind of security thinking that has put a lot of people off. Alternatively, it's just a rant that offers little of value. Personally, I think there's no magic bullet. Security is and always has been, throughout human history, an arms race between defenders and attackers, and one that is inherently a trade-off between usability, risks and costs.
If there are errors being made, it's that we should probably spend more resources on defences that could block entire classes of attacks. E.g., why is the GRSec kernel hardening stuff so hard to apply to regular distros (e.g. there's no reliable source of a GRSec kernel for Fedora or RHEL, is there?). Why does the entire Linux kernel run in one security context? Why are we still writing so much software in C/C++, often without any basic security-checking abstractions (e.g. basic bounds-checking layers in between I/O and parsing layers, say)? Can hardware do more to provide security with speed? No doubt there are plenty of people working on "block classes of attacks" stuff; the question is, why aren't there more resources directed there?

Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link]

> There are a number of reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.

This seems like a reason which is really worth exploring. Why is it so? I think it's not obvious why this doesn't get some more attention. Is it possible that the people with the money are right not to prioritise this more highly? After all, what interest do they have in an insecure, exploitable kernel? Where there is common cause, linux development gets resourced. It's been this way for many years. If filesystems qualify for common interest, surely security does. So there doesn't seem to be any obvious reason why this issue doesn't get more mainstream attention, except that it perhaps already gets enough. You may say that disaster has not struck yet, that the iceberg has not been hit.
But it looks like the linux development process isn't overly reactive elsewhere.

Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link]

That is an interesting question; clearly that's what they really believe, regardless of what they publicly say about their commitment to security technologies. What is the actually demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell there isn't sufficient consequence for the lack of security to drive more investment, so we are left begging and cajoling unconvincingly.

Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link]

The key issue with this domain is that it relates to malicious faults. So, when consequences manifest themselves, it is too late to act. And if the current commitment to an absence of voluntary strategy persists, we are going to oscillate between phases of relaxed unconsciousness and anxious paranoia. Admittedly, kernel developers seem pretty resistant to paranoia. That's a good thing. But I am waiting for the day when armed land-drones patrol US streets in the vicinity of their kids' schools for them to discover the feeling. Not so distant are the days when innocent lives will unconsciously depend on the security of (linux-based) computer systems; underwater, that's already the case if I remember my last dive correctly, as well as in several recent cars according to some reports.

Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link]

Classic web hosting companies that use Linux as an exposed front-end system are retreating from development, while HPC, mobile and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions. This is actually not that surprising: for hosting needs the kernel has been "done" for quite a while now. Apart from support for current hardware, there is not much use for newer kernels.
Linux 3.2, or even older, works just fine. Hosting does not need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), complex instrumentation like perf or tracing (systems are locked down as much as possible) or advanced power-management (if the system does not have constant high load, it is not making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher. For their security needs, hosting companies already use Grsecurity. I have no numbers, but some experience suggests that Grsecurity is basically a fixed requirement for shared hosting. On the other hand, kernel security is almost irrelevant on the nodes of a super computer or on a system running large business databases that are wrapped in layers of middleware. And mobile vendors simply don't care.

Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link]

Linking

Posted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link]

Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link]

The assembled likely recall that in August 2011, kernel.org was root compromised. I'm sure the system's hard drives were sent off for forensic examination, and we've all been waiting patiently for the answer to the most important question: what was the compromise vector? From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this note at the top of the site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks. We will be writing up a report on the incident in the future.' (Emphasis added.)
That comment was removed (along with the rest of the site News) in a May 2013 edit, and there hasn't been -- to my knowledge -- a peep about any report on the incident since then. This has been disappointing. When the Debian Project discovered unexpected compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public postmortems of the 2010 Web site breaches. Ars Technica's Dan Goodin was still trying to follow up on the lack of a postmortem on the kernel.org meltdown -- in 2013. Two years ago. He wrote: Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report might be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about site [sic] has been engineered, but don't quote me on when it will be released as I am not responsible for it," he wrote. Who's responsible, then? Is anybody? Anyone? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown, nothing yet. How about some information? Rick Moen [email protected]

Posted Nov 12, 2015 14:19 UTC (Thu) by ortalo (guest, #4654) [Link]

Less seriously, note that if even the Linux mafia does not know, it must be the venusians; they are notoriously stealthy in their invasions.

Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link]

I know the kernel.org admins have given talks about some of the new protections that have been put into place. There are no more shell logins; instead everything uses gitolite.
The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. Some other stuff. Do a search for Konstantin Ryabitsev.

Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link]

I beg your pardon if I was somehow unclear: that was stated to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net, some years prior, around 2002, and into many other shared Web hosts for many years). But that is not what is of primary interest, and is not what the long-promised forensic study would primarily concern: how did intruders escalate to root. To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to exploit that to root access is currently unknown and is being investigated'. OK, folks, you've now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: whose key was stolen? Who stole the key?) That is the sort of postmortem that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the site News section, and apparently dropped). It still would be appropriate to know and share that information. Especially the datum of whether the path to root privilege was or was not a kernel bug (and, if not, what it was). Rick Moen [email protected]

Posted Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link]

I've done a closer review of revelations that came out soon after the break-in, and think I've found the answer, via a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug.
29, 2011 e-mail to shell users (two days before the public was informed), plus Aug. 31st comments to The Register's Dan Goodin by 'two security researchers who were briefed on the breach': root escalation was via exploit of a Linux kernel security hole. Per the two security researchers, it was one both extremely embarrassing (wide-open access to /dev/mem contents, including the running kernel's image in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, one of which (Phalanx) was run by some script kiddie after entry using stolen dev credentials. Other tidbits:

- Site admins left the root-compromised Web servers running with all services still lit up, for multiple days.
- Site admins and the Linux Foundation sat on the information and failed to notify the public for those same several days.
- Site admins and the Linux Foundation have never revealed whether trojaned Linux source tarballs were posted in the http/ftp tree for the 19+ days before they took the site down. (Yes, git checkout was fine, but what about the thousands of tarball downloads?)
- After promising a report for several years and then quietly removing that promise from the front page of kernel.org, the Linux Foundation now stonewalls press queries.

I posted my best attempt at reconstructing the story, absent a real report from insiders, to SVLUG's main mailing list yesterday. (Necessarily, there are surmises. If the people with the facts were more forthcoming, we'd know what happened for certain.) I do have to wonder: if there's another embarrassing screwup, will we even be told about it at all?
Rick Moen [email protected]

Posted Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link]

Also, it's preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on. -Brad

How about the long-overdue autopsy on the August 2011 kernel.org compromise?

Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link]

Thank you for your comments, Brad. I'd been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who had been briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I'd heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG. That having been said, yeah, the Phalanx README doesn't specifically claim this, so maybe Goodin and his several 'security researcher' sources blew that detail, and nobody but kernel.org insider