Hacker News Comments on
Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations
Hacker News Stories and Comments
All the comments and stories posted to Hacker News that reference this book.
> we need to figure out another way of working that isn’t agile,
Isn't devops the next generation? (I mean devops as promulgated by Accelerate: https://www.amazon.com/Accelerate-Software-Performing-Techno... , not 'smoosh the devs and ops teams together, add Jenkins and hope for the best'.)
⬐ madrox
I'm not convinced it's materially different. At scale it looks similar to large agile organizations of today.
And you'd be wrong.
First, I am not aware of any operating systems that are developed using agile methods, and second, the empirical evidence is in that agile improves quality, so it would be better if they were.
⬐ grumpyprole
It was a joke. Your position looks rather like the true Scotsman's fallacy to me.
⬐ mpweiher
1. The fallacy is called "No true Scotsman". The "No" is rather important. :-)
2. And no, my position has nothing whatsoever to do with "No true Scotsman". I am not saying that they couldn't possibly be using agile because they have bad quality. I am saying that I am not aware of any of them using agile, and I was closely associated with at least one of them for a while.
And I also separately cite the evidence that has now been produced that agile improves quality, and conclude that if there are quality problems as you claim, a more agile approach would likely help, at least according to the empirical evidence.
¯\_(ツ)_/¯
⬐ grumpyprole
Given your rather patronising replies, I suspect the original joke was lost on you. The issue is not Agile per se, it is organisations assuming that "Agile" will solve all their problems, where "Agile" usually means vendor products, forced rituals and tick boxes. Yet more Agile evangelism does not address this problem.
What organisations need more than anything is open-minded individuals who are not slaves to established dogma. The Agile manifesto actually says that all methods should be constantly re-evaluated to see if they fit the particular organisation/team. Your empirical "evidence" likely comes with a lot of context and is unlikely to be applicable to all organisations and/or domains.
⬐ CRConrad
And all that was supposed to be read into
> Is there any operating system left now that isn't riddled with bugs? I blame Agile.
I think not.
If anything, that deserved even more patronizing replies.
⬐ grumpyprole
> And all that was supposed to be read into.
Of course not, I am however attempting to explain in detail why such sacrilegious "blame Agile" comments might be made.
> If anything, that deserved even more patronizing replies.
I am highly suspicious of anyone who is so certain of their position; we should all be riven by doubt!
Yes, we're in agreement. A coworker of mine pointed me to the book Accelerate, which I admittedly still haven't read; it also mentions that organizations where management tries to keep engineers at 100% utilization end up being dysfunctional, partly for the reasons you pointed out.
⬐ stjohnswarts
It's one of those things that works for a quarter or two, but then you start making your engineers hate their jobs, and productivity drops quickly when morale is suffering.
I prefer studies over anecdotes and found this: https://www.amazon.com/Accelerate-Software-Performing-Techno...
According to this study, you can measure a team's progress and ability with:
* number of production deployments per day (want higher numbers)
* average time to convert a request into a deployed solution (want lower numbers)
* average time to restore broken service (want lower numbers)
* average number of bugfixes per deployment (want lower numbers)
I am curious about other studies in this area and if there is overlapping or conflicting information.
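These four measures line up with the DORA metrics the book reports (the book frames the last one as a change failure rate rather than a raw bugfix count). As a minimal sketch of how a team might compute them from its own deployment log; every record value and field here is made up for illustration:

```python
# Sketch of the four DORA-style metrics from the list above.
# All data is hypothetical; each record is
# (lead_time_hours, caused_incident, restore_minutes).
deployments = [
    (20, False, 0),
    (30, True, 45),
    (12, False, 0),
    (48, False, 0),
]
days_observed = 3

# Deployment frequency: deployments per day (want higher numbers)
deploy_freq = len(deployments) / days_observed

# Lead time: average hours from request to deployed solution (want lower)
lead_time = sum(lt for lt, _, _ in deployments) / len(deployments)

# Time to restore: average minutes to fix broken service (want lower)
failures = [d for d in deployments if d[1]]
mttr = sum(rm for _, _, rm in failures) / len(failures) if failures else 0.0

# Change failure rate: share of deployments that broke something (want lower)
failure_rate = len(failures) / len(deployments)

print(f"{deploy_freq:.2f}/day, {lead_time:.1f}h lead, "
      f"{mttr:.0f}min restore, {failure_rate:.0%} failed")
```

In practice these would come from a CI/CD system or incident tracker rather than a hand-written list, but the arithmetic is the same.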
⬐ flipchart
Haven't read the linked study, but it seems like all of those metrics (except time to restore broken service) can be very easily gamed - thinking about the recent SO blog post https://news.ycombinator.com/item?id=25373353
⬐ rvdmei
Why would you want to game metrics that honestly can help improve your business outcome? The study is worth reading, and to put it simply: the metrics seem to make sense anyway, and if science proves that they do, that's even better.
⬐ logifail
> Why would you want to game metrics that honestly can help improve your business outcome?
Q: If your manager imposed OKRs/KPIs on your team, especially if there were a financial incentive linked to achieving "results"/"performance indicators", how would you feel?
Note that what is good for the business may or may not have much to do - at least in the short to medium term - with what is good for the employees.
⬐ rvdmei
Definitely a reason to try and game the metrics (and probably easy, too). In my experience the metrics make sense when you use them to improve as a team; I would actually try to avoid using them as an OKR or KPI.
⬐ wschroed
Though I posted the study, I do want to see more work in that area to repeat the observed effects or to refine the KPIs. For example, I am personally skeptical of the notion of "number of defects per release" and want to see an exploration of "amount of time spent on defects per release".
Science doesn't tell us anything (that would be authority or religion), and it doesn't prove things. It is a process that can assist us in logically determining what is NOT true about a cause-effect relationship to the point where we can make more accurate and practical predictions about those causes and effects. More experiments refine the current beliefs.
So...shouldn't the focus be fixing the fact that release cycles take several weeks?
This is well researched and discussed in Nicole Forsgren's book: Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations
⬐ brandmeyer
Sometimes long test cycles are just the nature of the business, especially in hardware-software co-development, and especially in high-reliability development. Nobody will believe that running software tests on a workstation or server somewhere counts for release readiness if it gets deployed to a wind turbine, robot, or megawatt UPS.
The software team's automated tests indicate that the software is ready to start an expensive physical testing cycle, not that it's ready to go to the field.
If we can all agree that "shipping" is a feature and that "shipping faster" is a competitive advantage, then "shipping fast" is probably one of the first features we should invest in as a team, from leadership to private chef.
If it takes 10 people and six hours to accomplish the release of even the smallest of features, then it sounds like there's a bug with your "ship fast" feature, and the team should invest some or all of its resources in fixing that. You probably can't do it overnight, but you gotta chip away at it over time at the very least.
If you are looking for justification backed up by real-world numbers, I highly recommend the book "Accelerate" by Nicole Forsgren, et al.
⬐ beat
Yeah, I know. And I've read the book. But I work in enterprise devops. This is the reality of many, many teams. Which means there's a lot of unnecessary process that needs to get tossed and a lot of automation that needs to be built in order to do short iterations and get out of the 3-6 month window.
⬐ numbsafari
Me too! I feel your pain.
Have you considered giving it as a gift to your peers and leadership? I really think it's a great read and resource for anyone trying to sell organizational change.
I wonder if any of those "automation needs" could be turned into startup ideas?
⬐ beat
Done that, too.
I actually tried a startup in the monitoring space, but sadly failed (it was great fun losing $100k or so, tho... well worth the experience!). But the automation needs problem is more a consultancy thing than a product thing. There are lots of products. Teams buy them, with the best of intentions. Lots of management wants to believe they can buy their way out of the hard problems of not having any discipline. Sigh.