HN Books @HNBooksMonth

The best books of Hacker News.

Hacker News Comments on
Rapid Development: Taming Wild Software Schedules

Steve McConnell · 14 HN comments
HN Books has aggregated all Hacker News stories and comments that mention "Rapid Development: Taming Wild Software Schedules" by Steve McConnell.
HN Books may receive an affiliate commission when you make purchases on sites after clicking through links on this page.
Amazon Summary
Corporate and commercial software-development teams all want solutions for one important problem: how to get their high-pressure development schedules under control. In RAPID DEVELOPMENT, author Steve McConnell addresses that concern head-on with overall strategies, specific best practices, and valuable tips that help shrink and control development schedules and keep projects moving. Inside, you'll find: a rapid-development strategy that can be applied to any project, and the best practices to make that strategy work; candid discussions of great and not-so-great rapid-development practices (estimation, prototyping, forced overtime, motivation, teamwork, rapid-development languages, risk management, and many others); a list of classic mistakes to avoid on rapid-development projects, including creeping requirements, shortchanged quality, and silver-bullet syndrome; and case studies that vividly illustrate what can go wrong, what can go right, and how to tell which direction your project is going. RAPID DEVELOPMENT is the real-world guide to more efficient applications development.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this book.
Read

https://www.amazon.com/Software-Estimation-Demystifying-Deve...

and

https://www.amazon.com/Rapid-Development-Taming-Software-Sch...

Also think: if the business does project A, how much money does that make for us or save for us? From a developer standpoint I have gotten the business people to tell me a value that is a fraction of the cost of the product. You fold that one and free up resources for project B which is the other way around.

It's not exactly fair, and it's also not likely without extensive time spent on the estimation. Sure, you can be more precise, but the more precise you need to be, the longer it takes you to frame the estimate.

Two books help with methods of doing this.

Rapid Development: Taming Wild Software Schedules: https://www.amazon.com/Rapid-Development-Taming-Software-Sch...

Software Estimation: Demystifying the Black Art https://www.amazon.com/Software-Estimation-Demystifying-Deve...

This likely does mean you'll need to be allowed to deploy automated test cases, develop documentation, and do the whole nine yards, because it's going to be far harder to hit any reasonable deadline if you don't have a known development pipeline.

You'll need to reduce your uncertainty as much as you can to even have a chance - and even then, things will still blindside you and double (or more) the actual vs. the original estimate.
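The "reduce your uncertainty" advice above is what three-point (PERT) estimation, covered in both McConnell books, formalizes. A minimal sketch; the tasks and day counts are invented for illustration:

```python
# Three-point (PERT) estimation: make uncertainty explicit instead of
# quoting a single number. All task values below are hypothetical.

def pert_estimate(optimistic, most_likely, pessimistic):
    """Beta-distribution approximation of expected effort (in days)."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6  # rough spread of the estimate
    return expected, std_dev

tasks = {
    "data model":  (2, 4, 10),
    "API layer":   (3, 5, 14),
    "integration": (1, 3, 12),  # integration is where estimates blow up
}

total, variance = 0.0, 0.0
for name, (o, m, p) in tasks.items():
    e, s = pert_estimate(o, m, p)
    total += e
    variance += s ** 2  # independent task spreads add in quadrature
    print(f"{name}: {e:.1f} days (+/-{s:.1f})")

print(f"total: {total:.1f} days (+/-{variance ** 0.5:.1f})")
```

Because the per-task spreads combine in quadrature, a plan of many small tasks has proportionally less total spread than one giant task, which is part of why precision costs planning time.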

b0rsuk
We talked to him, and no. We won't be allowed.
fenier
Then take it up with his boss, fight him on it, or look for work elsewhere. It's unreasonable that you'd be adversely graded on something that industry has historically struggled with.
Dec 10, 2018 · defined on JIRA is an antipattern
"Analysis paralysis" is a real thing, so I partly agree with you. Sometimes you have to take parts of a large concept and "dick around" with it to discover the unknown unknowns and how not to do it, then feed that back into analysis.

After a number of iterations of this, you converge on a baseline architecture designed to support future change, without being excessively focused on YAGNI. Over-focus on YAGNI often leads to architectures so inflexible that when you DO need something, you can't add it without a demolition crew[1].

People should also remember the context in which YAGNI came up: Kent Beck, Ward Cunningham, Smalltalk, and the C3 project. This particular combination of people, project, and process made YAGNI feasible. And the C3 project, which spawned XP, was not as successful as folklore would have it[2].

It's not always so. Imagine if YAGNI had been the focus of Roy Fielding and the HTTP specification, which has lasted remarkably well precisely because its architecture allows for future change.

Scrum and other processes that claim adherence to the Agile Manifesto work best in certain contexts, where code turnaround can be fast, mistakes do not take the company down and fixes can be redeployed quickly and easily, the requirements are not well-understood and change rapidly, and the architecture is not overly complex.

Many other projects don't fit this model, and people seem to think that so-called "Agile" (which even the creators of the Manifesto complain is not a noun), and mostly Scrum, is the sole exemplar of productivity.

The fact is that there are many hybrid development processes, described by Steve McConnell[3] long before the word "agile" became a synonym for "success", that may be more suitable to projects that have different dynamics.

An example of such a project could be one that does have well-defined requirements (e.g. implementing something based on an international standard) and will suffer from a pure "User Story" approach.

Let's be much more flexible about how we decide to develop, and accept that you need to tailor the approach to the nature of the project, based on risk, longevity, and other factors, and not on dogma.

And let's not underplay the extreme importance of a well-thought-out architecture in all but the most trivial of projects.

[1]: https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it

[2]: https://en.wikipedia.org/wiki/Chrysler_Comprehensive_Compens...

[3]: https://www.amazon.com/Rapid-Development-Taming-Software-Sch...

Read this

https://www.amazon.com/Rapid-Development-Taming-Software-Sch...

for the engineering and peopleware considerations, which is maybe 1/3 of "right".

Another 1/3 of "right" is the path from conceptual design to database modelling and realizing operations on the database from code. If this part is well planned, the code almost writes itself, and the customer can always be right because the answer to "can you do this small thing?" is always "yes!"
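A toy illustration of that middle third, using Python's built-in sqlite3; the schema and the customer's question are invented for the sketch:

```python
import sqlite3

# If the conceptual design maps cleanly onto tables, most "can you do X?"
# requests reduce to a small, obvious query. Schema below is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer TEXT NOT NULL,
    total REAL NOT NULL
)""")
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("acme", 120.0), ("acme", 80.0), ("globex", 50.0)],
)

# "Can you show me each customer's total spend?" -- "yes!"
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('acme', 200.0), ('globex', 50.0)]
```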

The other 1/3 of "right" is the content of the computer science curriculum. Some of this is practically math, such as combinatorics and algorithm analysis. You would also have to take some classes in areas such as compilers, computer architecture, operating systems, etc. For the average person who wants transferable skills, I say go for compiler construction, because small simple compilers are useful and the methods used in compiler construction are useful for other kinds of programs. Compilers also interact with the processor and operating system, so you can learn some of that by learning compilers.

Another path that gets closer to the metal is do some embedded development, for instance, program a microcontroller to talk to the computer in your car. Operating systems for tiny machines are all the rage these days and easy to learn because they are themselves tiny.

The unappreciated key to rapid software development is having a realistic plan:

https://www.amazon.com/Rapid-Development-Taming-Software-Sch...

"Point the boat in the right direction and row" is the dominant paradigm in the industry.

The concept of "technical debt" is, I think, harmful, because it is a two-word phrase that stops thought. In the real world, if you want to take on commercial debt, the bank and/or bondholders and originators are going to want to see a detailed financial analysis indicating they will get their money back.

Probably 80% of effort on software is maintenance, but it is rare for any project to start out thinking about the cost of maintenance.

Jtsummers
Speaking as a maintenance programmer: you're 100% correct. Our systems are not designed to be maintained, and so our ability to work is severely hampered.

Within the area of maintenance programming, if you inherit a program with a poor testing infrastructure (or even worse, have to produce it yourself) you will make the system worse over time or move at a snail's pace.

Automated testing and unit tests need to be present to make effective changes during maintenance to systems. Otherwise too many of our changes are shots in the dark. They add the feature we wanted (we can test that much), but may have broken a dozen other things that we won't see until much later in the schedule.

A lack of automated testing and good unit tests also motivates managers to adopt more waterfall-styled schedules, pushing integration of the various change requests to the last possible moment, with a comprehensive test suite run after that. This creates huge schedule risk, but from their perspective it is necessary. Why? Because they don't want to spend the time/money running that huge, comprehensive, largely manual test suite more than once a year. They don't accept that a partial test suite (a representative sample, if you will) could be executed with greater frequency and give similar results, with the full suite run once for certification testing at the end.

PaulHoule
You're definitely right that unit tests are a part of the solution.

https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...

can be read in a few different registers (making a case for what unit tests should be in a greenfield system, why and how to backfit unit tests into a legacy system) but it makes that case pretty strongly. It can seem overwhelming to get unit tests into a legacy system but the reward is large.

I remember working on a system that was absolutely awful but was salvageable because it had unit tests!
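The way Feathers suggests getting that first safety net into a legacy system is the "characterization test": pin down what the code actually does before changing it. A minimal sketch, with a hypothetical legacy function standing in for inherited code:

```python
import unittest

def legacy_discount(total):
    """Hypothetical legacy pricing rule we inherited without tests."""
    if total > 100:
        return round(total * 0.9, 2)
    return total  # exactly 100 gets no discount: surprising, but current behavior

class CharacterizationTests(unittest.TestCase):
    """Pin down what the code DOES, not what a spec says it should do.
    Once these pass, refactoring can proceed with a safety net."""

    def test_above_threshold_gets_ten_percent_off(self):
        self.assertEqual(legacy_discount(200), 180.0)

    def test_boundary_gets_no_discount(self):
        # Record the quirk instead of silently "fixing" it.
        self.assertEqual(legacy_discount(100), 100)

if __name__ == "__main__":
    unittest.main(argv=["characterization"], exit=False)
```

The point is not that these tests are "right"; it's that they freeze today's observable behavior so tomorrow's change can't break it silently.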

Also, generally, getting control of the build procedure is key to the scheduling issue -- I have seen many new projects where a team of people work on something and think all of the parts are good to go, but then find there is another six months of integration work, installer engineering, and other things you need to do to ship a product. Automation, documentation, and simplification are all bits of the puzzle, but if you want agility, you need to know how to go from source code to a product, and not every team does.

Jtsummers
That book is something I wish I'd read when I started my career. This is a holiday week so all the management types were out, but I had planned to have management buy a copy for the office library next week. I still need to finish it, but most of what I have read, I've been able to apply to projects in the past. I should probably actually reread it because it's been a while.
Just say no.

Ok, it's a little more complex than that. In project management people talk about the "triple constraints" of budget, time, and scope.

The relationship between budget and time is not a simple tradeoff. To some extent you can accelerate a software project by increasing the budget, but that extent is limited; maybe you can accelerate the schedule by 30% relative to a "least cost" plan.

Fred Brooks learned these limitations the hard way in the 1960s and wrote The Mythical Man-Month so you don't have to! One problem is that if you add more people to a project, it takes time and attention to onboard them that could otherwise be used to get the project done.

A counter to that is that sometimes buying hardware, software, or services, can greatly accelerate the project. For instance if you are training neural networks on a MacBook, it is probably worth every penny to get a real desktop PC and put a 1080Ti graphics card in it, or to spend some money on cloud computing.

Many managers look at the cost as a function of the deadline, that is, they see the cost of the project as a function of the time you are tied up doing it, so if they can compress the deadline, the cost goes down. (or so they think).

That leads to the "phony deadline," which has no real basis. The problem is that by setting out without a realistic plan you are likely to make mistakes that will draw out the project, add to costs, and possibly make it fail.

Most of what I say above is laid out in more detail here:

https://www.amazon.com/Rapid-Development-Taming-Software-Sch...

The flip side of that is the hard deadline, where the job might as well not be done if it is not done on time; for instance, you need to get a grant application in before a certain day, or you are putting together a demo to show at a trade show on a particular date.

It is important to understand the actual nature of the deadlines you are up against, what flexibility you have, what impact being late has on the business, etc.

That leaves "scope" as an area with wiggle room. Probably some features of the project can be dropped or modified, and management may be able to hit the deadline by dropping features it can afford to drop. This is one big advantage of "agile" methods: if you have something that sorta-kinda works at the 25% mark of the project and then hit the deadline with the most important 80% of the functionality, you are doing better than most people.

Jun 29, 2017 · PaulHoule on “low hanging fruit”
It is a popular term, so popular that the low hanging branches are thoroughly picked in most markets.

A good phrase would be "least cost development", which is really the idea behind the book "Rapid Development"

https://www.amazon.com/Rapid-Development-Taming-Software-Sch...

The short of it is that working to a realistic plan with the right amount and kind of planning is much more "rapid" than the typical "let's point the boat in the right direction and row for a while" strategy that is rooted in wishful thinking.

That said, there often is some simple strategy that gets 70% of the answers right, and it is correct to get that into place (at some required scale) and then think about developing a strategy that gets another 7% of the remainder right, then 3%, etc. It could be very dangerous to start from the other end, implementing strategies that add 1% or 0.1% of gain. That is, "pick the low-hanging fruit," but with a plan to pick the rest of it!
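The arithmetic of diminishing returns here is worth making concrete. A sketch that treats each successive strategy as resolving the stated share of the still-unanswered cases (the percentages are the illustrative ones from the paragraph):

```python
# Each pass resolves a share of whatever the earlier passes left unanswered.
coverage = 0.0
shares_of_remainder = [0.70, 0.07, 0.03, 0.01, 0.001]

for share in shares_of_remainder:
    gain = (1.0 - coverage) * share  # absolute gain shrinks every pass
    coverage += gain
    print(f"pass resolving {share:.1%} of the remainder -> "
          f"gain {gain:.3%}, cumulative {coverage:.2%}")
```

The first pass buys 70 points of coverage; the 0.1% pass buys under 0.03 of a point, which is the argument for starting from the cheap end and leaving the expensive passes for a deliberate plan.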

My favorite books on project management:

1. "Project Management for the Unofficial Project Manager" is a high-level but pretty complete introduction. It has good examples from non-technical projects based on the Project Management Institute's infamous "Project Management Body of Knowledge" (PMBOK). https://amzn.com/194163110X

2. Scott Berkun's "Making Things Happen: Mastering Project Management (Theory in Practice)". Scott was a program manager at Microsoft and describes some of the less process-oriented, more "in the trenches" aspects to managing a project. https://amzn.com/0596517718

3. Steve McConnell's "Rapid Development: Taming Wild Software Schedules". It's more of an encyclopedia of software project management and is now a bit dated (1996), pre-dating Scrum and Agile but all those ideas have been known for a long time. https://amzn.com/1556159005

This is a great question! Here is what I think.

Both the ways "data scientists" typically work and the agile methods used in many software development organizations are unsuitable for commercial use of machine learning and other data-rich methods.

In your question I am hearing two themes: (i) how to organize the actual work ("no data", "no features", "no users") and (ii) how to slot the work into the sprint system.

The typical sprint system often introduces risk and uncertainty to data rich projects. Here is an example. I was working on a project where the sprints were typically two weeks, but one part of building the knowledge base was running a batch job that took two days. Of course if you set the batch job up wrong you might have to do it more than once.

When I was doing the batch job I would account for the risk and spend maybe two days getting ready for the batch job and run the batch job at the very beginning of the sprint, then even if things went horribly wrong with the batch job and I had to do it two or three times I was certain the KB would be ready on time. Practically I had a PERT chart in my head that I was using to plan my own work.

Even though I told them what I just told you, the first time some other team members did the batch job, they started it on the last day of the sprint which meant it wasn't ready and the Sprint shipped with an old and inappropriate KB.

As a retrospective it would be good to turn the 2 day batch job into a 2 hour batch job (It started out as a 2 century batch job!) Also the reliability of the batch job is every bit as important as the speed in a situation like that. More fundamentally, I think some thinking about the ordering of work (PERT charts) should have been built into the process.

There are lots of cases there, but note the risk-amplifying property of the sprint: if some input to the sprint is a day late, everything that depends on that input slips two weeks.
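The PERT-chart reasoning above amounts to an earliest-finish computation over the task dependency graph. A tiny sketch; the tasks and durations (in days) are invented:

```python
# Earliest-finish scheduling over a small dependency graph -- the PERT-chart
# thinking described above, in code. Tasks and durations are hypothetical.
durations = {"prep": 2, "batch_job": 2, "review": 1, "ship": 0}
depends_on = {
    "prep": [],
    "batch_job": ["prep"],
    "review": ["batch_job"],
    "ship": ["review"],
}

def earliest_finish(task):
    """Day on which `task` can finish, given its dependency chain."""
    start = max((earliest_finish(d) for d in depends_on[task]), default=0)
    return start + durations[task]

print(earliest_finish("ship"))  # 5 -- in a 10-day sprint, start by day 5

# Start the chain on day 9 instead and the finish lands past the sprint end:
# a one-day-late input doesn't slip the output a day, it slips a whole sprint.
```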

For that project we also did two hour "planning poker" meetings and that was another problem because with two hours we didn't have enough time to make certain decisions. If we'd had two or three people think about things for a day we could have made consistently better decisions about certain things which would mean doing the right work in the next sprint, similarly saving two weeks of calendar time.

It is very easy for little failures of the type described above to cascade and produce a recurring pattern of failure that is awful for productivity, morale, etc.

It is very important to push back on management and address these kinds of problems.

Now this sounds very negative for agile in data-rich projects and that's not the only thing you should take away. In the long run, data rich projects benefit hugely from continuous improvement that is done on a regular cadence.

You meet "data scientists" or "junior programmers" who have started a number of projects and sent deliverables over to other people who get them ready for production. They think they have a great batting average, but when you look at it from a wider perspective, you see that 4x the man-hours they put into the project got spent getting their work ready for production. Had the team "begun with the end in mind," the total cost of the project could have been cut in half or more and the risk greatly reduced.

Big and very capable co's like IBM and Nuance, as well as many smaller ones you have not heard of, have built data-rich systems that turned out to be like building a nuclear reactor. We are not talking something that cost $22,000 when it should have cost $21,000, but rather something that cost $20 billion when it should have cost $5. The people involved will tell you they don't know what they're going to do next but they do know they are never going to do that again.

So your process, technology, everything has to be designed to control (1) risk and (2) cost, and you've got to communicate that to the people you work with.

What most people don't know/accept/believe is that most teams would control cost best if they tried to control risk first, see:

https://www.amazon.com/Rapid-Development-Taming-Software-Sch...

As for your other issues, this is what I am going to say.

Short term there are two things that really matter: (1) getting data, and (2) developing the basic interfaces between the ML component and the rest of the system. If you have (2), you can really contribute to the sprints; if you don't, you cannot. Without (1), any data pipeline work, feature engineering, etc. is going to be largely a waste of time.

For data start out with the Enron emails or your own emails and label enough of it that you can start thinking about the other issues. Your early data set will be nowhere near large enough to get useful results, and that's another issue you'll need to bring up with management once you've reached it.

kuro-kuris
Paul, thanks for your great reply. I am going to bring up your suggestions, and hopefully a larger focus on risks can help with our deliverables. PERT charts seem a bit over-bureaucratic for our team, but considering our dependencies and risks more seems great. Improving the connection between risks and features in the mind of the team will definitely help. I still feel really constrained and over-pressured in the current sprint structure; hopefully bringing the risks out will decrease my personal anxiety as well.

With regards to the workflow, we went down the personal email road initially, though getting data from the main backend of our service seems extremely cumbersome. With regards to the interface between our ML component and the rest of the system: currently we are using simple heuristics to fulfill the ML requirements, so the flow of information is there in one direction but not in the other.

Get these two

http://www.amazon.com/Rapid-Development-Taming-Software-Sche...

and

http://www.amazon.com/Software-Estimation-Demystifying-Devel...

If you want to get deeper into project management I suggest that you become a member of the PMI and possibly get certification from them. The training and testing are rigorous and it is a certification that means something both from the knowledge you get and the benefit of having it on your resume.

narrowrail
The OP discusses product management, which usually falls in the marketing department (at most corporations I've been at or heard of), while you are suggesting project management and engineering management books. I'm not sure that makes sense; and, being at a startup, I'm not sure certification of any kind would be worth the time.
Seeing these classics discussed at all is awesome.

Code Complete was (to me) revolutionary at the time. Though it may seem a bit dated now, it was a foundational work that much of what we take for granted was built on.

I don't hear enough talk about Steve McConnell's other book "Rapid Development". http://www.amazon.com/Rapid-Development-Taming-Software-Sche...

I just looked at its Amazon page and it has 5 stars with 115 reviews. That doesn't surprise me at all - it's well deserved.

It came out well before the agile movement, and I would contend that it laid the foundation for it.

I'm so glad I read that at the beginning of my career. As a junior developer, it helped me immensely and even more so over the years.

There are some great books which may help provide you with some structure on how to manage the team:

Peopleware: Productive Projects & Teams http://www.amazon.com/Peopleware-Productive-Projects-Teams-S...

Delivering Happiness: A Path to Profits, Passion, and Purpose http://www.amazon.com/Delivering-Happiness-Profits-Passion-P...

Rapid Development: Taming Wild Software Schedules http://www.amazon.com/dp/1556159005/?tag=stackoverfl08-20

The Five Dysfunctions of a Team: A Leadership Fable http://www.amazon.com/Five-Dysfunctions-Team-Leadership-Fabl...

Fundamentally, cultivating a healthy team culture is the most important role you can play. Foster open, transparent communication, and avoid falling into the trap of micromanagement. Keep things happy, keep it real. All the best!

eberfreitas
Thank you so much. I'll try to get my hands on these books!
HN Books is an independent project and is not operated by Y Combinator or Amazon.com.