OMB and the White House have announced the four finalists for this year's Securing Americans Value and Efficiency (SAVE) awards. Nominations, which propose ideas to improve government efficiency and save taxpayers' money, are submitted by Federal employees. Anyone can vote for the award winner. To cast your vote, click here. Award voting closes at noon on Friday, December 21, 2012.
TCG also has a Big Hairy Audacious Goal (BHAG) that is similar to the SAVE Awards: we aim to save U.S. taxpayers one billion dollars by 2016! Click here to see some of the ways that TCG has already helped the government save more than $265M.
Posted by Julius Ermis on December 19, 2012 at 12:19 in CMMI and Process Maturity, Collaboration and Transparency, Government Technology, Saving the Taxpayer Money, Telecommuters and telecommuting, We are collaborative, We are fair, honest, and open, We are intelligent, We invest ourselves in the project, We prove our value, We value our families
The National Institutes of Health (NIH) Information Technology Acquisition and Assessment Center (NITAAC) has awarded TCG a 10-year contract with a ceiling of $20 billion to provide health IT and other IT-related support services under its Chief Information Officer-Solutions & Partners 3 Small Business (CIO-SP3 SB) multiple-award, indefinite-delivery, indefinite-quantity, government-wide acquisition contract. TCG was one of 79 small businesses awarded the coveted contract. For more information about TCG's CIO-SP3 SB contract go to: http://www.tcg.com/ciosp3. To read the full press release, click here.
Posted by Julius Ermis on July 30, 2012 at 12:40 in Budget and Performance Management, CMMI and Process Maturity, Collaboration and Transparency, Government Technology, Grants Management, Saving the Taxpayer Money, Science Research IT, Technology, We invest ourselves in the project, We prove our value
When the cat is away, the mice will play.
Earlier this month, TCG was re-confirmed as a CMMI Level 2 organization. Many people look at the CMMI documentation and think that it is a lot of overhead, but that's not the way I see it. To me, CMMI is like keeping a cat in the kitchen.
I'm not in any way saying that programmers, designers, testers, or managers are lazy mice that only work when the cat is away. I'm lucky enough to work with people who are smart, motivated and want to do a great job. In fact, everyone I work with not only wants to do a great job, but they know exactly what they need to do to make a project succeed in the long term.
It's the "long term" bit that sometimes becomes a problem. Every project has deadlines and every team gets in a crunch now and again. What does a good development team do in that situation? The best they can. But when you are down in the weeds you are lucky to see the upcoming milestone. You surely aren't thinking about the 5th or 6th milestone down the road. Even a good team will sometimes deviate a bit from what they know are best practices if it means satisfying a client today. You can always pay it back later, right? Right? Well...
My experience says that when things get tight, taking shortcuts to meet today's goal just makes things worse for tomorrow. Skipping a step you know to be good for the long-term health of a project won't be good for the long-term health of the project. Sure, that last statement seems obvious, but we've all been in meetings where someone says "well, since the deadline is this week I think we better just..."
So, back to the proverbial cat: Having a defined and documented process that spells out best practices for your team is great. Having someone check up on folks to make sure they are following the process is even better. That's the cat right there. You will make the hard choice and do what is right because now you have to. You can't sweep it under the rug.
If you have stuck with me this far, you are surely thinking of that one time when you would have lost a $10 million project if you had to stick to the letter of every policy. I agree, there are times when you have to bend a little to keep from snapping in half. A good CMMI-compatible policy will have enough flexibility to get you out of a jam. It will typically involve presenting a good case for a waiver and getting sign-off from "the management." Not usually too hard if the alternative is wasting time and money or crashing a key project.
So if you are looking at CMMI, don't see it as a busy-work generator or a dusty binder of stuffy rules. It's not either of those things. CMMI is a framework that lets you define the process that is right for your team and then helps you follow it. If anyone isn't following it, the cat will catch them sooner or later and fix the problem.
How many times a day do you estimate the time and effort required to complete a project? I bet it is a lot more than you think. Did you recently adjust your alarm clock? Before you did that I expect that you, perhaps subconsciously, worked out when you need to be at the office, how long the drive takes, and how long you need to shower, eat breakfast, brush, and dress. All these things get tallied up in your head and you set your bedside alarm clock for 6:45.
There is a lot of thinking and planning that has gone into this but you don't have to do it every single time you touch your clock. Years of experience have gone into the estimation process. You may have driven to the Parkway Office 5 days a week for the last 3 years. You know that if you leave at 8:05, you will get there just before 8:30. An adult has showered, breakfasted, brushed, and dressed thousands of times. You know about how long it normally takes and where you can save time if you are running behind schedule. All these things make you the expert at estimating how long it takes to go from asleep to the office on a normal Wednesday. By 8:30, or usually earlier, you will know with absolute certainty if you made the right decision setting the alarm for 6:45.
Software engineers aren't so lucky. They may make only a few estimates a month so it takes a whole career to get good at it, if they ever do. In addition, an engineer is often asked to estimate a project that takes thousands of hours, or at the extreme, projects that take hundreds of people a couple years to complete. Your alarm clock project had a staff of 1 and an estimated duration of 1 hour 45 minutes.
When a project is over, software engineers very rarely have enough data gathered to know if they were right or not. Sure, you know if the project shipped on the target date, but the project that ships is never exactly the one estimated. Customers can ask for requirement changes; they may drop some components to get the project done before a competitor ships; a team member may find an affordable off-the-shelf component that eliminates 1,000 man-hours budgeted into the estimate. In the end, with so many changes it's hard to tell if the estimate was any good or not.
That's for the big projects. For small bits, you actually can get good at estimating software development tasks. At least sort of good. To find out if you are good, though, you need to collect metrics. I'm a big fan of metrics personally. Lord Kelvin was too. He said "If you cannot measure it, you cannot improve it." How else can you tell if the changes you make are helping or hurting? There are lots of things that feel right, but when you measure them you find out that things weren't going quite as well as you thought. Lots of people see this phenomenon in their financial budget. Buying coffee at Starbucks every day feels like a very good idea and can't possibly cost very much. Then you add up the numbers and find you are paying $1,560 a year for Starbucks coffee. Armed with that number you can make an informed decision and improve your use of money. Maybe you would be happier drinking 7-Eleven coffee and taking a 5-day cruise to Nova Scotia next June. That's a personal decision only you can make, but without the hard numbers it is kind of difficult to realize the two choices are monetarily equivalent.
The NITRC project that I work on does in fact gather metrics on our small scale estimation. During the planning stage of an iteration, every task is broken down into a small piece we call a Feature Request, or FR for short. Each FR is then given an estimated level of effort, usually somewhere between 8 and 64 hours. When an FR is completed, the engineer who worked on it will record the actual number of hours used. Periodically, our project manager will tally up the estimates and actuals to see how close our estimates really are. So far, this looks like a great system for a metric-oriented engineer.
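To make that roll-up concrete, here is a tiny sketch of the kind of tally our project manager does. The FR records and field names below are made up for illustration; they are not from the actual NITRC tracker.

# A tiny sketch of the periodic estimate-vs-actual roll-up described above.
# The FR records and field names are made up for illustration; the real
# NITRC tracker and its schema are not shown here.
feature_requests = [
    {"id": "FR-101", "estimate_hours": 16, "actual_hours": 12},
    {"id": "FR-102", "estimate_hours": 8, "actual_hours": 8},
    {"id": "FR-103", "estimate_hours": 32, "actual_hours": 24},
]

total_estimate = sum(fr["estimate_hours"] for fr in feature_requests)
total_actual = sum(fr["actual_hours"] for fr in feature_requests)
pct_under = 100.0 * (total_estimate - total_actual) / total_estimate

print(f"Estimated {total_estimate} h, actual {total_actual} h "
      f"({pct_under:.1f}% under estimate)")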
Here is where things got surprising for me: our estimates tended to be too conservative, meaning that if we estimated 200 hours for a batch of FRs, the actuals would consistently come in under 200 hours. Of course, that is good. It's better to be under than over, at least most of the time. I know the guy who does the estimates pretty well, and I know he is good at his job. I was surprised that, given the great feedback he gets, he hadn't been able to adjust his estimates and close the gap we were consistently seeing between estimates and actuals.
The nuts and bolts of our metric gathering are a little unusual, though, and it got me thinking that there may be something fishy going on with the setup. Here is how it works: for the estimates, we categorize each FR as 8, 16, 32, or 64 hours of effort.
Not a lot of granularity, but most of our FRs fall in the 8/16/32 hour range so generally you can find an estimate that you feel good about. For the actuals we use the same category scheme, so when you complete an FR you round your hours to the nearest entry and record it with a drop-down list. The only difference is that for the actuals there are also entries for 2 hours and 4 hours, for the really quick tasks.
After plugging in a few of my actuals, I realized that I could go over the estimate quite a bit without a penalty. If an FR has an estimate of 16 hours and I go 5 hours over, it still is closer to 16 than to the next step up. In that case, I get to mark the actual effort as right on the estimate. But if I go 5 hours under, then the actual level of effort rounds down to the 8 hour mark. Everyone cheers because the task is done early. Everyone except the poor technical lead who is left scratching his head saying, "why am I always budgeting more hours than we need?"
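To make the asymmetry concrete, here is a toy illustration of that rounding. The bucket values are the drop-down entries described above; the round_to_bucket helper is just mine for this example, not part of our tracker.

# Toy illustration of the asymmetric rounding described above. The bucket
# values are the drop-down entries for actuals; round_to_bucket is my own
# helper for this example, not part of our tracker.
BUCKETS = [2, 4, 8, 16, 32, 64]

def round_to_bucket(hours):
    """Report the bucket value closest to the true hours worked."""
    return min(BUCKETS, key=lambda b: abs(b - hours))

print(round_to_bucket(21))  # 5 hours over a 16-hour estimate -> reported as 16
print(round_to_bucket(11))  # 5 hours under a 16-hour estimate -> reported as 8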
To make sure my intuition was right, I whipped up a quick Python script to check this out. Sure, I know that a decent statistician could prove it with the power of Math, but I'm an engineer, so I ran a simulation instead. Here is a graph of the results:
The vertical axis is the percentage by which the reported actuals for the simulated batch of FRs fall below the estimate. Keep in mind that the total simulated time is pretty much right on the estimate; because the reported actuals have to fall into specific buckets, there is a gap between what is reported and what really happened.
The horizontal axis is the value of the deviation I used for my simulated engineers. Basically, it represents how far the actual times could stray from the estimates. At a deviation of 0, every task will be completed in exactly the estimated time. With a deviation of .1, most of the tasks will be completed within 10% of the estimate. (I used a normal distribution, so about 2/3 of the tasks would be completed within 10% of the estimate.) Some will be under and some over, but the average will still be right on the estimate. The higher the deviation, the more randomness I am throwing into the system. The estimator is still getting things right on average, but as the deviation grows, there are more outliers that take significantly longer or shorter than the time estimated.
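For anyone who wants to tinker with this, here is a minimal sketch of that kind of simulation. It is not my original script; the bucket values, the mix of estimates, and the sample size are assumptions based on the description above, so the exact percentages it prints won't match the graph.

# Minimal sketch of the simulation described above -- not the original
# script. Bucket values, estimate mix, and sample size are assumptions.
import random

BUCKETS = [2, 4, 8, 16, 32, 64]   # drop-down entries for reported actuals
ESTIMATES = [8, 16, 32]           # typical FR estimates
NUM_FRS = 10_000

def round_to_bucket(hours):
    return min(BUCKETS, key=lambda b: abs(b - hours))

def simulate(deviation):
    """Percent by which reported actuals fall below the total estimate."""
    total_estimate = total_reported = 0
    for _ in range(NUM_FRS):
        estimate = random.choice(ESTIMATES)
        # True effort is normally distributed around the estimate; clamp at
        # one hour so no simulated task takes negative time.
        actual = max(1.0, random.gauss(estimate, deviation * estimate))
        total_estimate += estimate
        total_reported += round_to_bucket(actual)
    return 100.0 * (total_estimate - total_reported) / total_estimate

for deviation in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"deviation {deviation:.1f}: reported actuals "
          f"{simulate(deviation):.1f}% below estimate")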
I think this graph does confirm my intuition. Even though the estimates are right on the money, the reported actuals will tend to be below the estimates because of the quirks in the reporting system. The effect isn't that big though. Even at a really high standard deviation of 50%, the quirks in the system are only accounting for a 7% difference. That's not really enough to back up my original hypothesis. Sure it's contributing to the difference between estimates and reality, but not by too much.
The system has an interesting mathematical quirk, but it probably doesn't have a huge effect on real life. I'll have to look at some other metrics to see what else can be done to improve our process. That's the way science works. Make an educated guess. Test it out. Right or wrong, you still learn something. Lord Kelvin would be happy.
TCG achieved yet another milestone this week: every single one of our project managers is now Project Management Professional (PMP) certified by the Project Management Institute. Nina Preuss crossed the line (in a good way) -- good job, Nina! You can read about it here.
At TCG, we are committed to the best practices captured in the PMI's Project Management Body of Knowledge (PMBOK), and our project managers have either been PMP certified or have been working toward certification. With Nina's achievement, we now have 100% coverage.
We just posted this news at TCG.com regarding some Lean Six Sigma efforts ongoing at DOJ COPS.
We just posted a new job opening to the TCG web site, here, for a Business Consultant/Architect. If you've got system analysis, business analysis, and architecture experience, take a look!
GCN has an interview with Malcolm Fry, who's considered the "father" of the IT Infrastructure Library (ITIL). Fry says that "ITIL is, to some degree, enterprise architecture". But as Wikipedia accurately describes, ITIL is a set of service management best practices. It has nothing to do with describing the performance, business, services, technology, and data structures that support an organization. It has everything to do with process and nothing to do with architecture.
I will certainly concede that the tools that support ITIL (change management databases, incident reporting systems, etc.) realize some elements of an EA but they aren't the EA itself.
There's enough confusion about what EA is without this kind of muddying of the waters. I'm by no means an EA expert, but I know enough to see that claiming "ITIL is EA" is complete hooey. It's like saying that CMMI is EA -- total nonsense.
I hope to investigate ITIL in a bit more depth in coming weeks. It originated in the UK and I suspect it can be well applied here in the US to fill in the gaps between the PMBOK (which describes how to run an IT project organization) and CMMI process areas (which describe how to conduct systems integration work). Has anyone got experience in mapping these against each other?
TCG's Senior CMMI Level 2 Mentor, Maureen Sullivan, just published a great article on www.tcg.com: What is CMMI Level 2 ... and How Do We Get There?: A Brief Guide to CMMI Level 2 for Small Software Development Organizations. The article is intended to provide a base level of information about CMMI by answering seven common questions that Maureen has received while delivering our CMMI Mentoring Service. If you have more questions about CMMI or process maturity, send them our way.