tag:blogger.com,1999:blog-22915540788370788522024-03-12T23:05:46.248-05:00On Programming and Applications DevelopmentLessons I learned, and my observations from the worlds of programming, software development, and ITTarek Abdelmaguidhttp://www.blogger.com/profile/11912000217230377978noreply@blogger.comBlogger22125tag:blogger.com,1999:blog-2291554078837078852.post-88315061437540854562015-02-04T19:04:00.000-06:002015-02-04T19:04:35.332-06:00A Story of Communication<div dir="ltr" style="text-align: left;" trbidi="on">
We had a pretty good offshore team! Our team members who had been on rotation to the remote site all came back very impressed. The offshore team was talented, knowledgeable, and smart. One had to wonder why our client didn't think very highly of them!<br />During one sprint, our offshore team signed up for a story that was prominent on the client's radar. They started by reviewing the story notes, huddling on the technical design, and coding away at the solution.<br />Meanwhile, the client team was happily receiving updates during standup, and sending messages of support and encouragement.<br />Soon, the team started getting feedback from the solution they had implemented. The results weren't promising. It seemed they were heading toward a dead end. The team convened and decided on an alternate approach. Diligently, they started fixing the problems right away.<br />The fix took longer than anticipated, but the team could now see the end in sight. They doubled down on their efforts to get the story done.<br />On another continent, the client was getting super nervous. The sprint was drawing to a close, and they still had a story in flight. It would look bad on their reports if we missed this story. They started inquiring whether we thought we would be done on time.<br />Our BA initiated many conversations with the offshore team, trying to understand what was holding them back. The team explained where they were, and the message went back to the client: we'll get it done on time.<br />The team ended up getting the story done on time, but with elevated levels of stress. They felt the client hadn't allowed them enough breathing room to finish their tasks, and had caused them to waste time and energy on useless meetings instead of focusing on the task at hand.<br />The client, on the other hand, was showing signs of losing confidence, and even began to wonder whether the offshore team was incompetent.<br /><br />
<i><b>What went wrong?</b></i><br />You may have concluded while reading this story that:<br />
<ul style="text-align: left;">
<li>The offshore team was not being transparent enough, by not sharing their status earlier with the client.</li>
<li>The client wasn't proactive enough, by not asking more probing questions.</li>
<li>The offshore team was just trying to save face: why send a message that they were running late when, soon enough, they believed, they could send the message "we are done"?</li>
<li>The client wasn't supportive enough. They should have offered to help sooner.</li>
<li>It's all because of cultural differences.</li>
</ul>
<div style="text-align: left;">
<br />The answer is a little bit of all of the above. A direct consequence of working in a distributed setting is a lower quality of communication. None of us is perfect. We are bound to make mistakes, and something will always get lost in translation.<br /><br /><b><i>What's the way out?</i></b><br />It's easy enough to say "over-communicate," but that doesn't give enough guidance as to what we should do. Here are some concrete suggestions that may help:</div>
<ul style="text-align: left;">
<li>Build trust with the offshore team. Make sure you are seen as an ally. If the offshore team perceives they have lost your trust, they will be that much less likely to share potentially bad news.</li>
<li>Seek opportunities for communication outside normal work sessions. It's amazing how much work-related knowledge you can gain in a non-work setting. You get better context, a better understanding of the team dynamics, and learn better ways to communicate fruitfully.</li>
<li>Spend face time with the offshore team. The connections you build on a personal level while physically present will help immensely later, when you're remote again.</li>
<li>At the same time, avoid at all costs attempting to micro-manage remotely. Not only will you lose trust, you'll also slow the team down, introduce process friction, and become a bottleneck.</li>
</ul>
<div style="text-align: left;">
A better process flow in a distributed setting starts with building awareness of the situation. Start with the mindset that you are already losing information, and build mechanisms to counter that loss.</div>
</div>
Word Request: Email is Failing to Advance our Understanding (2014-04-04)<div dir="ltr" style="text-align: left;" trbidi="on">
So, here is the situation: we are having a discussion over email, and the discussion is now exhibiting the following characteristics:<br />
<ul style="text-align: left;">
<li>It is getting nowhere.</li>
<li>Everyone seems set in their ways.</li>
<li>We are pretty much talking over each other.</li>
<li>It’s as if we are communicating on different wavelengths.</li>
</ul>
<div style="text-align: left;">
We need to recognize the situation, and put an end to this.<br />
<br />
And therein lies my humble request: a word someone can respond with that identifies the situation and calls for continuing the discussion over another medium.<br />
<br />
I’ve seen the pattern above so many times that I believe labeling it will help us all communicate better. Here are some wordy descriptions that may be used:</div>
<ul style="text-align: left;">
<li>Continuing this discussion over email is harmful.</li>
<li>Continuing this discussion over email will cause more harm than good.</li>
<li>This discussion thread has reached a point of negative return.</li>
<li>We have reached a point where we should continue this discussion using a different medium.</li>
<li>Email is not the best way to continue this discussion.</li>
<li>Email is failing to communicate the point.</li>
<li>Email is failing to advance our understanding.</li>
<li>Email fails to advance the subject.</li>
</ul>
<div style="text-align: left;">
Yes, we can make an acronym, but down with acronyms. The world doesn’t need another EFAU, or EFAS. Let’s keep that as a last resort.<br />
<br />
Any ideas?</div>
</div>
How to Make Feature Teams Work for You (2014-03-11)<div dir="ltr" style="text-align: left;" trbidi="on">
Let's say that I'm a CTO, and my IT program management is a mess. I can't allocate budget based on business priorities and company-wide initiatives. So I decide to unify all my IT force into a single pool of resources, and divide them into short-lived feature teams, formed around my current priorities. As my priorities change, I dissolve some teams and form others. And it is like a dream come true: now I can allocate my budget easily, have predictability in my spend, and avoid wasting money on projects that are not aligned with my goals. Or is it?<br />
<br />
Let’s look at this in a bit more detail. Feature teams will either work on existing assets, or develop new ones, and perhaps decommission old ones in the process. Now, what will be the state of these assets while feature teams work on delivering to their respective initiatives?<br />
<br />
Consider the following analogy. Let's say that every one of my IT assets, be it an application or a backend service, is an airplane. Some teams are building new airplanes. Other teams are adding capabilities to existing airplanes. Some airplanes are in flight, delivering business value. Some airplanes require more maintenance than others. But each one has its specialized flying instructions and maintenance procedures. Consider my feature teams to be the flying and maintenance crews. Based on my business priorities, I assign crews to airplanes: building new ones, adding capabilities to existing ones, or decommissioning some and replacing them, in whole or in part, with others. All the while keeping all airplanes flying.<br />
<br />
No, wait, who is keeping the airplanes flying? That is not a business imperative of any of the feature teams. An implicit assumption, maybe? Is there a separate maintenance crew to keep all airplanes flying? How are we going to ensure that feature teams will respect the flying procedures of each airplane?<br />
<br />
And how are we going to ensure that feature teams will not step on each other's toes? How do we ensure that the flying integrity of each airplane is not compromised?<br />
<br />
If someone were to tell me that this is similar to the <a href="https://dl.dropboxusercontent.com/u/1018963/Articles/SpotifyScaling.pdf">model</a> popularized by <a href="https://www.spotify.com/">Spotify</a>, then I would respectfully beg to differ. In short, we can't simply wish for all the benefits of feature teams while ignoring all the challenges above.<br />
<br />
Let's agree on one thing first: I should be able to shift some of my budget and resources to respond to my current priorities. And at the same time, I should keep all my assets healthy, and all my airplanes in perfect flying condition.<br />
<br />
<b><i>How can I achieve both of these goals?</i></b><br />
<br />
It is imperative that I keep a focused, long-lived team around every one of my assets. The size of this team will depend on the complexity of each asset. We can even have a team be responsible for multiple assets.<br />
In addition to these asset teams, we can still have our feature teams. But if a feature team would like to modify an asset to deliver a business need, they will have to embed into, or extend, the core asset team. This extended team will have to work in harmony to preserve the integrity of the asset, ensure the compatibility of the new implementation with the existing architecture, and maintain alignment with the long-term goals for that asset. Any changes will need to be approved by the core team. If the feature team needs to change multiple assets, then they'll have to divide up, or work on one at a time. There could be multiple feature teams extending the same core team for one asset. If this is the case, or if there is an asset that undergoes continuous changes by multiple feature teams, consider expanding that asset's core team.<br />
<br />
This way, you'll ensure that all your airplanes are in sound flying condition, and that none will go crashing on you because of a lack of maintenance, or because different crews are trying to pull them in different directions.</div>
Successful Distributed Development: Discipline, Awareness, and Initiative (2014-01-17)<div dir="ltr" style="text-align: left;" trbidi="on">
For many of today's IT organizations, distributed development is not optional. Rather, it is a fact of life. We should be mindful of the limitations introduced by this mode of operation, while working constantly to mitigate them. It is a challenge that we have to actively tackle. Focusing solely on the tools misses the opportunity for true collaboration. Tools should be considered an enabler; we won't be able to work remotely without them. But having the tools, by itself, doesn't guarantee success.<br />
<br />
It's important for us to understand what the ideal is, so that we can strive to come as close as possible to achieving it, given whatever constraints our work environment imposes on us. There is no substitute for face-to-face interaction. There is nothing better than a co-located, cross-functional team. As we design our work activities, we should strive to converge as much as possible to these ideals.<br />
<br />
I'm guessing none of this is new to you. Yet time and again, distributed projects suffer from communication breakdowns, misunderstandings, and unmet expectations, among many other dysfunctions, and often resort to heavy processes that only make things worse. There are successful distributed teams, however. Below are a few of the traits exhibited by those teams. Adopting these traits in your distributed environment can help you converge closer to the ideal.<br />
<h4 style="text-align: left;">
Discipline</h4>
<div style="text-align: left;">
I often hear project leaders complain that their teams are not working very well together despite having state-of-the-art telecommunication tools. It is valid to question the tools' adequacy, ease of use, usability, etc. It is more important to observe when, how, and even if, the team is using them. Compare your current situation to that of a co-located team. Play a what-if scenario: what if everyone had been working in the same room?<br />
You will need a dedicated facilitator, a catalyst of sorts, in every one of your locations, to keep nudging people to reach out to each other. It is not fair to simply expect everyone to remember to step out of their immediate challenge to seek help, or to solicit a different perspective. Software development is an intellectually demanding profession, and it is not uncommon for people to get consumed by their immediate task, and forget to reach out. There is a set of dynamics that can only happen if the whole team is co-located. You could hear a couple of coworkers arguing about a problem you solved yesterday, or perhaps solving a problem you'll face tomorrow. You could have just come out of a planning session with a new understanding of the product vision, and you may just share that with your colleagues. And that is where the team facilitator's role comes in: to play the above what-if scenario. How about we tell the other offices about this? How about we consult with the other locations to see if they have encountered something similar? But it doesn't stop there.<br />
<br />
A disciplined distributed team will embrace certain values and adhere to a set of practices that ensure that the whole team operates as a single, cohesive unit. It's easier for us to fall back into our comfort zones, or be consumed by tasks, than to always remember to reach out. The facilitator's role is to make sure that the distributed team never misses an opportunity to act as a co-located one, whenever possible.</div>
<h4 style="text-align: left;">
Awareness</h4>
<div style="text-align: left;">
Let's face it, we don't naturally know how to work effectively in a distributed environment. We may know we have teammates in other locations. We may be curious about how they spend their days, want to learn about the challenges they face, and look forward to meeting them in person. But none of that means this knowledge will be translated into changing how we perform our day-to-day work to adapt to and address the limitations of this setting.<br />
<br />
Well, our designated facilitator is there to change this reality. By actively seeking opportunities for cross-site collaboration, events, and feedback sessions, they continuously raise the whole team's awareness that a distributed setting requires a different mode of operation.<br />
<br />
It's only when all the team members are fully aware that location barriers start coming down. There are certain signs you can watch for to assess whether the team has reached such a level. For example, statements like "let's wait until you are here next week to discuss this", "I couldn't really explain myself to the other team over the video conference", or "I couldn't really tell if they were happy or mad with this change" are clear red flags. By contrast, when the team has achieved full awareness, they won't let location be a barrier to effectively communicating ideas, or a factor in whether or not they collaborate on a task. They become adept at explaining themselves and actively seeking feedback, thus consciously and proactively overcoming the limitations of the tools and the remote setting.</div>
<h4 style="text-align: left;">
Initiative</h4>
<div style="text-align: left;">
While all of the above is good by itself, and while an active facilitator can help tremendously toward this end, we will need a team of self-motivated individuals to really conquer location barriers. Individuals are the ones who carry out the necessary tasks to accomplish the team's goals, and they are the ones who experience first hand the pains and the joys of getting things done. Unless we are all willing to step out of our comfort zones, seek new ways to make things better and overcome the daily challenges, and push each other to continuously improve, we won't have a chance of achieving our collective full potential.<br />
<br />
I'll leave you with a quick tip: when choosing a mode of communication with a colleague, consider upgrading to a higher-touch one. If you are about to send an email, how about an instant message instead? While you are at it, wouldn't a phone call be even better? But then, what's preventing you from having the discussion face to face? Oftentimes, we overestimate the risk of being disruptive, while underestimating the limitations of more passive forms of communication, like email. If you've experienced long-winded email threads going on seemingly forever, consider getting together in a room, virtual as it may be, to talk things over. The dynamic will be vastly different, in a good way.<br />
<br />
If we can’t all co-locate, we should try to come as close to it as we possibly can. We must be deliberate and disciplined in how we adapt our work to this new environment, maintain constant awareness of the situation, and demonstrate the initiative to challenge its limitations.</div>
</div>
We Won't Fail, and It Won't Be Fast (2012-12-11)<div dir="ltr" style="text-align: left;" trbidi="on">
"Let's try it, and fail fast" may not always be a wise approach.<br /><br />We had an issue: our project was being asked to carry out a piece of work in a predetermined way. We had fundamental disagreements with the proposed approach. We thought it was inefficient, added unnecessary complexity, and created a lot of waste. However, we faced considerable pressure to go along, given certain organizational and political constraints.<br /><br />
The idea was floated: "Let's just try it, and fail fast, when it becomes clear it's really not adequate for the task at hand." As appealing as this might seem, it carries considerable risk.<br /><br />
Let's say we did that. We proceed to develop the application the way we are asked: we waste our time building this interface, integrating with that system, persisting transient, in-process data elements, etc. Now, at what point exactly do we fail fast?<br /><br />Consider the following two observations:<br />
<ul style="text-align: left;">
<li>We always want to make things work, and</li>
<li>Waste is not always acknowledged as such.</li>
</ul>
<div style="text-align: left;">
Even when we work in a less-than-ideal environment, we always want to make the best of it. Even when our team is asked to accomplish a task while unfairly constrained, we will work hard to reach the best outcome. We will make it work, whatever the cost might be. What we end up with is, still, far from ideal, but we will go the distance, because we don't like to fail.<br /><br />When we then try to show the powers that be how much extra complexity or waste there is, it's unlikely that it will be acknowledged as such. For example, if your team's productivity was hindered by a requirement to keep reams of documentation up to date at every step along the way, with no consumer of this information whatsoever, someone, likely from the powers-that-be circle, will rise to assert how valuable it will eventually be.<br /><br />The world of large enterprises today has lots of hidden inefficiencies, originating from siloed teams and competing divisions. While we should always challenge this state, we should also understand when failing fast is not a realistic option.</div>
</div>
Profiling Lazy Evaluations (2012-06-02)<div dir="ltr" style="text-align: left;" trbidi="on">
<i><b>The point:</b></i> Lazy sequence evaluation renders the results of simple code profiling useless. Alternate techniques must be devised to correctly find code hot spots.<br /><br />Recently, I coded a function in Clojure that wasn't performing fast enough. Admittedly, I'm new to the language, so I tried the techniques I'm familiar with to find where the function was spending most of its execution time. I used the <span style="font-family: 'Courier New',Courier,monospace;">time</span> function, as well as the profile library from clojure.contrib, wrapping the pieces of code from the outside in, trying to close in on the slow parts. After some time, I was not getting any good information. At some point, I seemed to lose any indication that any meaningful time was being spent executing the wrapped code.<br />
<br />That is, until I finally noticed something very telling.<br />
<br />In an attempt to speed up a piece of code, I was checking to see if a collection was empty before processing further. The code looked like this: <span style="font-family: 'Courier New',Courier,monospace;">(empty? coll)</span>, and it was taking one second to execute!<br />
<br />Obviously, something was wrong, and it was my understanding of how to effectively profile this code. Since most of the underlying code was using lazy functions and sequences by default, the empty check caused a chain of delayed-execution functions to activate. Well, mystery solved then, but how do I effectively profile this code to find where the time is being spent?<br />
<br />
The most reliable measurements I got were from functions that are self-contained: those that process all of their input collections to return a number, for example, or those that don't process collections at all. <span style="font-family: 'Courier New',Courier,monospace;">reduce</span> also exhausts collections. Calling <span style="font-family: 'Courier New',Courier,monospace;">count</span> has a similar effect. I was able to use these techniques to get reliable timings because I knew that my code would process all the items in the underlying collections anyway, so forcing the realization of the full collection wouldn't change the overall time. But what if that wasn't the case?<br />
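The effect is easy to reproduce in any lazily evaluated language. Below is an analogous sketch in Python, with a generator standing in for a Clojure lazy seq (the names, like `slow_square`, are illustrative, not from the original code): timing the construction of the pipeline reports almost nothing, while the real cost surfaces wherever the sequence is first forced.

```python
import time

def slow_square(x):
    """Stand-in for an expensive per-item computation."""
    time.sleep(0.001)
    return x * x

# Building the lazy pipeline costs (almost) nothing: no element
# is computed yet, so timing this line tells us nothing about
# where the real work happens.
start = time.perf_counter()
lazy = (slow_square(x) for x in range(100))
build_time = time.perf_counter() - start

# An innocent-looking peek forces the first element to be
# realized -- the cost shows up here, not where the pipeline was
# defined. (Clojure's chunked sequences can realize 32 elements
# at a time, making the skew even more surprising.)
start = time.perf_counter()
first = next(lazy, None)
peek_time = time.perf_counter() - start

# Forcing full realization (like Clojure's doall, count, or
# reduce) attributes the remaining cost where it belongs.
start = time.perf_counter()
rest = sum(1 for _ in lazy)
force_time = time.perf_counter() - start

assert build_time < peek_time < force_time
```

The takeaway matches the post: measure at points where the sequence is known to be fully realized, or the numbers will point at the wrong code.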
<br />So far, I don't have a great answer. For the time being, I'll resort to measuring the most elementary of operations in the code, as they provide the most reliable information.</div>
Instance Thrashing with Amazon EC2 Autoscaling (2012-03-29)
This post explains a situation our team encountered when we tried to use EC2 Auto Scaling for a particular application. We didn't end up using auto scaling. Instead, we allocated enough instances in advance to respond to the anticipated load.<br /><br /><span style="font-style: italic; font-weight: bold;">In a nutshell:</span> Automatically added instances could not handle the high load, and were immediately removed from the pool, rendering auto scaling unusable.<br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1ygTU17QKEHG9xs95UFH0QCEzwWDzWh3f21k9H41_1oxYgaTpPjfyFbP0n2hkaHksAGmDeDuXCrjU-oupUXH5YbAYvMyYkxAqAHMnppXPwahEjydUkXeBXek2GyihGmzxwGlR9_C1zck/s1600/cloudwatch.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 193px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1ygTU17QKEHG9xs95UFH0QCEzwWDzWh3f21k9H41_1oxYgaTpPjfyFbP0n2hkaHksAGmDeDuXCrjU-oupUXH5YbAYvMyYkxAqAHMnppXPwahEjydUkXeBXek2GyihGmzxwGlR9_C1zck/s400/cloudwatch.jpg" alt="" id="BLOGGER_PHOTO_ID_5725431006780352034" border="0" /></a>The graph depicts CPU utilization for 30 EC2 large instances.<br />We started with 4 instances, allowing the group to grow to up to 30 instances. The load test was designed to gradually increase the load to the maximum expected, and then sustain it at that level for a period of time.<br />The 4 instances behaved as expected, until the point where we experienced request timeouts.
New instances were added to the group as expected, but we didn't observe an improvement in response time, or a reduction in dropped requests. This continued until the load was eventually reduced.<br />During the high-load period, we kept querying for the number of healthy instances, and always found the number too low. The average was about 7.<br />It's not immediately apparent in the graph above, but it does show instances coming into the pool, at which point their CPU utilization peaks; soon after, the same instances' CPU utilization goes down.<br />We expected that, with new instances added to the pool, CPU utilization would be reduced across all instances. Since all instances were configured the same way, we expected more or less the same CPU utilization across the group. Also, we expected the response time to go down in a similar pattern.<br />The average response time was stuck at 60 seconds, which is a signal that instances were dropping requests: the request timeout was set at 60 seconds, thus skewing the average to this number.<br />This behavior led us to conclude that our instances could not start up normally under a heavy load. Further investigation was certainly due, but we never got around to it. It was deemed safer to keep enough instances up to respond to the anticipated load, which we confirmed in a subsequent load test.
A Story of a Software Project (2010-07-22)
Iteration 1: The team picks up the first stories, and makes good progress. The result is showcased to the customers. The mood is encouraging.<br />Iteration 2: The team churns a bit, trying to get the first stories closed and iron out some technology choices.<br />Iteration 3: Not enough stories are being closed. The team's velocity is lower than needed.
The PM gets worried, and starts calling meetings and raising red flags. The PM declares that the team needs to catch up, and look for ways to increase velocity.<br />Iteration 4: The team makes good strides, and appears to be back on track. Nerves settle down a bit.<br />Iteration 5: The team's velocity is soaring. The PM says that given the current velocity, the team will meet its target date. The team starts taking care of technical debt.<br />Iteration 6: The business functionality is taking shape. The customers start to get a feel of how the system works. They start asking for modifications.<br />Iteration 7: The customers become more demanding. They notice some gaps between the functionality of the application, and what is needed to run the business. Defects start to creep in.<br />Iteration 8: The PM talks the business out of some of their demands, and the team devises workarounds for some outstanding issues.<br />Iteration 9: Faced with approaching deadlines, the PM asks developers to stay late and work over the weekends to finish the remaining tasks. The code quality suffers and technical debt increases.<br />Iteration 10: The team manages to finish all the remaining tasks. The application is put in production, with minor hiccups. Time to celebrate.<br /><br /><br />Does this sound familiar?<br />The team delivered on time. Is there a problem here?<br />The above pattern causes hardship for the team. The resulting code quality is rarely satisfactory. 
But we shouldn't be surprised or get overly worked up because of things that are really to be expected:<br />- Estimates are not always met, because they are estimates.<br />- The team takes more time in the first iterations because it's the first time this team has tackled this problem.<br />- The customers don't like what they see the first time, because it's the first time they see it.<br />- The project is taking more time than expected because our expectations are just now being reality-checked.<br />- The developers are being asked to work extra time because the team's management over-promised. However, the developers had no clue initially whether these promises could be met. Everything looked good on paper.<br /><br />What's the way out of this?<br />This is not an easy problem. What makes it even more difficult is the fact that the team delivered after all, reducing the incentive for change. There are ways, however, to make things better:<br />- Educate all parties on all aspects of the project.<br />- Get the customers involved as early as possible.<br />- Manage all parties' expectations.<br />- Communicate regularly, and facilitate information sharing.<br />- Make it clear that the process of adaptation also includes dates and scope.<br />- Learn from the past. If you've seen this before, it's likely that you'll see it again unless you change your approach.
Shouldn't We Local-Optimize at Bottlenecks? (2009-11-05)
The short answer is no.
Once we start thinking local, we are heading down the wrong path.<br /><br />Consider what we should do at a bottleneck:<br /><ul><li>Increase the resource's throughput, by increasing its efficiency.</li><li>Manage the flow in the system to reduce idle time at the resource.</li><li>Add more capacity, by introducing other resources capable of the same function.</li><li>Outsource a portion of the work to resources outside the system.</li><li>Rethink the need for some work to go through the bottleneck.</li></ul>You'll notice that only the first of these points is local in nature, and we should consider it as only one option. It may not be the best one.
What is Wrong with Local Optimization Anyway? (2009-10-23)
How could it be wrong to optimize anything, local or not?<br />Well, if by local optimization we mean having a resource in our system utilize an optimum amount of its inputs to produce timely, sufficient, but not excessive, output to subsequent steps in the process, then there is nothing wrong, as long as this optimization contributes positively to the system's goal.<br />Note that timely, sufficient, and not excessive output is defined by the subsequent steps in the process.
As such, this output might, at times, be zero.<br />Note also that optimizing the whole system may call for one step or process to be removed altogether.<br />If this is how we are approaching the problem of efficiency, then we are not actually doing local optimization.<br /><br />Consider, however, the following approaches to optimization:<br /><ul><li>Increasing resource utilization to 100%.</li><li>Getting the maximum possible throughput out of every resource.</li><li>Keeping everyone busy all the time.</li><li>Removing all idle time.</li></ul>If this is our focus, then we are heading for trouble, and we are introducing significant waste into our system.<br /><br />To see why this is the case, consider the following consequences of increasing a resource's throughput in our system to the maximum:<br /><ul><li>More inventory to manage in subsequent processes, if these subsequent processes are not ready for, or capable of, consuming all the output.</li><li>More load on subsequent processes, since now they will have more input to process.</li><li>Delays in getting urgent work done, since there is no slack in the system to handle occasional spikes resulting from natural statistical variations.</li><li>More work being stuck at bottlenecks.</li><li>Increasing demand artificially on upstream processes, since this demand is not driven by the needs of the market or the ultimate customers.</li><li>Increasing demand on resources required to maintain the high efficiency.</li><li>The process of optimization itself will consume resources. The overall gain may not exceed the cost.</li></ul><br />Chasing local efficiency, then, is a waste. One has to look for alignment with the system's goals to define what, where, how, and how much to optimize, weighing costs against benefits.<br /><br />"But wait," you may remark, "how about local optimization at bottlenecks?", which is, granted, a nice try. 
But this will have to wait for another post.Tarek Abdelmaguidhttp://www.blogger.com/profile/11912000217230377978noreply@blogger.com0tag:blogger.com,1999:blog-2291554078837078852.post-77299726448973621502009-10-07T18:36:00.001-05:002009-10-08T18:35:06.061-05:00Does Waterfall Make More Sense?I came in contact with a few people who were actually content with waterfall.<br />A senior dev explained to me that waterfall is simple, everyone gets it, and it's easy to implement. There are well-defined, easy, consecutive steps to be followed.<br />An upper manager was very keen to find ways to convince his company's leadership that waterfall fits his department really well, thus avoiding the drive to adopt agile. From his perspective, waterfall provided predictability. He knew at the beginning of the year what his budget was, what projects he would be working on, and what the duration of each project would be.<br /><br />It also makes sense to design something before building it. If you don't design it beforehand, how do you know what you will be building? How do you know how much it will cost? How do you decide if it's worth it? How do you compare it to other options?<br /><br />In our day-to-day life, we demand predictability. Before we hire a carpenter to install new kitchen cabinets, or ask a mechanic to service our car, for example, we want to know beforehand how much it will cost, and how long it will take. We are really disturbed when either of these estimates is not met, although we know they are just estimates.<br /><br /><span style="font-style: italic;">So what is the problem in expecting the same from software projects?</span><br />We can always give the example of an apparently simple job gone badly, as when an air conditioning engineer starts asking when you last cleaned your air vents or changed the air filters, only to discover that you'd have to pay more and wait longer to have your air system fixed.
Let's put this example aside for now.<br />Instead, consider how likely change is in your project, from inception to project end, in the following areas:<br /><ul><li>The business needs from your application.</li><li>The specified requirements for your application.</li><li>Your understanding of the requirements.</li><li>Technology.</li><li>The project team's mastery of the technology.</li><li>The people who are doing the work.</li></ul>If your meter reads anything other than low for all the above, you should rethink waterfall.<br />Waterfall does make sense for certain types of projects, but software development projects are a breed of their own, with a lot of sub-varieties within.<br />And change is inevitable in software projects, because you'll never build the same application twice, for the same business need, with the same people, who have the same experience, using the same technology, will you?Tarek Abdelmaguidhttp://www.blogger.com/profile/11912000217230377978noreply@blogger.com1tag:blogger.com,1999:blog-2291554078837078852.post-50375199322286546232009-09-09T17:39:00.001-05:002009-09-09T17:41:54.722-05:00Thin Slice vs. Layer-wise, Or Depth-first vs. Width firstAgilists always advocate for incrementally developing a thin slice of the application, in the form of a user story, that implements the end-to-end changes required to deliver business value to the customer. This will normally entail changes to the UI, front-end logic, back-end components, databases, service interfaces, etc. This process results in the end user receiving value for every story delivered.
The customer gets this value early, while developers manage only one small change at a time, significantly reducing code integration overhead.<br /><span style="font-weight: bold; font-style: italic;">But what about efficiency?</span><br />Some will argue that working on different layers requires different skills, and that there is waste incurred in task switching, e.g., switching from working on the UI to working on the back end. It can be demonstrated that working width-first, as in developing 10 UI components (A1, A2, A3, ...), then 10 back end components (B1, B2, B3, ...), etc., will require less time than working depth-first (A1, B1, ...), then (A2, B2, ...).<br /><span style="font-style: italic; font-weight: bold;">But then, who's right?</span><br />In fact, both are. It is indeed more efficient to work on the same task type, since there is less task switching, and there are mass-production techniques that could be employed. However, the customer gets value much later, process feedback is delayed, and there is more integration work, with problems that are only discovered late in the process.<br />It so happens that in software development, the Agile method is far superior. The benefits of an Agile process increase in line with the rate of change in the project.<br /><span style="font-style: italic; font-weight: bold;">But what about efficiency?</span><br />To the non-agilists, it should be noted that efficiency isn't the point. Thinking mainly about efficiency distracts from the goal of delivering more value to the customer, at a faster pace. To concentrate on efficiently producing UI components, for example, increases waste by delivering more than the customer needs, by increasing the rework required to adapt to changing/better understood customer needs, by delaying feedback, and by increasing work in progress.
This is a form of local optimization that negatively impacts the overall goal.<br />But also, to the agilists, the efficiency argument should be understood. There will be opportunities during development to queue up similar tasks and leverage specialized skills and mass-production techniques to gain immediate task efficiency.<br />The idea, then, is to never let efficiency distract you from your goal, and to only pursue it if it doesn't increase work-in-process, and doesn't delay feedback or customer value.Tarek Abdelmaguidhttp://www.blogger.com/profile/11912000217230377978noreply@blogger.com2tag:blogger.com,1999:blog-2291554078837078852.post-68533716252070687402009-07-24T17:43:00.004-05:002009-07-24T18:22:41.893-05:00WebSphere Trace, Spring, commons ToStringBuilder, Hibernate, and LazyInitializationExceptionWe've seen the exception below because WebSphere diagnostic tracing was enabled.<br />We disabled it by editing the logging properties from the WebSphere console/Logs and Trace/select your server/select Diagnostic Trace/Runtime/click Change Log Detail Levels, then editing the logging information and removing unneeded packages after "*=info:" ...<br /><br /><span style="font-size:130%;">Stack trace:</span><br /><br />[2009-07-23 15:56:10,078] WebContainer : 1651 org.hibernate.LazyInitializationException ERROR - could not initialize proxy - no Session<br />org.hibernate.LazyInitializationException: could not initialize proxy - no Session<br /> at org.hibernate.proxy.AbstractLazyInitializer.initialize(AbstractLazyInitializer.java:86)<br /> at org.hibernate.proxy.AbstractLazyInitializer.getImplementation(AbstractLazyInitializer.java:140)<br /> at org.hibernate.proxy.pojo.javassist.JavassistLazyInitializer.invoke(JavassistLazyInitializer.java:190)<br /> at com.mycompany.myproject.domain.MyObject_$$_javassist_20.hashCode(MyObject_$$_javassist_20.java)<br /> at org.apache.commons.lang.builder.HashCodeBuilder.append(HashCodeBuilder.java:452)<br /> at 
org.apache.commons.lang.builder.HashCodeBuilder.reflectionAppend(HashCodeBuilder.java:413)<br /> at org.apache.commons.lang.builder.HashCodeBuilder.reflectionHashCode(HashCodeBuilder.java:379)<br /> at org.apache.commons.lang.builder.HashCodeBuilder.reflectionHashCode(HashCodeBuilder.java:155)<br /> at com.mycompany.myproject.domain.DomainObject.hashCode(DomainObject.java:14)<br /> at java.util.HashMap.hash(HashMap.java:324)<br /> at java.util.HashMap.containsKey(HashMap.java:470)<br /> at java.util.HashSet.contains(HashSet.java:207)<br /> at org.apache.commons.lang.builder.ReflectionToStringBuilder.isRegistered(ReflectionToStringBuilder.java:135)<br /> at org.apache.commons.lang.builder.ReflectionToStringBuilder.appendFieldsIn(ReflectionToStringBuilder.java:660)<br /> at org.apache.commons.lang.builder.ReflectionToStringBuilder.toString(ReflectionToStringBuilder.java:867)<br /> at org.apache.commons.lang.builder.ReflectionToStringBuilder.toString(ReflectionToStringBuilder.java:339)<br /> at org.apache.commons.lang.builder.ReflectionToStringBuilder.toString(ReflectionToStringBuilder.java:173)<br /> at org.apache.commons.lang.builder.ToStringBuilder.reflectionToString(ToStringBuilder.java:124)<br /> at com.mycompany.myproject.domain.DomainObject.toString(DomainObject.java:24)<br /> at java.lang.String.valueOf(String.java:1505)<br /> at java.util.AbstractCollection.toString(AbstractCollection.java:469)<br /> at com.ibm.ws.webcontainer.srt.SRTServletRequest.setAttribute(SRTServletRequest.java:488)<br /> at org.springframework.web.servlet.view.AbstractView.exposeModelAsRequestAttributes(AbstractView.java:337)<br /> at org.springframework.web.servlet.view.InternalResourceView.renderMergedOutputModel(InternalResourceView.java:206)<br /> at org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:257)<br /> at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1183)<br /> at 
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:902)<br /> at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:807)<br /> at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571)<br /> at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:511)<br /> at javax.servlet.http.HttpServlet.service(HttpServlet.java:763)<br /> at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)<br /> at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1095)<br /> at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1036)<br /> at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:118)<br /> at com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:87)<br /> at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:832)<br /> at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:679)<br /> at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:565)<br /> at com.ibm.ws.wswebcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:478)<br /> at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.forward(WebAppRequestDispatcher.java:321)<br /> at org.springframework.web.servlet.view.InternalResourceView.renderMergedOutputModel(InternalResourceView.java:236)<br /> at org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:257)<br /> at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1183)<br /> at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:902)<br /> at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:807)<br /> at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571)<br /> at 
org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:511)<br /> at javax.servlet.http.HttpServlet.service(HttpServlet.java:763)<br /> at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)<br /> at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1095)<br /> at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1036)<br /> at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:145)<br /> at com.opensymphony.module.sitemesh.filter.PageFilter.parsePage(PageFilter.java:119)<br /> at com.opensymphony.module.sitemesh.filter.PageFilter.doFilter(PageFilter.java:55)<br /> at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:186)<br /> at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:130)<br /> at com.ibm.ws.webcontainer.filter.WebAppFilterChain._doFilter(WebAppFilterChain.java:87)<br /> at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:832)<br /> at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:679)<br /> at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:565)<br /> at com.ibm.ws.wswebcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:478)<br /> at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:90)<br /> at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:748)<br /> at com.ibm.ws.wswebcontainer.WebContainer.handleRequest(WebContainer.java:1461)<br /> at com.ibm.ws.webcontainer.channel.WCChannelLink.ready(WCChannelLink.java:118)<br /> at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:458)<br /> at com.ibm.ws.http.channel.inbound.impl.HttpInboundLink.handleNewInformation(HttpInboundLink.java:387)<br /> at 
com.ibm.ws.http.channel.inbound.impl.HttpICLReadCallback.complete(HttpICLReadCallback.java:102)<br /> at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:165)<br /> at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)<br /> at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)<br /> at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:136)<br /> at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:195)<br /> at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:743)<br /> at com.ibm.io.async.ResultHandler$2.run(ResultHandler.java:873)<br /> at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1473)<br /><br /><span style="font-size:130%;">Analysis</span><br /><br />It seems that enabling logging for the package com.ibm.ws.webcontainer causes WebSphere to log request attributes. During view rendering, the call to request.setAttribute(), in Spring's AbstractView.exposeModelAsRequestAttributes(), which is implemented by com.ibm.ws.webcontainer.srt.SRTServletRequest.setAttribute(), attempts to log model attributes by calling toString(). We add our domain objects as values for model attributes, and our DomainObject implements toString() using Apache commons ToStringBuilder, which recursively walks the whole object graph, causing all references to be initialized. This all happens outside the boundaries of a transaction, and hence we get the exception above for our lazily loaded references.<br /><br />Now this should be searchable, just in case ...Tarek Abdelmaguidhttp://www.blogger.com/profile/11912000217230377978noreply@blogger.com0tag:blogger.com,1999:blog-2291554078837078852.post-2518120949357394672009-06-27T22:45:00.002-05:002009-06-27T23:34:17.950-05:00The Mythical Man-Month(So at last I read it ...)<br /><br />By Frederick P. Brooks, Jr.<br />An enjoyable read indeed.
The book is for those with passion for software development, from someone who shares this passion.<br />The book contains a good deal of timeless advice, although one might wonder how much of the book is relevant today. I'd offer that most of it is. There were times when I was puzzled by the content, and completely missed the references to the machines, tools, and procedures. Nevertheless, it's amazing to see how much had changed, yet how much really hadn't. In that regard, the discussion of essential difficulties (the irreducible core complexity) and accidental difficulties (those pertaining to technology limitations, etc.) is especially illuminating.<br />Here are some useful pointers:<br /><a href="http://www.cs.unc.edu/%7Ebrooks/">The author's homepage</a><br /><a href="http://www.amazon.com/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959">On Amazon</a>Tarek Abdelmaguidhttp://www.blogger.com/profile/11912000217230377978noreply@blogger.com0tag:blogger.com,1999:blog-2291554078837078852.post-65413581473847469982009-04-19T12:44:00.002-05:002009-04-19T13:14:05.451-05:00Book: Radical Project Management(Note: this is a personal memory entry)<br />(Note on Note: More on this note later)<br /><br />The book "Radical Project Management" by Rob Thomsett is very interesting. It introduces a lot of good practices that are indeed in wide adoption today. I'd recommend it, if not for the actual practices, then for the underlying message to project managers.<br />The Thomsett group website: <a href="http://www.thomsett.com.au/">http://www.thomsett.com.au/</a>.<br />Supporting material and articles can be found here: <a href="http://www.thomsettinternational.com/">http://www.thomsettinternational.com/</a>. Of note are <a href="http://www.thomsettinternational.com/main/articles/path/pathology.htm">Project Pathology</a>, and <a href="http://www.thomsettinternational.com/main/articles/hot/games.htm">Estimation Games</a>.
This is not to discount other articles on the site, but these two were specifically mentioned in the book.Tarek Abdelmaguidhttp://www.blogger.com/profile/11912000217230377978noreply@blogger.com0tag:blogger.com,1999:blog-2291554078837078852.post-68374668549907790672008-10-26T14:07:00.003-05:002008-10-26T14:45:35.901-05:00The Ultimate Continuous Integration ToolWarning: this entry is out there. I cannot be blamed for wasting your brain cycles. However, by taking things to the extreme, some value may be gained.<br /><br />I'd like to continuously integrate all the code I write, and I want this to happen very continuously. That would happen if my code were integrated on every keystroke.<br />My IDE, the build machine (CI server), and the source control system will be very smart. Every time I press a key, they will mark a revision of the full code base and start running the build. Let's call this code revision a candidate revision (CR). The build, of course, includes compiling the code, running all unit tests, perhaps even functional tests, and whatever else we want to include. If the build passes, the CR is automatically checked into source control. If the build doesn't pass, then the whole thread is ignored. Obviously, since I won't stop pounding on the keyboard, the very powerful build machine will be running multiple builds at the same time. After all, computers are not yet so advanced that we can expect the build machine to run builds instantly. Oh well.<br />At the same time, the system will be doing the reverse: it will be continuously merging all successful revisions from source control (SC) into my local code base, since everyone else is so productively writing code at the same time as I am.<br />How could this work? Well, my IDE will be so well integrated with the super powerful CI server. The CI server will be the single authority deciding whether a build passes, and thus whether the revision is checked into source control.
The CI server will be running many builds at the same time, but will act as if all the changes occurred sequentially.<br />The CI server will be receiving a stream of candidate revisions (ok, just deltas) from multiple developer machines. It always starts from a successful source code revision (CR0). Based on the order in which a candidate revision (CR1) is received, it will run the build and determine if it should be checked into SC. At the same time, the CI server is also processing other candidate revisions (CR2, 3, ...) from myself and other developers. The CI server will be running multiple builds at the same time: a build for CR1, another for CR2 (on top of CR1), another for CR2 on top of CR0, in case CR1 fails, and so on.<br /><br />Here is a diagram of the process:<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNuwg8lqcHzz3v2CdsBZV2InLBrYh2L4ogxcX4ad5u9XVXVaq0new1-RMvS0C9n1jSxid5N_OO0-reAUU-x8wNqHbCwql2Vim8JayaPLmqp2r4R8ugfn_WlhNutwTuGXynVsH9rTJEqyE/s1600-h/Slide1.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 300px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNuwg8lqcHzz3v2CdsBZV2InLBrYh2L4ogxcX4ad5u9XVXVaq0new1-RMvS0C9n1jSxid5N_OO0-reAUU-x8wNqHbCwql2Vim8JayaPLmqp2r4R8ugfn_WlhNutwTuGXynVsH9rTJEqyE/s400/Slide1.jpg" alt="" id="BLOGGER_PHOTO_ID_5261546434780497506" border="0" /></a><br />This would be really nice.
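The speculative scheme just described (CR1 on top of CR0; CR2 on top of CR1; CR2 on top of CR0 in case CR1 fails) generalizes to building each candidate on top of every combination of the earlier, still-unverified candidates. A rough sketch of that enumeration, with hypothetical revision names:

```python
from itertools import combinations

def speculative_builds(pending):
    """For each pending candidate revision, schedule one build per
    combination of earlier candidates it could land on top of, since
    any of those earlier candidates may still fail its own build.
    Returns (base_candidates, candidate) pairs; an empty base means
    building directly on the last known-good revision, CR0."""
    builds = []
    for i, candidate in enumerate(pending):
        for k in range(i + 1):
            for base in combinations(pending[:i], k):
                builds.append((base, candidate))
    return builds

# Two keystrokes in flight after the last good revision:
print(speculative_builds(["CR1", "CR2"]))
# -> [((), 'CR1'), ((), 'CR2'), (('CR1',), 'CR2')]
```

Note the cost: with n candidate revisions in flight, this schedules 2^n - 1 builds, which is exactly why the build machine in this fantasy has to be so super powerful.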
Don't you think?Tarek Abdelmaguidhttp://www.blogger.com/profile/11912000217230377978noreply@blogger.com1tag:blogger.com,1999:blog-2291554078837078852.post-91499477534643081082008-10-19T22:29:00.003-05:002008-10-19T22:42:16.936-05:00ssh connection with public/private key pair not working on Leopard?<span style="font-weight: bold;">Solution</span>: try passing the private key file to the ssh command using -i:<br /><br /><span style="font-family:Courier New;"> ssh -i identity_file user@server </span><br /><br /><span style="font-weight: bold;">The problem</span><br />I tried setting up a DSA public/private key pair on Leopard, but I didn't accept the default private key file name. ssh didn't prompt me for the passphrase, and I was instead prompted for the normal username/password.Tarek Abdelmaguidhttp://www.blogger.com/profile/11912000217230377978noreply@blogger.com0tag:blogger.com,1999:blog-2291554078837078852.post-64277434512674575262008-10-19T21:50:00.005-05:002008-10-19T22:28:24.124-05:00Project Development Knowledge: Sharing and Enduring<a href="http://tabdelmaguid.blogspot.com/2008/10/keeping-project-development-knowledge.html">The previous entry</a> introduced the problem. The following is a discussion of practices that can help address it.<br /><br /><span style="font-weight: bold;">Standups</span><br /><br />Daily standup meetings [1], aka daily scrums, are a great way for the team to share information about the current tasks being implemented by different team members. It's a quick forum, at the end of which each team member will have basic knowledge of what the other team members are up to. This can be very effective in eliminating duplicate efforts, as well as helping team members relate to other work affecting their own.
Team members who notice the potential for cooperation and knowledge sharing during the standup should get together after the meeting to carry on with a further, more detailed examination of their work.<br /><br /><span style="font-weight: bold;">Pair programming</span><br /><br />Controversial as it may be, this practice is very effective in achieving higher team productivity through continuous knowledge sharing. The risk of knowledge leaving the team is greatly reduced. If we enhance this practice with frequent pair rotation, it's even more effective, as it spreads more knowledge throughout the development team and encourages cross-pollination of ideas.<br /><br /><span style="font-weight: bold;">Colocation</span><br /><br />Having to communicate information to other team members highlights a hand-off that is better avoided. It's much better if the whole team is there, witnessing and participating firsthand in the effort underway, rather than being told what happened. Colocation of the team, or as much of the team as possible, is very effective at eliminating that need. And when information is communicated within a co-located team, it's at least an order of magnitude more effective, efficient, and complete, compared to other means.<br /><br /><span style="font-weight: bold;">Code reviews</span><br /><br />When a pair spends a day modeling a piece of functionality, or refactoring a key part of the system, they are likely to want to tell other team members about it. Invariably, everyone in the team is interested to know what others are doing, and how certain problems are being addressed. While pair rotation helps here, getting the whole team to participate in a code review session extends some of this benefit to a wider audience.
It's also a great forum for seeking guidance, sharing opinions, and exploring novel ways to address issues.<br /><br /><span style="font-weight: bold;">Automation</span><br /><br />Automating a certain task is an excellent way of sharing how it's done. Automation enables other team members to achieve the task, as well as serving as documentation of how to accomplish it. For example, instead of me asking you how you query for the balance sheet, I can either use an automated script to get the information, or I could learn from the script how it can be done.<br /><br /><span style="font-weight: bold;">Self documenting, readable code</span><br /><br />"Code" can be considered a misnomer in this regard. We'd like to have code that does not require deciphering. We'd like to be able to know what the code is doing, and how it's done, clearly, with minimal effort. Herein lies an argument against clever programming techniques that make it harder to reveal intent and side effects. Use code reviews to highlight less-than-obvious techniques, and have the team work out what it's most comfortable with. For example, if multiple team members are having issues with a piece of code that uses reflection, consider first informing more members how the piece works, and, if necessary, consider an alternative implementation. Enabling higher team effectiveness is valued higher than programming cleverness.<br /><br /><span style="font-weight: bold;">Good check-in comments</span><br /><br />These can go a long way in telling the story of project development. A check-in delta tells you what changed. The comment tells you why. When a developer has to go through source code history to understand the rationale behind a certain change or design decision, these comments can be very helpful. Try to be helpful to the consumer of the comment.
For example, instead of typing "implemented story #123", provide more information: "Story #123: added capability to classes x and y to access context z to achieve a and b."<br /><br /><span style="font-weight: bold;">Wikis</span><br /><br />Project wikis have been around for a while. Dare I say that they are the norm these days? Wikis are an excellent source of the day-to-day knowledge needed by the team: for example, database connection information, URLs to local servers and useful documentation, pointers to tools, project- and domain-specific acronyms, etc. They are also useful in documenting repetitive development tasks that are yet to be automated, or are difficult to automate.<br />One particular use that I wish to highlight is documenting errors that are encountered during development, whether or not they are corrected. For example, if while starting a server we notice that we are getting an exception, we should start by documenting this fact. We then add the solution once we figure it out. Certain problems have a tendency to recur, while some are unlikely to occur again once fixed. Nonetheless, a similar issue might arise, and the fix may prove useful beyond the original use.<br /><br /><span style="font-weight: bold;">A searchable, archived email list</span><br /><br />I have not seen this one on many projects. However, a sizable portion of a project's information is exchanged over email: for example, feature discussions between developers and BAs, the resolution of design issues, technical announcements, such as the introduction of new build tools or code libraries, and project course-changing decisions, to name a few. An automatically archived, searchable, project-specific email list can be very valuable for future development. Using this knowledge base can aid in understanding why things came to be the way they are.<br /><br /><br />This list isn't exhaustive, and each item deserves its own discussion. I meant to hint and introduce.
None of these practices precludes another, and you can attempt all of them.<br />A bit of warning though: do not overdo any of these, and keep the practices light. After all, we are after agility. Don't let these, or any other practices, slow down the gemba.<br /><br />As usual, I welcome, and appreciate, the reader's feedback.<br /><br /><br />[1] http://martinfowler.com/articles/itsNotJustStandingUp.htmlTarek Abdelmaguidhttp://www.blogger.com/profile/11912000217230377978noreply@blogger.com2tag:blogger.com,1999:blog-2291554078837078852.post-88213027166598998082008-10-11T09:11:00.002-05:002008-10-11T09:17:04.873-05:00Keeping Project Development Knowledge: The ProblemThis is a two-part entry. This entry is an introduction to the problems that face project teams when development knowledge is lost. The second part will examine various practices and techniques that can be used to tackle these problems.<br /><br />By project development knowledge, I'm referring to the multitude of information required, and acquired, by the development team, that pertains to the project. This includes technical knowledge concerning languages and tools, as well as development methodology, processes, the business domain, etc.<br /><br />Is there a problem? To help answer this question, consider the following events:<br /><ul><li>A long-time team member is leaving the team.</li><li>An issue is identified with the software that the team is building. However, we know we've encountered this problem before.
If only we could remember how we solved it.</li><li>A new team member is joining the team, and he's started asking questions about the project, or is encountering some issues setting up his environment, and we have to rely on memory, or some veteran team members, to answer these questions.</li><li>A BA notices that he is being asked the same question more than once by different team members.</li><li>The tech lead notices, time and again, that team members are not following coding standards.</li><li>Two team members are working separately on fixing the same problem.</li><li>A new team member is brought in to take care of a code module that no other team member knows about. For example, when code is inherited from another software company after their contract has ended, when a team member is replacing another who was working alone on a piece of code, or when a certain area of the code that has been dormant for a long time now requires changes.</li></ul><br />These events represent times when project information is missed. This can be attributed to the following reasons:<br /><ul><li>Information leaves the team, along with parting team members.</li><li>Information is forgotten, and thus needs to be reproduced.</li><li>The information exists, but is not readily accessible; the information is not easily communicated.</li><li>The information exists, but we don't know that.</li></ul><br />How do we tackle these problems?
The next entry will discuss approaches that can help address some of these problems.Tarek Abdelmaguidhttp://www.blogger.com/profile/11912000217230377978noreply@blogger.com0tag:blogger.com,1999:blog-2291554078837078852.post-49616117074443572932008-09-28T22:32:00.004-05:002008-09-28T22:38:31.721-05:00Building and Maintaining Outstanding SystemsA quote from "The Leader's Handbook", by Peter Scholtes, under 'Outstanding Systems versus Outstanding People', subtitle 'What do leaders do?':<br /><br /><blockquote>Seek to create and maintain outstanding systems. The ideal is: outstanding systems, achieving excellent results, with the ordinary efforts of average people.</blockquote><br />Pretty insightful, challenging, and most notably, sustainable.Tarek Abdelmaguidhttp://www.blogger.com/profile/11912000217230377978noreply@blogger.com2tag:blogger.com,1999:blog-2291554078837078852.post-87004419797502706612008-08-22T21:34:00.002-05:002008-08-22T21:52:05.529-05:00Incremental Release Always Advisable? Not so fast!The benefits of the Agile practice of incrementally releasing software are well established. However, there are certain situations where it may not be the best fit for your client. This post considers one such scenario.<br /><br />In our example, the client is replacing an existing old system, which is currently being used to run the client's business. The client is launching a new project to adapt a new software package to address many of the shortcomings of the current system. However, the package will require a non-trivial amount of customization. At this point, there are two options:<br />1- Implement all the customizations necessary to the new package, migrate the data, then deploy the new package into production, and start using it instead of the old application.<br />2- Implement a minimal set of customizations to the new package, and release that into production.
Use the new customized functionality in the new system, while completing the business process in the old system. Once the functionality has stabilized, customize the next step in the business process, deploy it to production, and so on.<br /><br />The benefits of the second approach are well established, and there is no need to reiterate them. I'd like to concentrate on some of the challenges of this option.<br /><br />To aid the discussion, assume that the business process for the client includes the general steps A, B, and C. And assume that we start by customizing the new package to fulfill the needs of step A in the process, continuing B and C in the old system. To be able to do this, we have to perform the following:<br /><ul><li>Transfer the data captured in the new system to the old system, after completing step A.</li><li>Figure out how to trigger the data transfer from the new system to the old: at which event should the data be transferred to the old system?</li><li>Decide what to do when data changes in the new system after it has been transferred to the old system: Should we push the data to the old system every time it changes? If not, at what events should this happen? Is it acceptable to prevent further data changes in the new system after it has been transferred?</li><li>Work out the changes to the reporting needs of the client: Which system is able to provide which pieces of the data?</li><li>Figure out the implications of having the data unavailable in the old system until it has been transferred. For example, the data for step A in the process might be captured in the new system, but will not be available in the old system until it has been transferred. Will this have any implications on the business?</li><li>Work out the changes to the client's business process. For example, is there double data entry required to maintain the consistency of the configuration data? 
How and when should users use the new system as opposed to the old system?</li></ul><br />We may also have to answer the following questions:<br /><ul><li>If there is a data validation step that needs to be performed as part of step A, and that requires knowledge of previous transactions on related data from the old system, how will it be performed?</li><li>Is there a need to sync data changes from the old system back to the new system?</li></ul><br />There are also requirements on data maintenance. The data in the old and the new system must be kept in sync. Otherwise, the syncing operations will fail to map the data to the correct business entities.<br /><br />It should also be noted that the new system may not be amenable to syncing out of the box, and that certain changes to the way it functions may be necessary to allow the integration to work properly. For example, we may need to introduce a new status to mark whether or not the data has been, or is available to be, transferred to the old system.<br /><br /><span style="font-weight: bold;">Implications of the phased release approach</span><br />Given the above, we can see that the second approach has the following implications, as opposed to the first approach:<br /><ul><li>The client can only implement limited data restructuring or data cleanup in the new system, because this may break, or make very difficult, the process of mapping data between the new and the old system.</li><li>It also follows that certain benefits of moving to the new system will not be realized until all the steps in the process have been successfully moved to the new system.</li><li>In a similar vein, the business benefits that may be gained, or process improvements that can be attained, by moving to the new system will be delayed, because the new system will be constrained by the need to synchronize data to, and conform to the business process imposed by, the old system.</li><li>The old system will need to 
be up and running for a longer period of time, because the time required to build the customizations to the new system, plus coding for the synchronization logic, will be longer than simply implementing the customizations and then going live. I am assuming here that the team size remains the same.</li><li>A careful consideration of the client's budget is due. If the budget is tight, there may not be enough slack to accommodate the extra effort required to add the integration functionality between the new and the old systems.</li></ul><br />And you should also consider:<br /><ul><li>The cost to the business, in terms of delayed ROI and time-to-market benefits, resulting from delaying the release until all the necessary functionality is developed in the new system. Your client may not have sufficient incentive to warrant the cost.</li><li>The risk of deploying all the functionality of the new system in one step. It may be the case that your client has ways of reducing the risk to the business that make the risk/reward balance weigh in favor of this option. For example, the client may have acceptable manual processes, or may be able to revert to the old system, if part of the functionality in the new system encounters issues.</li></ul><br /><span style="font-weight: bold;">Conclusion</span><br />There are certain situations where an incremental release to production may not be advisable. Although there is evident value associated with releasing fast, the cost and risks associated with this option may outweigh the benefits. Other approaches may weigh better on this scale. This entry introduced a situation where there are factors at play that should be considered before choosing an approach. 
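To make the synchronization mechanics discussed above concrete, here is a minimal sketch, in Python, of the "new status" approach: records captured in the new system carry a flag marking whether they have been handed off to the old system, the sync picks up only untransferred records (which makes re-running it safe), and transferred records are frozen against further edits, sidestepping the harder problem of propagating later changes. All of the names here (Record, OldSystemGateway, the status values) are hypothetical, invented for this illustration; they do not come from any particular package.

```python
from dataclasses import dataclass

# Hypothetical statuses marking whether a record captured in the new
# system has been handed off to the old system.
READY = "ready_to_transfer"
TRANSFERRED = "transferred"

@dataclass
class Record:
    record_id: int
    payload: dict
    status: str = READY

class OldSystemGateway:
    """Stand-in for the integration layer that writes into the old system."""
    def __init__(self):
        self.received = []

    def push(self, record):
        self.received.append((record.record_id, dict(record.payload)))

def sync_completed_step_a(records, gateway):
    """Transfer every record that finished step A and hasn't been sent yet.

    Marking each record TRANSFERRED makes the operation idempotent:
    running the sync twice does not push duplicates into the old system.
    """
    for record in records:
        if record.status == READY:
            gateway.push(record)
            record.status = TRANSFERRED
    return records

def can_edit(record):
    # One possible policy: freeze a record once it has been transferred,
    # rather than pushing every later change back to the old system.
    return record.status != TRANSFERRED
```

Whether edits are frozen after transfer, or changes are re-pushed on every save, is exactly the kind of decision the bullet list above asks you to make with the client before committing to the phased approach.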
Such factors include the client's budget, the benefits of going live faster, the cost of developing the integration code, the cost of delaying the release, and the risk in releasing all the functionality of the new system in one step.Tarek Abdelmaguidhttp://www.blogger.com/profile/11912000217230377978noreply@blogger.com0tag:blogger.com,1999:blog-2291554078837078852.post-75253320505840022662007-10-14T14:52:00.000-05:002007-10-14T17:20:47.050-05:00Make Yourself DispensableThere are unique dynamics to teams that run for a long period of time. Eventually, as people roll on and off the team, some team members will have more information than others. Some members will have a wealth of technical and non-technical information, and their departure can leave the team less confident about its ability to make progress without taking a hit on productivity.<br />More knowledgeable team members have an extra responsibility. The burden mainly lies on them to share their information with other members, and to make themselves more dispensable.<br />While this goes against the common wisdom of job security prevalent in many environments, smart developers do not want to be stuck on the same project for an extended period of time. However, this is typically challenging to achieve. Here are some ideas:<br /><ol><li>Make sure you always pair on the more difficult tasks.</li><li>If you find yourself the only go-to person for certain tasks, make sure you have a pair. And the next time they come your way, delegate.</li><li>Get information out of your head to the team's wiki, especially those questions you are the only one able to answer.<br /></li><li>Instead of signing up for difficult tasks, let others dive in, and support them instead.</li><li>Automate your routine tasks. 
When asked to do something, deliver a do-it-yourself tool instead of, well, doing it.</li><li>Take some time off and see if you get any phone calls :-)<br /></li></ol>Another issue with projects running for a long time is maintaining team history, in terms of lessons learned, how issues were resolved, etc. Let that be the subject of another post.Tarek Abdelmaguidhttp://www.blogger.com/profile/11912000217230377978noreply@blogger.com0