Author Topic: Use Case Metrics  (Read 8583 times)

thomaskilian

  • Guest
Use Case Metrics
« on: January 28, 2004, 02:32:44 am »
Hi there,
anybody with experience in use case metrics? EA currently only offers a stub: you can enter risks, metrics and so on, but with no possibility to evaluate them. Is there a simple way to set up an Excel spreadsheet to evaluate these values?

Cheers,

Thomas

Bruno.Cossi

  • EA User
  • **
  • Posts: 803
  • Karma: +0/-0
    • View Profile
Re: Use Case Metrics
« Reply #1 on: January 28, 2004, 05:19:30 am »
Hi,

I am not sure what you mean by evaluating the values, but the ECF and TCF factors that you enter feed into the Use Case metrics - see the Project > Use Case Metrics window.
If you still want to get the TCF and ECF into Excel, it is easy: just look for the tables t_ecf and t_tcf in your .eap file (open it in MS Access) or in your database repository, and load their contents into Excel.
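As a rough sketch of what such a spreadsheet would compute: the standard Use Case Points formulas derive TCF and ECF from weighted factor ratings. The weights and ratings below are illustrative assumptions, not EA's actual t_tcf / t_ecf contents:

```python
# Sketch: Karner-style Technical (TCF) and Environmental (ECF) complexity
# factors computed from (weight, rating) pairs. All factor values below are
# illustrative assumptions, not EA's actual table contents.

def tcf(factors):
    """TCF = 0.6 + 0.01 * sum(weight * rating)"""
    return 0.6 + 0.01 * sum(w * r for w, r in factors)

def ecf(factors):
    """ECF = 1.4 - 0.03 * sum(weight * rating)"""
    return 1.4 - 0.03 * sum(w * r for w, r in factors)

# (weight, rating 0-5) pairs, e.g. as exported from t_tcf / t_ecf into Excel
technical = [(2.0, 3), (1.0, 5), (1.0, 4), (1.0, 2)]
environmental = [(1.5, 4), (0.5, 2), (1.0, 3)]

print(tcf(technical))      # ~0.77
print(ecf(environmental))  # ~1.1
```

The same two sums are trivial to replicate as Excel formulas once the tables are loaded.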

Hope this helps.

Bruno

thomaskilian

  • Guest
Re: Use Case Metrics
« Reply #2 on: January 28, 2004, 06:32:25 am »
I just wanted to know if anybody has experience in doing so. Project > Use Case Metrics only gives a rough measure and does not cope with risk factors (as far as I can see). Again: does anybody take advantage of the metrics that can be applied to use cases, and if so, how do you work with them?

Bruno.Cossi

  • EA User
  • **
  • Posts: 803
  • Karma: +0/-0
    • View Profile
Re: Use Case Metrics
« Reply #3 on: January 28, 2004, 03:50:13 pm »
I generally apply the risk factors to the final number, not at the Use Case level, as the risk is global. Applying it at the Use Case level would imply that the risk and its potential impact are spread equally, which is not the case.
The major challenge as I see it lies somewhere else, though: the number of hours expected to be spent on building an average Use Case. I find this number to be the most misunderstood one, while being the most crucial one at the same time.
First of all, there is hardly a way of saying which number is correct, since the level at which the Use Cases are defined tends to differ from project to project. Besides that, an increasing number of companies outsource their development while doing the analysis themselves. The time expected to be spent on a Use Case should reflect the fact that the company does the analysis only, not the development.
Overall, the time spent on each Use Case depends on the methodology used - on the number of iterations, their length, and so on. All of this can be quantified (RUP has done that to a certain extent) and applied to the formula.
To summarize: yes, we are using Use Case metrics routinely, albeit slightly differently than suggested in EA, and have had an excellent experience with it.
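Bruno's point about applying risk globally rather than per use case can be sketched as a single multiplier on the final number. All values here are illustrative assumptions, not EA defaults:

```python
# Sketch of applying risk to the final estimate, not per use case.
# All numbers are illustrative assumptions.

use_case_points = 310    # adjusted UCP for the whole model
hours_per_ucp = 20       # hours per use case point, calibrated per organization
risk_multiplier = 1.15   # single global risk factor (e.g. new team, new domain)

base_hours = use_case_points * hours_per_ucp
estimate = base_hours * risk_multiplier
print(estimate)  # ~7130 hours
```

Keeping the risk factor out of the per-use-case weights makes it easy to revise as the project's risk profile changes, without touching the model.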

Bruno

thomaskilian

  • Guest
Re: Use Case Metrics
« Reply #4 on: January 29, 2004, 04:26:06 am »
Bruno,
thanks for your feedback.

I was surprised by the numbers I got the first time I built a use case model from scratch. A couple of guys had worked on a project for quite some time without any concept :-/ It was stopped after much time had been wasted. Later I built a use case model in 2 or 3 days and arrived at an estimate of something above 1000 days of work. This figure was confirmed by people who had conducted a similar project. For the time being I think a rough estimate is better than no estimate ;D However, I'd like to have more accurate numbers in the future. I'll try your approach and apply the risk factors to the global numbers.

Cheers,

Thomas

Bruno.Cossi

  • EA User
  • **
  • Posts: 803
  • Karma: +0/-0
    • View Profile
Re: Use Case Metrics
« Reply #5 on: February 03, 2004, 09:59:20 am »
Hi,

I keep responding as I find time, so I am turning this discussion thread into a series :-)
Back to the Use Case metrics... typically, when you plan a project, you will break it into functional areas (e.g. Order Entry, Billing, etc.). For each of those we create a package in EA.
We do a quick analysis and define the Use Cases in each of the areas (i.e. inside each package).
Then we do the Use Case metrics (I will post more here when I find time) on the package level, not on the Use Case level.
As we estimate the complexity of each Use Case, chances are that some of the Use Cases are underestimated and some overestimated. By doing the metrics at the package level (i.e. on a group of Use Cases), we allow these imprecisions to balance out.
Doing the metrics at the Use Case level instead would of course give the same final result. The problems we find with it are:
- hundreds of the Use Cases make it impossible to keep track of the work
- in the course of the project, complex Use Cases get sometimes broken down into separate Use Cases, which would have impact on the project plan
- metrics at the Use Case level cause the actual results to be imprecise almost every single time (sometimes over, sometimes under), which decreases the level of trust in the project plan.

Bruno

thomaskilian

  • Guest
Re: Use Case Metrics
« Reply #6 on: February 05, 2004, 04:08:20 am »
Bruno,

I wonder if you have any idea on how to make 'usable' estimates for single phases. How do you ensure that all use cases appear in the subsequent phases, and how do you keep them all equally weighted?

Also, do you feed the measures back? In other words: do you keep track of your first estimate, compare it to the real numbers afterwards, and derive a 'use case weight' (hours/use case) for your company? Or, furthermore, do you have multiple weights, say for different customer projects?

Thanks for your feedback :)

Thomas

Bruno.Cossi

  • EA User
  • **
  • Posts: 803
  • Karma: +0/-0
    • View Profile
Re: Use Case Metrics
« Reply #7 on: February 05, 2004, 11:04:42 am »
Thomas,

once again, a brief response, hopefully I will expand on this later.

You have touched on an interesting subject, talking about different phases of the project. This is where all the estimation gets tricky. What stages are there? This is driven by the methodology you use, and of course by the scope of your project.
Some of our work has been purely business analysis and requirements modeling; the development itself was outsourced to an outside company. In theory we might not even have known what technology they were using, what methodology, or what the quality of their resources was, as the development stage would be managed by them. They would do their own estimates, since they were the ones who had to manage their portion of the project.
In that case, the only stages we would estimate for would be the analysis and the testing (both possibly broken down into several stages themselves).
The methodology also makes a big difference - more iterations of work within a shorter period of time, with closer involvement of the stakeholders, tend to produce results more quickly; however, they are more taxing for the project team and require a stronger skill set and possibly more resources.
What I am getting at is that the factors you apply to the Use Case metrics (what you are referring to as the Use Case weight) depend on many things (not very helpful so far, is it? :-) )
Chances are that you are following much the same methodology on most of your projects though, so this might not be a problem for you. You could then apply the same factors to all of your projects of course.

About the second part of the question, yes I do collect the actual information and compare them to the original estimates - after a while, the numbers are surprisingly very close!

Overall, what I like to do at the beginning of a project is to go through the exercise of translating the scope document into a set of Use Cases - i.e. building the Use Case diagrams without any details on the Use Cases. No scenarios, constraints, nothing. Only the Use Cases and, for each of them, their complexity (easy, medium, difficult; occasionally I have used a 5-degree range instead).
This is a relatively short exercise, dependent on the scope of the project of course.
After that I run the Use Case metrics and apply the "Use Case weight" (which I keep adjusting after every project).
This way I can get very good estimates and build very good project plans with very little effort. Best of all, I can justify and defend them to my clients!
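That quick first pass reduces to counting use cases per complexity level and multiplying by an adjustable weight. A minimal sketch, with made-up point weights and an assumed hours-per-point figure:

```python
# Sketch of a quick scope-to-estimate pass: only use cases and their
# complexity, no scenarios. Weights and hours are illustrative assumptions.
from collections import Counter

# one complexity rating per use case, straight off the diagrams
ratings = ["easy", "easy", "medium", "medium", "medium", "difficult"]

points_per_level = {"easy": 5, "medium": 10, "difficult": 15}
hours_per_point = 2.5   # the "use case weight", re-calibrated after every project

counts = Counter(ratings)
total_points = sum(points_per_level[level] * n for level, n in counts.items())
print(total_points)                    # 2*5 + 3*10 + 1*15 = 55
print(total_points * hours_per_point)  # 137.5 hours
```

The calibration loop Bruno describes amounts to adjusting `hours_per_point` after comparing each finished project against its estimate.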

I am enjoying this discussion, it is making me think :-)

Bruno

thomaskilian

  • Guest
Re: Use Case Metrics
« Reply #8 on: February 06, 2004, 03:08:30 am »
Bruno,
thank you very much for your replies. I have learned a great deal and look forward to applying this information in my next projects.

There is probably one more thing of interest: project risks. I guess you also have some kind of methodology for coping with them. Earlier you said that you apply risks globally. Do you have any 'formula' (I know this is a rather silly question)? I have not found a useful way to handle social risks - that is, large teams tend to have more potential for social interference than small teams, suggesting some kind of logarithmic factor.
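Thomas's logarithmic idea could be sketched as a team-size-dependent risk multiplier. The functional form and the coefficient below are assumptions to be tuned against experience, not an established formula:

```python
# Sketch of a social-risk factor that grows roughly logarithmically with
# team size. The coefficient k is a made-up assumption to calibrate.
import math

def social_risk_factor(team_size, k=0.1):
    """1.0 for a one-person team, growing logarithmically with size."""
    return 1.0 + k * math.log(team_size)

for n in (1, 5, 20):
    print(n, round(social_risk_factor(n), 2))
```

The factor would then be multiplied into the global estimate, alongside the other global risk factors.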

Btw.: do you make use of EA's built-in tables, or do you have a separate instrument?

Thomas

jmr

  • EA Novice
  • *
  • Posts: 3
  • Karma: +0/-0
    • View Profile
Re: Use Case Metrics
« Reply #9 on: February 11, 2004, 01:43:18 pm »
Bruno & Thomas,

I find your discussion interesting.  I've only been using EA for 2 months and love it so far.  I recently began trying to use the estimation features and quickly discovered the problem Bruno alluded to about the complexity range for the Use Case elements.

Since the functionality that is captured by a specific Use Case element is totally in the control of the modeller, it seems that a 3-value range of complexity (or a 5-value range when using Extended Complexity) is much too narrow.

Even in the first project that I attempted to estimate, the 11 Use Case elements all had to be considered as having Easy complexity, but it would be far more accurate to be able to assign intermediate values. For example, since all 11 elements were considered easy based on the definitions provided, they are all assigned a value of 5 UUCP. But this interpretation of "easy" is very relative and could produce inaccurate estimates. Based on the element complexity definitions, I would tend to assign these 11 elements UUCP values ranging from 1.5 to 6.5, noting that these values in the algorithm translate directly into development hours.

Does anyone know how to apply a more variable range of Complexity values to Use Case elements?  

If Sparx were to change the field value from a text dropdown of 3 or 5 values to a numerical field with an allowed range of 0.0 to 30.0, this would fix this problem.  It would also allow the modeller to refine these values with experience.

I do realize that one could use the extended range of 5 values and then reduce the Default Hours per UCP to achieve a similar result.  But being able to assign a numerical Complexity value over a continuous range from 0 - 30 would give a more accurate result.
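The workaround jmr mentions can be illustrated by quantizing the desired continuous values onto the nearest of the five extended levels. The desired values below are the ones from the post; the quantization helper is hypothetical:

```python
# Sketch: snapping jmr's desired continuous UUCP values onto the nearest
# value of the 5-level Extended Complexity scale. Illustrative only.

desired = [1.5, 2.0, 3.5, 5.0, 6.5]   # UUCP values jmr would like to assign
levels = [1, 2, 3, 4, 5]              # the extended 5-value scale

def quantize(value, scale):
    """Return the scale value nearest to the desired value."""
    return min(scale, key=lambda lvl: abs(lvl - value))

for v in desired:
    print(v, "->", quantize(v, levels))
```

The rounding error visible here is exactly the loss of accuracy jmr argues a continuous 0-30 field would avoid; reducing Default Hours per UCP only rescales the levels, it does not add resolution between them.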

Would appreciate any comments on this!

thomaskilian

  • Guest
Re: Use Case Metrics
« Reply #10 on: February 12, 2004, 01:56:26 am »
Welcome jmr,
your request seems to make sense - at first glance. But how do you measure a use case's complexity at all? You do it by gut feeling, since you don't have a real measure. If you are going to increase granularity, you have to stick to some kind of calculus (based on experience?). In the end you have to state the number of work-days per use case in advance. Honestly speaking, I'm sometimes lost even distinguishing between the three existing levels (simple, medium, complex). My approach is simple: all use cases are medium at first. Then I decide rather quickly which ones are simple or complex.

In your example with 11 use cases there is obviously a large bias - you could also try a different estimation approach. But as you get to really big scenarios, you'll find the mathematics will do the rest (the law of large numbers).
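Thomas's law-of-large-numbers point is easy to demonstrate with a toy simulation: if each use case's true effort deviates randomly from its estimate, the relative error of the total shrinks as the model grows. The deviation range is an arbitrary assumption:

```python
# Toy simulation: per-use-case estimation errors average out as the number
# of use cases grows. The +/-50% deviation range is an arbitrary assumption.
import random

random.seed(1)

def relative_error(n_use_cases):
    """Relative error of the total when each use case is estimated at 10."""
    estimated = 10 * n_use_cases
    actual = sum(10 * random.uniform(0.5, 1.5) for _ in range(n_use_cases))
    return abs(actual - estimated) / estimated

print(relative_error(10))    # typically several percent
print(relative_error(1000))  # typically well under 1%
```

This is also why small projects like jmr's 11-use-case example are the worst case for coarse complexity levels: there are too few use cases for the rounding errors to cancel.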

Cheers,

Thomas

thomaskilian

  • Guest
Re: Use Case Metrics
« Reply #11 on: February 12, 2004, 05:33:56 am »
While I do not agree with jmr's 30 levels of granularity, I really miss a 'zero level'. Some of my use cases are abstract (meaning they are realized by several similar use cases). Other use cases I just put in the model for completeness. Both types actually have zero complexity - so I would like to vote for this option.

Anyone to agree?

Bruno.Cossi

  • EA User
  • **
  • Posts: 803
  • Karma: +0/-0
    • View Profile
Re: Use Case Metrics
« Reply #12 on: February 12, 2004, 11:37:36 am »
Hi Thomas,

actually, that would help a lot. The ability to exclude a particular Use Case from the Use Case metrics would also be useful in cases where I end up with a Use Case outside of the boundary.

Good idea!

Bruno

Bruno.Cossi

  • EA User
  • **
  • Posts: 803
  • Karma: +0/-0
    • View Profile
Re: Use Case Metrics
« Reply #13 on: February 12, 2004, 08:57:00 pm »
Thomas,

yes, social risks are a bit of a headache and, as hard as I try, I have not found a way to handle them objectively yet... still trying, though! I would be very interested in hearing your experiences and ideas.

I do make use of EA's tables; I try to avoid duplicating information. We have developed a tool that connects to EA's repository and automatically calculates and generates a project plan in MS Project. Afterwards it keeps updating the project plan when changes are made in EA. I will post a version of the tool here within a week, as soon as we finish testing it in more detail... if anyone is interested.

Bruno




Bruno.Cossi

  • EA User
  • **
  • Posts: 803
  • Karma: +0/-0
    • View Profile
Re: Use Case Metrics
« Reply #14 on: February 12, 2004, 08:59:35 pm »
Hi jmr,

I went through the same thought process a while back, but eventually I realized that I was not able to assign the complexity on such a precise scale. Usually I find three levels sufficient (on rare occasions I would like five, but usually not).
I believe that if you have Use Cases ranging in complexity across thirty different levels, then your Use Cases are probably defined incorrectly - some of them at too low a level, some at too high a level.

Bruno
