Sparx Systems Forum
Enterprise Architect => General Board => Topic started by: jbaragry on October 31, 2015, 12:12:45 am
-
What are the best techniques for modelling baseline and target architectures?
We're modelling a legacy migration in archimate and want to show how the app landscape will change over time.
Most of the elements will remain the same but the connectors will change as new components replace the legacy box. E.g., in this simple example new components E and F will replace parts of D and the relevant clients (B, C) will change connectors accordingly:
(https://docs.google.com/drawings/d/1Qzudd-BbO6f7CPpHVIxrYmopb4aLPne-mTYAeVVXsBM/pub?w=388&h=159)
If we have everything in the same model then we need many connectors to represent different time periods. If we have the baseline and target in different models then we need to copy all the elements from baseline to target and then manually update the baseline as the target gets implemented.
Are there any other alternatives?
thanks
Jason
-
I am also interested in learning common practices for this scenario.
From my observations, I suspect there is no ideal solution to model this today. One approach would be to have the 'To Be' state modelled in the same project as the current 'As Is' state, where the 'To Be' model does not share elements with the current model.
Over time the current state will change and move towards the 'To Be' state, and one could save baselines of the model as needed.
-
Thanks for the reply.
That is what we ended up doing.
I found one other suggestion on a linkedin forum which recommends a 3rd party MDG tech that creates a profile with built-in time concepts.
E.g., you can then show that certain elements and connectors only become relevant after certain projects are finished, etc.
It was too much overhead for our use case, so we just keep baseline and target models separate and update each manually
(the alternative I mentioned is on a closed linkedin group, but if you google "Best Practices for Baseline and Target architectures" and open the cached version of the linkedin result then you can read the thread - this forum sw barfs if I post the webcache link...)
Jason
-
> I am also interested in learning common practices for this scenario.
> From my observations, I suspect there is no ideal solution to model this today. One approach would be to have the 'To Be' state modelled in the same project as the current 'As Is' state, where the 'To Be' model does not share elements with the current model.
> Over time the current state will change and move towards the 'To Be' state, and one could save baselines of the model as needed.
Exactly. You can support the process by making the as-is and to-be packages (version) controlled and setting up security so that each part is protected against accidental changes. There should be separate groups to model the to-be and to merge into the as-is.
q.
-
You're making some strange assumptions about what a target is. You can have multiple target architectures, and as you travel towards a target it always changes. The target can also be abandoned and replaced by something else.
Thinking that you can just adopt a target state model as your as-is model will always drive error into your as-is model, rendering it useless and/or dangerous.
You should be copying your current state into a separate package to create a target state model. ArchiMate doesn't have a trace relationship to relate the future state back to the current state, so you either just skip this or use an association.
At the end of your project you should be updating the current state with what was actually deployed, not your target state model. Your target state model is a wish, not reality.
-
I think it works best if you use a mixed AS-IS/TO-BE model.
Change your model already if you want something to be changed. The sooner everyone knows about the upcoming change the better.
After all, someone else is also working on a TO-BE situation.
If it so happens that they stumble upon your TO-BE changes, it is in everyone's best interest that they base their changes on the TO-BE situation, and not on the (soon to be outdated) AS-IS situation.
If not, you run the risk of modelling a whole part based on the AS-IS, only to find out, when you are ready to implement the changes, that the AS-IS has changed in the meantime.
The key to getting this working properly is to somehow mark the changes you are making with something like: User, Date, ChangeID, Remark.
That would allow another user to see whether the change is recent, and if so use the ChangeID to figure out if and when the change is/was implemented.
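In EA, one low-tech way to record such markers is a set of tagged values on the changed element. The sketch below is only an illustration of that idea, not an established convention: the tag names follow the list above, and the user name, ChangeID and Remark values are placeholders.
```javascript
!INC Local Scripts.EAConstants-JavaScript

// Illustrative only: stamp the element selected in the Project Browser with
// change metadata as tagged values. Tag names follow the suggestion above;
// the user, ChangeID and Remark values are placeholders.
function setTag(element, name, value)
{
    // Update the tag if it already exists, otherwise create it.
    for (var i = 0; i < element.TaggedValues.Count; i++)
    {
        var tv = element.TaggedValues.GetAt(i);
        if (tv.Name == name)
        {
            tv.Value = value;
            tv.Update();
            return;
        }
    }
    var created = element.TaggedValues.AddNew(name, value);
    created.Update();
}

function main()
{
    var el = Repository.GetTreeSelectedObject();
    if (el == null || el.ObjectType != otElement)
    {
        Session.Output("Select an element in the Project Browser first.");
        return;
    }
    setTag(el, "User",     "jdoe");                    // placeholder user name
    setTag(el, "Date",     new Date().toDateString());
    setTag(el, "ChangeID", "CR-042");                  // made-up change id
    setTag(el, "Remark",   "Re-routed clients B and C to the new component");
    el.TaggedValues.Refresh();
}

main();
```
A model search or diagram legend could then pick these tags up, so another modeller can see at a glance who changed what and when.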
The scenario where you have multiple candidate TO-BE's is to be avoided as much as possible. If you can't avoid it then use a sandbox to model the different alternatives using copies of the actual elements. Try to keep this "sandbox" phase as short as possible as you'll have to re-do everything after deciding on a direction.
I found that often these different alternatives only need to exist during a short sketch phase.
I think you have to forget about the utopian idea of having a perfect AS-IS model that represents the actual reality. Forget the idea of having a perfect model altogether.
I've never come across a model that is perfect and complete!
Your model will never be correct or complete; learn to live with it! It is perfectly OK, as long as it serves your purposes.
Geert
-
I was very brief in my answer since there is no really good answer to such a complex question. It all takes a lot of thinking about how to handle as-is and to-be; a lot more than most people thought, and still think, it would take. Enough material for more than one thesis.
q.
-
Hence version controlling a shared model is a pointless exercise. A model is a latticed data store, not a sequential document. We take regular snapshots and save those.
Securing packages against accidental changes is a different kettle of fish - we use user security at the group level with some degree of granularity.
Paolo
-
I can't count how often I argued against the use of VC here and on SO. So in this case the VC is just a way to lock one of the model parts against changes (the reference part).
q.
-
> I've never come across a model that is perfect and complete!
> Your model will never be correct or complete; learn to live with it! It is perfectly OK, as long as it serves your purposes.
A model is only a partial description of reality. That's why people need to throw away the idea that their to-be model becomes their as-is model. The as-is model should always be drawn from reality.
I also disagree with you completely about not having multiple to-be models. If that is what the business has asked for, that is what you need to do.
-
"All models are wrong! But some are useful..." (George E. P. Box)
There can't be multiple to-be models! There can only be multiple potential to-be models! :)
Niels Bohr said: "Prediction is very difficult, especially about the future."
As Glassboy says, if the business needs potentially variant futures, for example under a sensitivity analysis, then the modelling technology should provide the ability to represent them, without each one tripping over the other's feet.
If it's an as-is model, then it is best obtained by reverse engineering. As I have mentioned here before, the OMG CIM, PIM and PSM, for example, are design models. You need to create what I call the PSI (Platform Specific Implementation) to compare as-built with as-designed.
Paolo
-
Are there any new best practices regarding this topic?
We're building an architecture repository based on TOGAF, but are having some trouble finding the best way to organize the repository (architecture landscape) in the time dimension (as-is, to-be, transition, etc.). Time-aware modelling might be useful, but I'd really like to hear what you consider most practical and feasible.
-
Yes, it's a difficult task dealing with temporal shift in a multidimensional model. A lot of the modelling tool vendors struggle with it, and believe you me I've spoken to quite a few over the last 30+ years. I once implemented a temporal-aware system for a government archives system to track which agency was responsible for which function over the course of time, so they could find archived artifacts. If anyone has followed government changes over the decades they'll know what a challenge it is to find who looked after something like births and deaths 40 years ago. Anyway, the way I handled it for that project was to have each element and relationship carry state and date fields, then filter on those to find what was related to what at a point in time and what was current.
With that in mind, I structure my enterprise model following the usual ArchiMate domains like Motivation, Business, Application, Data, Technology, etc. Under each of those I have catalogs, where I put the elements, and views, where I draw diagrams for current and future states. One idea I'm working through is that each element has a state, for instance proposed, approved, implemented, retired, etc. You can add additional attributes as tagged values, like commissioned date and decommissioned date, to record when it went live or when it was retired.
The diagrams I create usually have the current state and some future vision, with multiple interim states reflecting alternative paths to reach the future state. Sometimes these diagrams are separate, and sometimes, if the changes aren't too drastic, I'll use diagram filters to show the current state moving to the future state. In the reports I create I'll use the state and dates to filter out what I need to reflect the current, interim or future states. This works most of the time for me, except when the links change without a node change. That kind of needs some state on the link to indicate whether it's current or future, and that's the part I'm thinking about at present. But hey, actual work gets in the way of fixing up your tool set, doesn't it?
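To make the filtering idea above concrete, here is a minimal EA JavaScript sketch that lists which elements of a catalog package are "current" at a given date. It assumes the state and dates are stored as tagged values named State, CommissionedDate and DecommissionedDate; those names, the date format and the package GUID are illustrative, not part of any standard profile.
```javascript
!INC Local Scripts.EAConstants-JavaScript

// Sketch: list catalog elements that are "current" on a given date, based on
// assumed tagged values "State", "CommissionedDate" and "DecommissionedDate".
function tagValue(element, tagName)
{
    for (var i = 0; i < element.TaggedValues.Count; i++)
    {
        var tv = element.TaggedValues.GetAt(i);
        if (tv.Name == tagName)
            return tv.Value;
    }
    return "";
}

function main()
{
    var asOf = new Date("2016-01-01");                            // report date
    var catalog = Repository.GetPackageByGuid("{CATALOG-GUID}");  // placeholder GUID

    for (var i = 0; i < catalog.Elements.Count; i++)
    {
        var el    = catalog.Elements.GetAt(i);
        var state = tagValue(el, "State");
        var from  = tagValue(el, "CommissionedDate");
        var to    = tagValue(el, "DecommissionedDate");

        // Current = implemented, commissioned on or before the report date,
        // and not yet decommissioned at that date.
        var current = (state == "implemented")
            && (from == "" || new Date(from) <= asOf)
            && (to == ""   || new Date(to)   >  asOf);

        if (current)
            Session.Output(el.Name + " is current as of " + asOf.toDateString());
    }
}

main();
```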
So how is my model structured, you may ask? Well, something like this:
Business
    Catalogs - where the elements are stored and shared.
        Events
        Processes
        Roles
        Actors
    Views - where the diagrams are created
        Current State
            Business Process
            Finance
            Human Resources
            Information Technology
            etc
        Target State
            Business Process
            Finance
            Human Resources
            Information Technology
            etc
Application
    ...
Data
    ...
As Paolo quoted earlier, "All models are wrong, but some are useful" (George E. P. Box). This works for me most of the time, but not always, and it's certainly not perfect. One pain is that elements are by default created in the same package as the diagram, so I've got to shift them into the catalog package all the time. (Thinking about that, maybe I should write a script to automate it.)
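Such a script could be fairly short. The sketch below is only one possible way to do it: it re-parents everything found directly under a "views" package into the corresponding catalog package by reassigning PackageID, and both GUIDs are placeholders.
```javascript
!INC Local Scripts.EAConstants-JavaScript

// Sketch: move elements that were created alongside diagrams in a "views"
// package into the catalog package. Both GUIDs are placeholders.
function main()
{
    var viewsPkg   = Repository.GetPackageByGuid("{VIEWS-PACKAGE-GUID}");
    var catalogPkg = Repository.GetPackageByGuid("{CATALOG-PACKAGE-GUID}");

    for (var i = 0; i < viewsPkg.Elements.Count; i++)
    {
        var el = viewsPkg.Elements.GetAt(i);
        el.PackageID = catalogPkg.PackageID;   // re-parent the element
        el.Update();
        Session.Output("Moved " + el.Name + " to " + catalogPkg.Name);
    }

    viewsPkg.Elements.Refresh();
    catalogPkg.Elements.Refresh();
    Repository.RefreshModelView(catalogPkg.PackageID);
}

main();
```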
Note that to be able to do this successfully you really need to create your own MDG, and not use the out-of-the-box ArchiMate MDG (or whatever notation you use), as you need to add those extra attributes and additional behaviors using shape scripts, JavaScript, etc. It's a hard slog doing your first EA MDG, but the rewards and additional benefits do come with it. I found you really need a toolsmith on the team to tweak the modelling tool to do what you need, and I usually end up being the toolsmith.
One day I might fix up my EA MDG and share it, but it's not quite in a state for general consumption as it's too specific to my organisation at present.
-
To add to Sunshine's response, we HAVE implemented mechanisms (direct SQL Queries) to move objects into standardised locations (depending upon their context).
We HAVE implemented our own MDG to add tagged values and also to add additional metatypes (both vertices and arcs).
We have implemented a special "Transition" relationship to allow us to indicate which items and arcs transition from baseline to target (this allows us to ensure that we have "covered all the necessary bases").
Using the transition relationship we create "state" diagrams for each plateau (and/or project stage) and transition diagrams for the transitions between each plateau. The state diagrams show commissioned/decommissioned/changed/unchanged items and arcs with respect to that state. We use user-specific diagram properties to show the Transition type, Timing and Scope for an element or arc (well, not arcs, since EA doesn't YET??? support user-specific diagram properties for arcs).
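For what it's worth, a relationship like that can also be created programmatically. The fragment below is a rough sketch, not Paolo's actual implementation: it assumes the «Transition» stereotype comes from a custom MDG, uses a plain Association as the underlying connector type, and the element GUIDs are placeholders.
```javascript
!INC Local Scripts.EAConstants-JavaScript

// Sketch: relate a baseline element to its target-state counterpart with a
// custom «Transition» relationship. Connector type and GUIDs are assumptions.
function main()
{
    var baseline = Repository.GetElementByGuid("{BASELINE-ELEMENT-GUID}");
    var target   = Repository.GetElementByGuid("{TARGET-ELEMENT-GUID}");

    var con = baseline.Connectors.AddNew("", "Association");
    con.SupplierID = target.ElementID;   // direction: baseline -> target
    con.Stereotype = "Transition";       // custom stereotype from the MDG
    con.Update();
    baseline.Connectors.Refresh();

    Session.Output(baseline.Name + " --Transition--> " + target.Name);
}

main();
```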
HTH,
Paolo
-
I suggest that you have a look at the Time Aware modelling concept recently introduced (EA13 I think but I stand to be corrected). I have used it to track changes to requirements not Archimate elements so you'd need to see if it works for you the way you expect to make changes. Time Aware models certainly save you copying everything over as they use your existing model except for where you introduce changes.
-
We also structure our enterprise model in packages based on the ArchiMate domains, which contain the various catalogue elements; in addition we have a "Views" package which primarily contains diagrams. We would like to have the matrices represented as elements in the Views package as well, but as far as I know this is not possible in Sparx EA.
I've encountered a white paper (W174) from the Open Group which addresses this issue as well:
One of the most challenging aspects of a well-run repository is managing transitions over time. In most simple terms, every architecture will exist in up to four states. The current state is what exists in the Enterprise today; this baseline provides the reference for all change. The target state is what stakeholders have approved; this state provides the reference for governing all change activity. Transition states are partially realized targets between the current state and the target state. The candidate state is what has been developed by the EA team but has not been approved for a status sufficient to govern change.
I consider this a reasonable way to structure our model, meaning we'll have something like this:
- Clinical Architecture
  - Clinical Candidate
    - Business
      - Business Process
      - Business Role
    - Information
    - Application
    - Technology
    - Views
      - Workflow Diagram
      - Application Communication Diagram
  - Clinical Current
    - Business
      - Business Process
      - Business Role
    - Information
    - Application
    - Technology
    - Views
  - Clinical Target
    - Business
      - Business Process
      - Business Role
    - Information
    - Application
    - Technology
    - Views
  - Clinical Transition - Project X
    - Business
      - Business Process
      - Business Role
    - Information
    - Application
    - Technology
    - Views
Do you only consider it necessary to track different architecture states for views/diagrams?
In the structure proposed above we would have states for the different catalogues (and thus the elements) as well. The transition relationship which Paolo mentions might perhaps eliminate the need to do this?
-
> Are there any new best practices regarding this topic?
> We're building an architecture repository based on TOGAF, but are having some trouble finding the best way to organize the repository (architecture landscape) in the time dimension (as-is, to-be, transition, etc.). Time-aware modelling might be useful, but I'd really like to hear what you consider most practical and feasible.
Google TOGAF CONTENT METAMODEL
-
The TOGAF Content Metamodel says nothing about how to structure a repository for the different states of an architecture. The Architecture Landscape chapter does, but it's a challenge to find a good way to realize this in Sparx EA.
We have partially solved this for now by having separate packages per state (as-is, to-be, candidate) for diagrams/views, and a single package, regardless of state, for the elements (which would be similar to the TOGAF Content Metamodel). This means the same element can be used in diagrams describing different states, which means we have to find a way to describe which state the connectors (relationships) are in. We also have to make sure different people can work with the same element without getting into conflict with each other.
Another option is to make an element catalogue for each state, but this would probably require too much effort to keep updated. Or one might export the elements a project wants to use to make sure they're not in conflict with someone else's, and then import them back when the architecture has been developed. The GUIDs of the elements will make sure the exported elements are merged back into the existing ones.
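On the open question of describing which state a connector belongs to: one possible (untested here) workaround is to put a state on the relationship itself as a connector tagged value and filter diagrams or searches on it. The tag name and state values below are purely illustrative.
```javascript
!INC Local Scripts.EAConstants-JavaScript

// Sketch: stamp every connector attached to a shared element with an assumed
// "ArchitectureState" tagged value (e.g. "as-is", "to-be", "candidate").
// Note: this naive version adds a new tag on every run; a real script would
// update an existing tag instead (see the earlier setTag example).
function stampConnectors(element, state)
{
    for (var i = 0; i < element.Connectors.Count; i++)
    {
        var con = element.Connectors.GetAt(i);
        var tag = con.TaggedValues.AddNew("ArchitectureState", state);
        tag.Update();
        Session.Output(con.ConnectorGUID + " marked as " + state);
    }
}

function main()
{
    var el = Repository.GetElementByGuid("{SHARED-ELEMENT-GUID}");  // placeholder
    stampConnectors(el, "to-be");
}

main();
```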