


Messages - Modesto Vega

1
Thank you Eve and Geert, scripts corrected, including adding the right value to stereotypeEx.

2
An update on this.

The modified script has indeed added ArchiMate3::DataObject to the stereotype, but it is only visible through the Properties dialog; it is not visible in the Properties pane. Furthermore, the Stereotype field is not populated when exporting the data.
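
For anyone hitting the same symptom, a quick way to see where the value actually landed is to dump the relevant fields of the element. This is only a minimal sketch, assuming a JavaScript script run from inside EA; the helper name and the use of the current browser selection are mine, while the properties are the ones discussed in this thread.

Code:
function dumpStereotypeFields(element) {
    // Print what EA has actually stored on the element, to see where
    // ArchiMate3::DataObject ended up.
    Session.Output("Name:         " + element.Name);
    Session.Output("Type:         " + element.Type);
    Session.Output("Stereotype:   " + element.Stereotype);
    Session.Output("StereotypeEx: " + element.StereotypeEx);
    Session.Output("MetaType:     " + element.MetaType);
}

// Run it against the element currently selected in the browser.
var selected = Repository.GetTreeSelectedObject();
if (selected != null && selected.ObjectType == 4 /* otElement */) {
    dumpStereotypeFields(selected);
}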

3
“I see you’ve fixed the issue!” I said.  “No, we turned off the speaker,” came the reply.

It depends on your intent: to fix the issue or the symptom.   ;)
You got me laughing.

It is more about suppressing the symptom than fixing it. Something that is way too common nowadays.

Let's bring this thread back on track.

The Schema Composer has a Schema Set option that always defaults to Common Information Model (CIM), for both New Schema and New Model Transform. Let's assume that we want to do this properly and create a CIM. Is there any way to specify the package containing the CIM that the Schema Composer should use? I have used this functionality before but I can no longer find it.

P.S.: I know Geert has a really nice add-in but we may not be able to deploy it to our work laptops.

4
I asked Copilot to write me a JavaScript script to convert a class element to an ArchiMate3 data object, and it generated the function below.

Not only did it not work, it also tried a number of things I was not expecting, specifically:
  1. element.Stereotype = "ArchiMate3::DataObject";
  2. element.StereotypeEx = "ArchiMate3::DataObject";
  3. element.MetaType = "ArchiMate3::DataObject";
  4. Repository.AdviseElementChange(element.ElementID);

The only way to get this to work was to use just 1 and comment out 2 and 3. I also don't understand the purpose of 3 and 4.

It has been a long week and I don't think I have the bandwidth to understand this; perhaps somebody can shed some light on why the AI would add 3 and 4. 2 could just be a case of dumb AI.

Code:
function convertToArchiMateDataObject(element) {

    Session.Output("   🔄 Converting: " + element.Name);

    //
    // STEP 1 — Clear stereotypes completely
    //
    element.Stereotype = "";
    element.StereotypeEx = "";
    element.Update();


    //
    // STEP 2 — Apply ArchiMate Data Object stereotype correctly
    //
    // EA requires this exact format: <MDG>::<Stereotype>
    //
    element.Stereotype = "ArchiMate3::DataObject";
    element.StereotypeEx = "ArchiMate3::DataObject";

    //
    // STEP 3 — Force MDG meta-type (important!)
    //
    element.MetaType = "ArchiMate3::DataObject";


    //
    // STEP 4 — Save and refresh element
    //
    element.Update();
    Repository.AdviseElementChange(element.ElementID);

    Session.Output("      ✔ Converted to ArchiMate3::DataObject");
}
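
For comparison, here is a minimal sketch of the variant described above: only item 1 is kept, items 2 and 3 are removed, and item 4 is left in since only 2 and 3 were commented out. The function name is mine; everything else comes from the generated script.

Code:
function convertToDataObjectMinimal(element) {
    Session.Output("Converting: " + element.Name);

    // Item 1: the only assignment needed here, using the fully
    // qualified <MDG>::<Stereotype> form.
    element.Stereotype = "ArchiMate3::DataObject";
    element.Update();

    // Item 4: tells EA the element has changed so that open diagrams
    // and windows showing it are refreshed.
    Repository.AdviseElementChange(element.ElementID);

    Session.Output("Converted: " + element.Name);
}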

5
I don’t disagree with any of that but it depends on the context, like almost everything.

If XML is used for systems to interoperate directly via an API or messaging, I would expect XSDs and wouldn't expect to have to reverse engineer an API payload or message. That said, I have now seen plenty of contractless APIs without XSDs or with minimal ones.

But XML is used for many other things, including full data extracts large enough to cover most possible combinations. In this case, not having the functionality to infer/reverse engineer an XSD does not make our jobs easier. And yes, I know other tools can be bought, but that also complicates our jobs.

6
What baffles me most about this thread is not the potential loss of some functionality that is achievable with other tools.

What baffles me most is reading that people nowadays have an issue with a key cornerstone of modern science: inference.

Inference also used to be a key cornerstone of all the technical work I used to do, very often starting with requirements.
If the inferences were wrong, they got revised and tested again.

In the age of AI, I'd rather reverse engineer/infer an XSD based on a large and representative XML file than second-guess a development team.

Let the data tell me a story, instead of forcing the data to tell me the story I want.

7
I am relatively certain that I have done this with an earlier version of Sparx EA, many years ago, when features did not change and you knew what you were doing or what you needed to do. Please accept my apologies for the slight rant.

I know it is guesswork, but XML has become guesswork; there aren't many XSDs published.

I also know there are many tools out there, but they tend to cost money, and if they are online there is always a data privacy/sensitivity issue.

8
Perhaps I have dreamt it or forgotten how to do it. From memory, with previous versions of Sparx EA - i.e., before v16 - it was possible to reverse engineer an XML file and get a decent XSD.

With v16 this doesn't seem to be very straightforward. Importing into the Schema Composer seems to be the only plausible route, but CIM is the only option and a reference package is always needed.

What have I forgotten? Did I dream it?

9
Since we gave away the notion of the browser indicating any form of holonymy, we basically haven’t had that problem.  Only items that require referential nesting are nested.
The issue that I have always faced with Sparx EA, especially with enterprise-level or multi-project repositories, is that I have never managed to dispel the notion, for me and some of my colleagues, that the browser indicates a semantic relationship between the whole and the part, irrespective of whether there is referential nesting or not. The browser looks like a folder structure, is used like a folder structure, and it is way too easy to create a very deep folder structure.

All other items are in a "flat" structure (by type).  The diagrams are in a separate branch of the repository and can be structured however you like.  Consequently, we don't have as much of a problem as you do.
I have used Sparx EA like that, but it is very cumbersome and leads to duplication. This is because, when creating an element from a diagram, Sparx EA always places the element in the package containing the diagram. As a result, we switched to having elements and diagrams of a similar type in dedicated packages, but this still does not solve the duplication problem.

Ideally, I would like to restrict the use of the browser to advanced users and use views for most people contributing to a model. Of course, the way Sparx Systems has implemented views does not help, including the capability of creating (non-dynamic) views under the root node. TBH, I have never understood what views are and how they work, other than as another way of referring to a folder.

10
Hi Mauricio,

The PackagingComponent is one of those elements I have previously described in the forum as having a dual personality - i.e., it is an element, but it is also a package. It exists twice in the repository.

With previous versions of Sparx EA (v13 to v16), we tried using it as a component or, to be more specific, as a way to model an Information System Service or a complex application. We gave up because of a combination of issues importing and exporting data and limitations in how this element is rendered by Sparx EA when used in diagrams.

11
I take a very rigorous approach to nomenclature.  I reserve the term “nesting” for physical nesting, in which the identity of an item depends on its holonym.  I use the term “Visual Embedding” to describe the ability to move an item into or out of (just as important) another item on a diagram.
I know, and I was aware I could get in trouble by using a less rigorous approach to nomenclature. But I will blame the forum's poor search engine for not allowing me to quickly find one of the posts where we discussed this in the past. I am perfectly happy with "nesting" meaning physical nesting, and with using "visual embedding" to describe the ability to move an item into or out of another item on a diagram. Having said this, Sparx EA, at times, doesn't do "visual embedding" very gracefully.

Also, "nesting" is really a form of (de-)composition but, AFAIK, there is no relationship for it in any modelling language I am familiar with. Is it not the same to represent "car" as a group of interrelated elements - e.g., engine, wheels, chassis, body, and so on - or as one element with (physically) nested elements?

As Guillaume and I have shown, no redesign of the browser is required; all that is required is to redesign the concept in our brains.  It is easy enough to implement.
Just to clarify my point, I know you and Guillaume have proven you can do this with an MDG. Actually, I have done it myself. My point is twofold:
1 - From previous interactions with Sparx Systems through the support desk, and with Eve and other people in this forum, I got the impression that changing the way the Grouping element works is considered a very significant change that is not very high up in their plans.
2 - From a usability point of view, the package browser, which is essentially a physically nested structure (almost an extended folder structure), often makes collaborative work on a single repository difficult and leads to unwanted data duplication. The collaboration problem would be easier to solve if a view, a Sparx EA view, could be constructed showing how elements are, or could be, visually embedded. If the browser could be hidden for certain users, this view could be the main entry point for any work done by some of the users we typically collaborate with. After all, some relationships lend themselves naturally to visual embedding: specialisation, aggregation, and composition.



12
I now follow a similar approach to Geert, with possibly one exception I am still thinking about.

I typically have the following:

1) A conceptual model, using UML class diagrams, without attributes. Depending on the size of the MDG, I will use multiple diagrams.
2) A logical model, again using UML class diagrams, with attributes. I use as many diagrams as I used in the conceptual model.
3) The EA profile used to generate the MDG.
4) And, often but not always, a poster leveraging the conceptual model, typically breaking down the model into, for example, aspects or layers.

I always keep all the above in the same repository or branch of the repository.

The exception I have not worked out yet is how well the above works for an MDG based on, for example, the Sparx EA implementation of ArchiMate. The main reason is that I don't want to get caught in a reverse translation exercise.

13
We can indeed use Guillaume's solution; I think I used it once in the past.

The Standard says: (4.5.1 Grouping)
“The grouping element is used to aggregate or compose an arbitrary group of concepts, which can
be elements and/or relationships of the same or of different types.”
As noted by Paolo, in much more sophisticated language, the issue with the way this is implemented is that, from memory, it is not possible to draw aggregations and compositions between a grouping and the elements it groups.

This, of course, brings to the forefront the way the browser was designed and implemented, and one of Paolo's favourite subjects: visual nesting vs physical nesting (Paolo, sorry for paraphrasing).

The issue is that the browser enforces physical nesting, where an element can exist only in one package or as a composite of another element - i.e., the element can have only one parent element (package or standard element). This means the same version of an element can only exist once in the browser. I suspect Sparx Systems sees implementing the ArchiMate grouping, which could be argued to be a form of visual nesting (not physical nesting), as a complete redesign of both the browser and the underlying data model; something they may not want to undertake.

14
A key challenge in the Sparx Systems implementation of both the UML Package and the ArchiMate Grouping element is the conflation of an element's type with its role as a container. This design choice, while pragmatic in some respects, restricts the ability to create relationships between the container itself and other "proper" elements, fundamentally complicating modelling efforts such as the one described by the OP.

15
I've had the same experience. It's a great timesaver, if you know what the result should look like.

If you have no clue, it might be very time consuming to figure out where exactly it goes astray.

Geert
This seems to be the problem with all AI tools: if you know what you are doing or what you are expecting, they can be a good productivity tool. If you don't, they just generate rubbish.
