As Michael pointed out, simple Alter is not viable if data is to be preserved. I think most users would want the alter DDL statements to preserve data in the target DB.
No, that's not true; at least not in my experience. I've worked with Oracle's Designer tool to build dozens of DB schemas, and that tool never assumed 'preserve data' when generating alter DDL statements.
In my experience, when developing applications and their back-end DB schemas, you want complete flexibility to revise the schema design as you go.
We either had a set of manual 'load lookup table' and 'load test data' scripts, or used an off-the-shelf 'test data generator' utility.
In the first case, we manually updated those scripts as we revised the schema. In the second case, we simply re-ran the 'test data generator' utility, which could read the updated schema and adjust its output accordingly.
We always had a 'drop all' or 'truncate all' script that we ran between revisions to the schema objects, and then we'd re-run whatever scripts we had on hand to pre-populate (or, as I called it, 'bootstrap') the schema.
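To make that concrete, here's a minimal sketch of the kind of scripts I mean. The table and column names (customers, orders, order_items) are purely hypothetical; a real 'truncate all' has to hit child tables before parents (or disable the FK constraints first) so foreign keys don't block it.

-- truncate_all.sql: wipe the data, children before parents
TRUNCATE TABLE order_items;
TRUNCATE TABLE orders;
TRUNCATE TABLE customers;

-- bootstrap.sql: reload lookup and test data after a schema revision
INSERT INTO customers (customer_id, customer_name) VALUES (1, 'Test Customer');
INSERT INTO orders (order_id, customer_id, order_date) VALUES (100, 1, SYSDATE);
INSERT INTO order_items (order_id, line_no, description) VALUES (100, 1, 'Widget');
COMMIT;

Whether the data came from hand-maintained scripts or a generator, the point is the same: you throw the data away and re-create it, rather than asking the alter DDL to carry it forward.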
At certain milestones, we'd run a fresh 'generate new' instead of an 'alter DDL', to get DDL scripts that create brand-new schema objects. Then the cycle would start anew, until we got to implementation.
---
In the case where you're running an 'alter DDL' against a production DB schema, well, that's a different beast, and you'd want to hand-write the scripts that move data from the old schema objects to the new ones.
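For illustration only, here's roughly what such a hand-written migration might look like; the CUSTOMERS table and its columns are hypothetical, not something the tool would generate for you.

-- Build the new structure alongside the old one
CREATE TABLE customers_new (
    customer_id  NUMBER        PRIMARY KEY,
    full_name    VARCHAR2(200) NOT NULL,
    created_date DATE          DEFAULT SYSDATE
);

-- Move/transform the existing production rows into the new shape
INSERT INTO customers_new (customer_id, full_name, created_date)
SELECT customer_id, first_name || ' ' || last_name, created_date
FROM   customers;
COMMIT;

-- Swap the new table in once the copy is verified
DROP TABLE customers;
RENAME customers_new TO customers;

Only you know how the old columns map onto the new ones, which is why a generated 'preserve data' option tends to be of limited use here anyway.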
So, what I'm saying is (assuming Sparx hasn't already spent time on this feature) users probably won't require a 'preserve data' option when generating the ALTER DDL.
Cheers,
gary