The Corepula Method, a data modeling methodology that helps data models evolve and stay fluid
Press Release (ePRNews.com) - NEW YORK - Aug 01, 2017 - The Corepula Method is a data modeling methodology that helps data models evolve and withstand changing business requirements.
As a business evolves, its tactics and strategies change. The result is a steady stream of business requirements that are often in direct conflict with those established only a few weeks earlier. What complicates matters is that rapidly changing business requirements can break existing data models and thrust IT teams into a downward spiral. Data models are especially important because they serve as any application’s backbone. What methodology should we follow to create data models that are less brittle, less susceptible to change, and at the same time more fluid and dynamic?
The Corepula Method is deeply rooted in ideas championed by Chris Date and Bill Inmon. It provides a set of guidelines for designing temporalized database schemas that can support diverse requirements such as Enterprise Data Warehouse (EDW) and Master Data Management (MDM). The Corepula Method is based on the notion that most, but not all, data will undergo some type of change. Static pieces of information, those the business has defined as unchangeable, should be collected and stored separately. The Corepula Method therefore splits attributes into static and non-static groups and applies Sixth Normal Form (6NF) principles to the non-static group. The resulting schema follows a hyper-normalized design pattern that helps absorb and amortize schema changes. Database schemas based on the Corepula Method enjoy the following overall benefits:
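As an illustrative sketch only (the table and attribute names below are hypothetical, and the method's actual schema conventions are documented in its whitepaper), the static/non-static split with a 6NF-style table per non-static attribute might look like this in Python with SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Static attributes (defined by the business as unchangeable) live in one
# narrow "core" table keyed by a surrogate identifier.
cur.execute("""
    CREATE TABLE customer_core (
        customer_id INTEGER PRIMARY KEY,
        date_of_birth TEXT  -- assumed static for this example
    )""")

# Each non-static attribute gets its own 6NF-style table: key, value,
# and a load timestamp, so every change becomes a new row instead of an
# update to an existing one.
cur.execute("""
    CREATE TABLE customer_email (
        customer_id INTEGER,
        email TEXT,
        loaded_at TEXT,
        PRIMARY KEY (customer_id, loaded_at)
    )""")

cur.execute("INSERT INTO customer_core VALUES (1, '1980-05-01')")
cur.execute("INSERT INTO customer_email VALUES (1, 'a@old.com', '2017-01-01')")
cur.execute("INSERT INTO customer_email VALUES (1, 'a@new.com', '2017-06-01')")

# The current value of a non-static attribute is simply the latest row
# per key; earlier rows remain as history.
row = cur.execute("""
    SELECT email FROM customer_email
    WHERE customer_id = 1
    ORDER BY loaded_at DESC LIMIT 1""").fetchone()
print(row[0])  # a@new.com
```

A semantic change, such as adding a new non-static attribute, then becomes a new narrow table rather than an alteration of an existing wide one, which is what allows the schema to absorb requirement changes.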
· The ability to model enterprise data layers in a way that is conducive to change. Semantic changes, driven by constantly shifting business requirements, can be incorporated into already existing models with relative ease, thus improving the turnaround time for IT deliverables.
· An overall reduction in project costs. Database schemas can be proactively managed and extended well into the future. Less rework is required, allowing IT professionals to focus on code optimization instead of constant schema redesign.
· Well-defined data modeling building blocks that produce predictable, robust, and consistent database schemas. This predictability simplifies construction of the Extract, Transform, Load (ETL) layer, making it cheaper to build and maintain. Because the schemas share a similar structure and access pattern, ETL templates can be built once and reused multiple times throughout a project, a “build once, reuse multiple times” design pattern.
· Indexing and partitioning scheme guidelines. Because the method produces well-defined, predictable sets of database tables, recommended indexing and partitioning schemes already exist. Physical schema developers must adhere closely to the indexing and partitioning guidelines to produce highly tuned and optimized database systems.
· Natural treatment of NULL attribute values. Unless business requirements explicitly state otherwise, a piece of data that is NULL (unknown) in the source must be stored as NULL in the method’s schema. Integration specialists should not invent data: for auditing purposes, a NULL value in the source system must match a NULL value in the method’s schema.
· An insert-only data loading paradigm. Data updates and deletions are not allowed. Insert-only data loading creates a high-octane, extremely efficient ETL process that loads data in parallel.
· Highly efficient storage and complete removal of data redundancy. Storage costs are declining, but are still a substantial part of any IT budget. There is also another, more subtle benefit to the Corepula Method. Some Database Management System (DBMS) vendors are now incorporating raw data size into underlying database licensing costs. By eliminating data redundancies, businesses can potentially save money.
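The insert-only loading and NULL-preservation points above can be sketched in a few lines of Python. This is a toy, in-memory illustration under assumed names (`email_rows`, `load_email`, `email_as_of` are all hypothetical), not the method's prescribed implementation:

```python
# A hypothetical insert-only attribute store: every change arrives as a
# new row; nothing is ever updated or deleted in place.
email_rows = []  # (customer_id, email, loaded_at)

def load_email(customer_id, email, loaded_at):
    # Appends never touch existing rows, so loads can run in parallel
    # without contending for locks on prior data.
    email_rows.append((customer_id, email, loaded_at))

# NULL in the source stays NULL in the store: no default is invented.
load_email(1, None, "2017-01-01T00:00:00Z")
# A "change" to the attribute is just another insert.
load_email(1, "a@new.com", "2017-06-01T00:00:00Z")

def email_as_of(customer_id, as_of):
    # The latest row at or before the given time wins; the full history
    # stays in place for auditing.
    rows = [r for r in email_rows if r[0] == customer_id and r[2] <= as_of]
    return max(rows, key=lambda r: r[2])[1] if rows else None

assert email_as_of(1, "2017-03-01T00:00:00Z") is None        # source NULL matches
assert email_as_of(1, "2017-12-31T00:00:00Z") == "a@new.com"  # latest insert wins
```

Because each load is a pure append, an audit can always replay what the source looked like at any point in time, which is the property the insert-only paradigm is buying.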
The Corepula Method’s modeling guidelines explain how to:
· Build EDWs.
· Use standard naming conventions during the design phase to keep data modeling object names consistent across the enterprise.
· Apply method-specific coloring conventions to improve users’ understanding of the resulting data models. Coloring improves comprehension and helps ferret out logical bugs. By using the recommended coloring scheme, data modelers create impactful data presentations and models that are both focused and lucid.
What makes the Corepula Method so effective is its combination of method-specific design principles, coloring conventions, and naming standards, which together create data modeling solutions that are clear to both business and IT groups. The Corepula Method relies on familiar Entity Relationship (ER) principles and can be used with any data modeling notation and tool of your choice. You don’t need to purchase specific templates to identify method-based modeling objects. The data modeling diagrams typically rely on the widely adopted Information Engineering (IE) notation, which can easily be ported to the notation of your choice (such as Barker or IDEF1X). A wide array of commercial and non-commercial data modeling tools can vary an entity’s background color, a feature that is especially handy with the Corepula Method’s modeling solutions: the hyper-normalization approach increases the number of underlying data objects that modelers must manage, and the color scheme helps ferret out design errors and improves each model’s expressiveness.
If you would like to read more about the Corepula Method data modeling methodology, please visit our website at www.corepulamethod.com and download the data modeling whitepaper.

Source: Corepula Method