In today's world, software systems work in tandem with one another. Most of these systems hold huge volumes of data that are shared and used by several applications within the system. For example, a typical Enterprise Resource Planning system will, at a minimum, have a Customer Master, an Item Master, and an Account Master. It has therefore become imperative that data, being of prime importance to the company, is managed with the utmost care and to the highest standards. Without proper data management, a company cannot function smoothly and efficiently.
When critical business data is duplicated and in conflict across systems, it is difficult to establish its reliability or prove where it came from. These challenges are byproducts of poor data governance and can result in productivity loss, higher operating costs, profit erosion and noncompliance.
There are some very well-understood and easily identified master-data items, such as "customer" and "product." In fact, many define master data by simply reciting a commonly agreed upon master-data item list, such as: customer, product, location, employee, and asset. But how you identify elements of data that should be managed by a master-data management system is much more complex and defies such rudimentary definitions. In fact, there is a lot of confusion around what master data is and how it is qualified, necessitating a more comprehensive treatment.
There are basically five types of data in an organisation, each of a different nature, that need to be considered:
Unstructured-This refers to data that does not have any particular structure. It is found in e-mail, white papers like this one, magazine articles, corporate intranet portals, product specifications, marketing collateral, and PDF files.
Transactional-This is data related to sales, deliveries, invoices, trouble tickets, claims, and other monetary and non-monetary interactions.
Metadata-This is data about other data and may reside in a formal repository or in various other forms such as XML documents, report definitions, column descriptions in a database, log files, connections, and configuration files.
Hierarchical-Hierarchical data stores the relationships between other data. It may be stored as part of an accounting system or separately as descriptions of real-world relationships, such as company organizational structures or product lines. Hierarchical data is sometimes considered a super MDM domain, because it is critical to understanding and sometimes discovering the relationships between master data.
Master-Master data are the critical nouns of a business and fall generally into four groupings: people, things, places, and concepts. Further categorizations within those groupings are called subject areas, domain areas, or entity types. For example, within people, there are customer, employee, and salesperson. Within things, there are product, part, store, and asset. Within concepts, there are contract, warranty, and license. Finally, within places, there are office locations and geographic divisions. Some of these domain areas may be further divided. Customer may be further segmented based on incentives and history; a company may have normal customers as well as premiere and executive customers. Product may be further segmented by sector and industry; the requirements, life cycle, and CRUD cycle for a product in the Consumer Packaged Goods (CPG) sector are likely very different from those in the clothing industry. The granularity of domains is essentially determined by the magnitude of differences between the attributes of the entities within them.
Figure 1: Role of MDM in an organisation
Why Should Business Managers Manage Master Data?
Because it is used by multiple applications, an error in master data can cause errors in all the applications that use it. For example, an incorrect address in the customer master might mean orders, bills, and marketing literature are all sent to the wrong address. Similarly, an incorrect price on an item master can be a marketing disaster, and an incorrect account number in an Account Master can lead to huge fines, or even jail time for the CEO - a career-limiting move for the person who made the mistake!
Here is a typical master-data horror story. A credit-card customer moved from 2847 9th Block Siddhartha Extension to 1001 11th Block Siddhartha Extension. The customer changed his billing address immediately but did not receive a bill for several months. One day, he received a threatening phone call from the credit-card billing department, asking why the bill had not been paid. The customer verified that the company had the new address, and the billing department verified that the address on file was 1001 11th Block Siddhartha Extension. The customer asked for a copy of the bill to settle the account. After two more weeks without a bill, he called back and found that the account had been turned over to a collection agency. This time, he found out that even though the address in the customer file was 1001 11th Block Siddhartha Extension, the billing address was 101 11th Block Siddhartha Extension. After a series of phone calls and letters between lawyers, the bill was finally resolved, but the credit-card company had lost a customer for life. In this case, the master copy of the data was accurate, but another copy of it was flawed. Master data must be both correct and consistent.
Even if the master data has no errors, few organizations have just one set of master data. Many companies grow through mergers and acquisitions. Each company you acquire comes with its own customer master, item master, and so forth. This would not be bad if you could simply merge the new master data with your current master data, but unless the company you acquire is in a completely different business in a faraway country, there is a very good chance that some customers and products will appear in both sets of master data - usually with different formats and different database keys. If both companies used the Dun & Bradstreet number or Social Security number as the customer identifier, discovering which customer records refer to the same customer would be straightforward, but that seldom happens. In most cases, customer numbers and part numbers are assigned by the software that creates the master records, so the chances of the same customer or the same product having the same identifier in both databases are pretty remote. Item masters can be even harder to reconcile if equivalent parts are purchased from different vendors under different vendor numbers.
Merging master lists together can be very difficult. The same customer may have different names, customer numbers, addresses, and phone numbers in different databases. For example, William Smith might appear as Bill Smith, Wm. Smith, and William Smithe. Normal database joins and searches will not resolve these differences; a sophisticated tool that understands nicknames, alternate spellings, and typing errors is required. The tool will probably also have to recognize that different name variations can be resolved if they all share the same address or phone number (a minimal matching sketch follows the list below). While creating a clean master list can be a daunting challenge, a common master list brings many benefits to your bottom line:
A single, consolidated bill saves money and improves customer satisfaction.
Sending the same marketing literature to a customer from multiple customer lists wastes money and irritates the customer.
Before you turn a customer account over to a collection agency, it would be good to know if they owe other parts of your company money or, more importantly, that they are another division's biggest customer.
Stocking the same item under different part numbers is not only a waste of money and shelf space, but can potentially lead to artificial shortages.
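The kind of resolution such a matching tool performs can be sketched in a few lines of Python. Everything here is a hypothetical illustration - the nickname table, the threshold, and the sample records - and a real matching engine would use far richer phonetic and probabilistic techniques:

    # Minimal fuzzy-matching sketch for customer records (illustrative only).
    from difflib import SequenceMatcher

    NICKNAMES = {"bill": "william", "wm.": "william", "wm": "william"}

    def normalize(name: str) -> str:
        parts = [NICKNAMES.get(p, p) for p in name.lower().split()]
        return " ".join(parts)

    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

    def likely_same(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
        name_score = similarity(rec_a["name"], rec_b["name"])
        # Corroborating evidence: a shared address or phone number lets a
        # lower name score still count as a match.
        shared = (rec_a["address"] == rec_b["address"]
                  or rec_a["phone"] == rec_b["phone"])
        return name_score >= threshold or (shared and name_score >= 0.6)

    a = {"name": "William Smith", "address": "12 Park Rd", "phone": "555-0101"}
    b = {"name": "Wm. Smith", "address": "12 Park Rd", "phone": "555-0199"}
    print(likely_same(a, b))  # True: nickname resolved, same address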
The recent movements toward SOA and SaaS make Master Data Management a critical issue. For example, if you create a single customer service that communicates through well-defined XML messages, you may think you have defined a single view of your customers. But if the same customer is stored in five databases with three different addresses and four different phone numbers, what will your customer service return? Similarly, if you decide to subscribe to a CRM service provided through SaaS, the service provider will need a list of customers for their database. Which one will you send them?
For all these reasons, maintaining a high-quality, consistent set of master data for your organization is rapidly becoming a necessity. The systems and processes required to maintain this data are known as Master Data Management.
MDM Processes
As discussed earlier, MDM comprises a mixture of business processes, applications, methods, and tools. Figure 2 shows the typical processes involved during the various phases of an MDM program. These processes can be roughly divided into four types:
Identify & Analyse: This process involves identifying the source databases from which the data will be drawn, the systems that will produce the data (data producers), and the systems that will consume it (data consumers). Once all such sources are identified, the data is analysed against the current and future requirements of the system.
Data Governance: This process involves putting in place the policies and procedures necessary for managing the data in an organised manner. It also identifies infrastructure needs, if any, and involves proper modelling of the data so that related records are linked to one another.
Implementation: The next step is implementation of the MDM. It involves selecting the tool, capturing the data, integrating the data with previously obtained values, and synchronizing it as per the current requirements.
Quality: The last step in MDM is a quality check of the data being managed. The complete data set is tested on parameters such as accuracy, completeness, and timeliness. Once quality is ensured, the resulting data is fit for use across the various systems.
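As a rough illustration only, the four phases can be strung together as a pipeline. Everything below - the function bodies, the source names, and the naive de-duplication rule - is a hypothetical sketch, not any product's API:

    # Illustrative skeleton of the four MDM phases as a pipeline.
    def identify_and_analyse(sources):
        """Phase 1: locate producers/consumers and assess their data."""
        return {name: {"producer": True, "consumer": True} for name in sources}

    def govern(profile):
        """Phase 2: record the policies and model that will apply."""
        return {"rules": ["customer_id must be unique"], "model": profile}

    def implement(governance, raw_records):
        """Phase 3: capture, integrate, and synchronize the data."""
        seen, merged = set(), []
        for rec in raw_records:            # (governance rules would apply here)
            if rec["customer_id"] not in seen:   # naive de-duplication
                seen.add(rec["customer_id"])
                merged.append(rec)
        return merged

    def quality_check(records):
        """Phase 4: test completeness before releasing for use."""
        return [r for r in records if all(v is not None for v in r.values())]

    sources = ["crm", "billing", "erp"]
    raw = [{"customer_id": 1, "name": "Asha"}, {"customer_id": 1, "name": "Asha"}]
    master = quality_check(implement(govern(identify_and_analyse(sources)), raw))
    print(master)   # one clean record survives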
Figure 2: Key MDM Processes
Key MDM Products
Table 1 provides details about a few of the familiar Master Data Management products. As you can see, the industry is dominated by big players: IBM with IBM InfoSphere, SAP with its flagship SAP NetWeaver MDM Server, and Oracle with its MDM Suite. Beyond these, products from Talend, Kalido, Informatica, and Tibco are also widely used by many companies because of their Customer, Product, and Organisation domain support.
S.No. | Vendor      | Product                                                       | Domain Support (Main)
1     | IBM         | IBM InfoSphere MDM Server for Product Information Management | Product/Item/Vendor
2     | IBM         | IBM InfoSphere MDM Server for Customer Data                   | Customer
3     | SAP         | SAP NetWeaver MDM Server                                      | Customer, Product
4     | Oracle      | Oracle MDM Suite                                              | Customer, Product
5     | Talend      | Talend MDM (Open Source)                                      | Customer, Product, Employee
6     | Kalido      | Kalido MDM                                                    | Customer, Organisation, Product
7     | Informatica | Siperian Multi-Domain MDM Hub                                 | Customer, Product
8     | Tibco       | Tibco Collaborative Information Manager                       | Customer, Product, Vendor
Table 1: Key MDM products available in the market
Technical Aspects
Master Data Management is a combination of processes and technologies that enable the creation of a single system of record: a single record that provides a set of validated, universally recognized values, derived from the various sources that store similar information, reconciled and stored in a centralized hub to be used as the primary frame of reference by all enterprise users and systems. It feeds complete, consistent and correct data back to applications and databases across the entire business. [1]
As per Oracle's whitepaper on MDM, the ideal information architecture is as shown in Figure 3.
Figure 3: Ideal Information Architecture (Source: Oracle whitepaper)
This architecture unites the operational and analytical sides of the business, making a true single view of the data possible. Derived information from the analytical side of the business is made available to the real-time processes through the operational applications and business-process orchestration tools that run the business.
The key processes for any MDM system, shown in Figure 4, are explained below.
Profile the master data
This is the first step in any MDM implementation. This means that, for each master data business entity to be managed centrally in a master data repository, all existing systems that create or update the master data must be assessed as to their data quality. Deviations from a desired data quality goal must be analyzed.
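As a minimal illustration of what profiling can look like in practice, the sketch below computes a null rate and a format-conformance rate per attribute; the sample rows and the e-mail pattern are hypothetical:

    # Minimal data-profiling sketch: null rate and pattern conformance.
    import re

    rows = [
        {"name": "Asha Rao", "phone": "555-0101", "email": "asha@example.com"},
        {"name": None,       "phone": "n/a",      "email": "bad-address"},
    ]

    EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def profile(rows, column, pattern=None):
        values = [r[column] for r in rows]
        nulls = sum(v is None for v in values)
        conforming = sum(1 for v in values
                         if v is not None and (pattern is None or pattern.match(v)))
        return {"null_rate": nulls / len(values),
                "conformance": conforming / len(values)}

    print(profile(rows, "email", EMAIL))  # {'null_rate': 0.0, 'conformance': 0.5}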
Consolidate the master data into a central repository and link it to all the applications
Without consolidating all the master data attributes, key management capabilities such as the creation of blended records from multiple trusted sources are not possible.
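One common way to create a blended (golden) record is attribute-level survivorship: take each attribute from the most trusted source that supplies a value. A minimal sketch, assuming a hypothetical trust ranking and records:

    # Survivorship sketch: build one blended record by taking each attribute
    # from the most trusted source that supplies a value.
    TRUST = ["crm", "billing", "legacy"]   # most trusted first

    def blend(records_by_source, attributes):
        golden = {}
        for attr in attributes:
            for source in TRUST:
                value = records_by_source.get(source, {}).get(attr)
                if value:                  # first trusted non-empty value wins
                    golden[attr] = value
                    break
        return golden

    records = {
        "crm":     {"name": "William Smith", "phone": ""},
        "billing": {"name": "Wm. Smith",     "phone": "555-0101"},
    }
    print(blend(records, ["name", "phone"]))
    # {'name': 'William Smith', 'phone': '555-0101'}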
Govern the master data. Clean it up and manage it according to business rules
Data governance refers to the operating discipline for managing data and information as a key enterprise asset. This includes organization, processes and tools for establishing and exercising decision rights regarding valuation and management of data.
Share the data
For MDM to be effective, a modern SOA layer is needed to propagate the master data to the applications and expose the master data to the business processes.
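As a toy illustration of such a service layer - here over JSON/HTTP rather than the XML messaging discussed earlier, using only Python's standard library; the endpoint shape and data are hypothetical:

    # Minimal read-only "master data service" sketch.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    GOLDEN = {"1": {"name": "Asha Rao", "phone": "555-0101"}}

    class CustomerHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Expect paths like /customers/1
            parts = self.path.strip("/").split("/")
            record = None
            if len(parts) == 2 and parts[0] == "customers":
                record = GOLDEN.get(parts[1])
            body = json.dumps(record or {"error": "not found"}).encode()
            self.send_response(200 if record else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), CustomerHandler).serve_forever()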
Leverage the master data by supporting business intelligence systems and reporting
MDM creates a single version of the truth about every master data entity. This data feeds all operational and analytical systems across the enterprise. Key insights can be found from the master data store itself.
Figure 4: Key processes in MDM system
MDM Synchronization Approaches
Once the master records are created, there are five primary ways to manage master data throughout a business:
Consolidation
A single, physical instance of master data is created and maintained. Updates to master data are made at the source systems and then transferred back to the central repository.
Figure 5: Consolidation MDM Approach
Registry
All master data resides in its original databases, while a virtual repository of keys is maintained to aid in the synchronization of MDM records across all different information assets. When an update is made to master data at its source, it is harmonized with the associated key and then redistributed among other back-end systems.
Figure 6: Registry MDM Approach
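At its core, a registry is a cross-reference from a global master key to each source system's local key, with the virtual master record assembled on demand. A minimal sketch with hypothetical identifiers:

    # Registry-style cross-reference sketch: master data stays in the source
    # systems; the hub keeps only a key mapping.
    REGISTRY = {
        "MDM-0001": {"crm": "C-789", "billing": "ACCT-4521"},
    }

    SOURCES = {
        "crm":     {"C-789":     {"name": "Asha Rao"}},
        "billing": {"ACCT-4521": {"phone": "555-0101"}},
    }

    def assemble(master_key):
        """Build a virtual master record by following the registry keys."""
        view = {}
        for source, local_key in REGISTRY[master_key].items():
            view.update(SOURCES[source][local_key])
        return view

    print(assemble("MDM-0001"))  # {'name': 'Asha Rao', 'phone': '555-0101'}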
Coexistence
All master data attributes are stored in a central repository, but the master data itself can be generated, stored, and updated either in the MDM database or within the individual applications and back-end systems. Changes, regardless of where they are made, are dynamically disseminated among related sources.
Figure 7: Coexistence MDM Approach
Transaction
Master data is read and written to a central repository in its transactional context, in real time upon event execution. Updates are also managed at the source and propagated to the MDM database.
Figure 8: Transaction MDM Approach
Data Synchronization
Asynchronous harmonization of master data occurs dynamically, in batch, among systems across the entire infrastructure. Updates are recorded at the source, then shared with other sources at pre-determined intervals.
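A minimal sketch of interval-based harmonization, assuming hypothetical timestamps and target systems: collect the rows changed since the last run and push them to every other system.

    # Batch synchronization sketch: at each interval, ship rows changed since
    # the last run to the other systems.
    from datetime import datetime

    last_sync = datetime(2024, 1, 1, 0, 0)

    source_rows = [
        {"id": 1, "name": "Asha Rao", "updated": datetime(2024, 1, 2, 9, 30)},
        {"id": 2, "name": "R. Iyer",  "updated": datetime(2023, 12, 30, 8, 0)},
    ]

    def changed_since(rows, cutoff):
        return [r for r in rows if r["updated"] > cutoff]

    def sync(rows, targets):
        delta = changed_since(rows, last_sync)
        for target in targets:
            target.extend(delta)          # stand-in for a real push/merge
        return len(delta)

    billing, crm = [], []
    print(sync(source_rows, [billing, crm]))  # 1 changed row propagated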
There is no standard approach to master data management. A firm needs to evaluate its unique and specific master data management needs, such as:
Timeliness - the speed at which raw data must be available within the environment
Latency - the desired time for master data to be delivered when requested
Currency - the frequency at which master data should be refreshed
Consistency - the degree to which each application's view is like others across the enterprise, and/or the ability of the environment to return the same results each time data is requested, irrespective of the source queried
Versioning in MDM
Data governance and regulatory compliance are much easier with a complete version history of all changes to the master data. For example, it is often not enough to know what a customer's credit limit is today; you may need to know what his credit limit was three months ago, when the customer was charged a high interest rate for exceeding his limit.
Versioning can be accomplished through the following methods:
Link tables that link rows in a version table with a particular version of the MDM record. Figure 9 illustrates this approach:
Figure 9: Versions with a link table
Add an "Effectivedate" column to each master-data row. When a master-data record is modified, a new row is inserted with the modification date and time in the "Effectivedate" column.
Modify the master record in place and put the old version in a history table. The historical data can be stored on cheaper, slower disks - reducing the overall cost of the system.
The current version is stored, along with a log of the changes made to arrive at it. This method is more complex, but the amount of data that needs to be stored is smaller.
Each of the above-mentioned methods has its own merits and demerits, and an organization should consider factors like cost, the importance of previous versions, and complexity before implementing a particular versioning method.
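As a concrete illustration of the effective-date method, the sketch below uses SQLite from Python's standard library; the table, column names, and values are hypothetical:

    # Effective-date versioning sketch using SQLite.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE customer_master (
        customer_id    INTEGER,
        credit_limit   INTEGER,
        effective_date TEXT)""")

    # Every change inserts a new row instead of updating in place.
    db.execute("INSERT INTO customer_master VALUES (1, 5000, '2023-10-01')")
    db.execute("INSERT INTO customer_master VALUES (1, 8000, '2024-01-15')")

    # "What was the credit limit three months ago?" becomes a point-in-time query.
    row = db.execute("""
        SELECT credit_limit FROM customer_master
        WHERE customer_id = 1 AND effective_date <= '2023-12-01'
        ORDER BY effective_date DESC LIMIT 1""").fetchone()
    print(row[0])   # 5000: the limit in force on 2023-12-01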
Hierarchies
Relationships are a critical part of master data: products are sold by salesmen, employees work for managers, products are made from parts, and so on. MDM hierarchies should be named, discoverable, versioned, governed, and shared. Most hierarchies are implemented as link tables. If the data already contains relationships imported from the source systems, it generally makes sense to leave those relationships alone to maintain fidelity between the MDM hub and the source system. But you may decide to convert them to hierarchies implemented as link tables to take advantage of the hierarchy-management features of the hub, as well as to provide a standard format for hierarchies.
Figure 10: Hierarchy Link Table
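A minimal sketch of a hierarchy held as a link table of (parent, child) pairs, with a simple walk down from a root node; the organizational data is hypothetical:

    # Hierarchy-as-link-table sketch: (parent, child) rows plus a traversal.
    LINKS = [
        ("CEO", "VP Sales"),
        ("VP Sales", "Sales Mgr East"),
        ("VP Sales", "Sales Mgr West"),
    ]

    def children(node):
        return [c for p, c in LINKS if p == node]

    def walk(node, depth=0):
        print("  " * depth + node)
        for child in children(node):
            walk(child, depth + 1)

    walk("CEO")
    # CEO
    #   VP Sales
    #     Sales Mgr East
    #     Sales Mgr West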
Data Quality and MDM
Though the efficiency with which master data is generated and maintained is important, it is the quality of the master data that defines the success of an MDM initiative. To make MDM a success, firms need to incorporate data quality and validation techniques.
Data quality is an issue in MDM because master data is collected from a number of disparate sources. If the source data is bad, the master data generated from it will also be of bad quality. This in turn has a ripple effect throughout the business, as incorrect or outdated information is used in day-to-day operations.
Data quality and validation techniques should also be more than simple tools that scan data and uncover issues. They must use advanced business rules and quality-control techniques to prevent bad data from entering the environment in the first place. This proactive data governance ensures the quality of master data throughout the enterprise.
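A minimal sketch of such a proactive gate: evaluate business rules before a record is accepted into the hub. The rules below are hypothetical examples:

    # Proactive validation sketch: reject a record before it enters the hub
    # if any business rule fails.
    RULES = [
        ("name required",     lambda r: bool(r.get("name"))),
        ("credit limit >= 0", lambda r: r.get("credit_limit", 0) >= 0),
        ("country is ISO-2",  lambda r: len(r.get("country", "")) == 2),
    ]

    def validate(record):
        return [label for label, rule in RULES if not rule(record)]

    good = {"name": "Asha Rao", "credit_limit": 5000, "country": "IN"}
    bad  = {"name": "", "credit_limit": -10, "country": "India"}
    print(validate(good))  # []
    print(validate(bad))   # all three rules fail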
Architecture Overview of MDM Software
This section gives an overview of the high-level architecture of some of the MDM software products in the market. Three vendors are considered: IBM, Oracle, and SAP.
IBM
IBM InfoSphere Master Data Management Server for Product Information Management has a component-based architecture that can consist of a two-tier or three-tier configuration. The InfoSphere MDM Server for PIM components include: core components, integration components, and collaboration components.
With InfoSphere MDM Server for PIM, companies can manage, link, and synchronize item, location, organization, trading-partner, and trade-terms data internally and externally. InfoSphere MDM Server for PIM provides the following PIM solutions:
A flexible, scalable repository that manages and links product, location, trading partner, organization, and terms-of-trade information.
Tools for modeling, capturing, creating, and managing this information with high user productivity and high information quality.
The ability to integrate and synchronize this information with legacy systems, enterprise applications, repositories, and masters.
Business user workflows for supporting multi-department and multi-enterprise business processes.
The ability to exchange and synchronize information externally with business partners.
Figure 11 displays the architecture.
Figure 11: IBM InfoSphere architecture
Oracle
The major layers in the Oracle MDM architecture, shown in figure 12, are:
Oracle Fusion Middleware provides supporting infrastructure.
Application Integration Architecture links MDM data to applications and business processes.
The MDM Applications layer contains all the base pre-built MDM hubs and shared services.
The top layer includes MDM-based solutions for data governance.
Figure 12: Oracle's MDM software
SAP
The heart of the high-performance architecture of SAP NetWeaver MDM is its server. It is based on a database that stores SAP NetWeaver MDM repositories that combine and store master data information on customers, vendors, products, employees, or other objects. This in-memory technology keeps the data of MDM repositories in the main memory of the SAP NetWeaver MDM server, which enables fast searches and quick access to data.
The console, an administrative and data-modeling tool, defines and maintains the structure and properties of the repositories. An import server can automatically import master data from various source systems into the SAP NetWeaver MDM server. The server works with a syndication server to distribute the master data into the target systems that have been defined. The SAP NetWeaver Exchange Infrastructure (SAP NetWeaver XI) component functions as a central hub for data exchange.
SAP NetWeaver MDM supports several access concepts to manage the master data stored in the repositories. These concepts include several rich clients like the SAP NetWeaver MDM data manager, the SAP NetWeaver MDM import manager, the SAP NetWeaver MDM syndicator, and the SAP NetWeaver MDM console. A power user can create complex search queries, perform mass changes, and define validation rules and workflows within a repository.
Figure 13 displays the architecture.
Figure 13: SAP Netweaver MDM Server architecture
MDM in the Financial Sector
Global financial markets have been growing at a rapid pace, fuelled by factors such as advancement in technology, deregulation, increasing wealth and demographic changes that have boosted the demand for various financial products. The industry has witnessed changes in a myriad of ways - introduction of more innovative financial instruments, different mechanisms for trade execution, and regulations that blur lines between industry sub-segments while enforcing stringent fiduciary responsibility. In addition, an emphasis on risk management, market volatility, investor scrutiny and the regulatory environment have increased the pressure to improve performance.
The Banking and Capital Markets industries have some unique challenges to overcome. This is an industry that is facing difficult times, and it is critical for these institutions to optimize their relationships with their customers in order to drive incremental revenues through up-sell and cross-sell opportunities and to reduce costs through higher IT agility.
As a result, financial organizations are at a crossroads. Industry leaders will define success by how they leverage data to make strategic decisions, while those who remain idle will lose competitive advantage. Maintaining data integrity and cleanliness requires a new level of discipline coupled with a business mandate. Data management can no longer be solely the responsibility of IT; the strategic impact crosses all organizational borders. The solution is Master Data Management (MDM). Several financial institutions have started leveraging MDM to augment home-grown CIF systems, in order to achieve greater flexibility and reduce costs.
Why Do We Need a Business Case?
The business case helps us understand the problems encountered in the industry and how various frameworks, models, and best practices can be used to resolve them. The reasons for needing a business case to explain MDM are:
The need to deliver more business value from IT: Today, IT is largely used to maintain competitive parity or simply to run the show; R&D and other new capabilities form a small percentage of total investments. The ideal allocation of the IT budget would be to spend roughly 55% on existing capability and 45% on new capabilities that create value for the business, instead of the current ratio of 70:30.
The business impact of bad data: The Data Warehousing Institute estimates that data quality problems cost U.S. businesses more than $600 billion a year. Yet most executives are oblivious to the data quality issues that are slowly eroding the value of their organizations. Furthermore, the number one reason CRM projects fail to deliver the promised value is poor data quality, which leads to poor user adoption. Users will not use systems that do not give them accurate information, and tend to keep using whichever tool previously helped them do their job.
Quantifying the business value: Now more than ever, organizations are required to demonstrate value from IT investments in order to get the initiatives prioritized against competing ones. The days where IT project decisions were based on Total Cost of Ownership (TCO) are gone. TCO alone cannot justify decisions where the business needs to see the value. In an economy where many initiatives are competing for the same funds, only the most compelling business cases will win. Technologists have to quantify the cost reduction, cost avoidance, and the impact to the top line.
MDM is a journey: A company's Master Data Management program should be an enterprise-wide initiative. However, it is often difficult to start the initiative across the entire enterprise. The key is to embark upon tactical projects that are aligned with an overall enterprise vision for MDM. Pick a starting point with limited scope that proves the technical approach and delivers faster business benefits. For example, the starting point could be mastering customer data from a limited number of systems within the enterprise. This helps put together the technical foundation of the hub and gain experience with limited but controlled data stewardship. Of course, care should be taken that even this limited project brings measurable ROI.
Business Cases
At a basic level, MDM seeks to ensure that an organization does not use multiple (potentially inconsistent) versions of the same master data in different parts of its operations, which can occur in large organizations. There are many business cases that could be illustrated to explain the importance of MDM. The business cases dealt below are specific to financial services industry:
Increasing profitability by aligning service levels to customer value: When service delivery personnel are confined to an account-centric view, they struggle to accurately measure customer value. Operationalizing customer centricity enables service delivery personnel to serve customers rather than accounts, accurately measure customer value, deliver more tailored service, and increase profitable customer loyalty.
Business Problems
Difficulty searching for and reconciling data to identify unique customers because of account-centric systems
Struggling to map each customer to all current accounts, products and services
Accurately measuring the value of each customer based on the profitability of their products and services
Using a slow, error-prone, manual process to piece together information for each customer
Root Cause
The root cause of these business problems is data. Customer and product data is created and updated by thousands of individuals in different lines of business and departments and stored separately in different formats in multiple systems across the institution. The inability to access reconciled and related data within domains and across domains makes it difficult to get a clear picture of customers, product hierarchies, and employees as well as the even more complex relationships between them.
Figure 14: Operational Barrier - Account-Centricity
Business Impact of the Problem
Unfortunately, business teams spend excessive time searching for and manually reconciling data in different formats in multiple systems instead of driving customer loyalty, profits and revenue.
IT Problems
Lack the ability to flexibly scale and quickly adapt to changing business needs
Lack the time, resources and budget to ensure all data is reconciled and related within and across data domains in all systems across the institution
Impeded by a non-scalable point-to-point integration environment and inflexible data stores
Solution
Operationalize customer centricity while maximizing investments in existing applications by implementing the Customer Centricity Solution, with the proven and flexible MDM at the core. Empower the Service Delivery team with desktop access to Siperian's Extended Customer View, which provides visibility into reconciled and related data within and across domains including:
Customer data
Product data and services data
Company data
Employee data
Figure 15: The extended customer view
Results
With desktop access to an Extended Customer View, the Service Delivery team can serve customers, not accounts, accurately measure customer value, deliver tailored services, and increase profits and customer loyalty by:
Easily identifying unique customers with a Single Customer View
Mapping customers to all of their products and services
Assigning product and service profitability metrics to each customer
Accurately measuring customer value
Gaining new insight into customer value based on valuable family, business and employee relationships
Aligning service levels with an accurate measure of customer value
Managing Risk: Leverage reconciled and related data to more accurately manage your current investments, the risks associated with those investments, and the outcomes of potential actions. Gain a clearer view of your holdings and positions on each instrument, the associated counterparties, and the relationships among all instruments and all counterparties.
Business Problems
May be inaccurately measuring risk exposure to all counterparties based on holdings and positions on financial instruments for end-of-day reports.
Over- or under-estimating capital reserve requirements based on inaccurate data.
Struggling to obtain an accurate picture of all holdings and positions on financial instruments and associated counterparties.
Difficulty accurately assessing and adjusting risk levels.
Root Cause
The root cause of these business problems is data. Customer, counterparty and financial instrument data is created and updated by thousands of individuals in different lines of business and departments and stored separately in different formats in multiple systems across the institution. The inability to access reconciled and related data within domains and across domains makes it difficult to get a clear picture of complex financial instrument and counterparty hierarchies, and the even more complex relationships between them.
A recent survey of Chief Risk Officers (CROs) at financial institutions revealed that many are inhibited in their ability to fully grasp their risk exposure due to outdated or poorly integrated IT systems.
Business Impact of the Problem
Unfortunately, risk managers spend excessive time searching for and manually reconciling data rather than focusing on aligning their organization to comply with government and industry regulations. Moreover, because of inaccurate and disorganized data, they struggle to obtain an accurate view of all the firm's holdings and positions on financial instruments. Since risk managers may not be 100 percent confident in the risk exposure measures included in end-of-day reports, they are more likely to over- or under-estimate capital reserve requirements.
IT Problems
Lack the ability to flexibly scale and quickly adapt to changing business needs.
Lack the time, resources and budget to ensure all data is reconciled and related within and across data domains in all systems across the institution.
Impeded by a non-scalable point-to-point integration environment and inflexible data stores.
Solution
Improve the ability to accurately assess and adjust risk levels. Overcome the challenges resulting from disparate data silos while maximizing investments in existing applications by implementing the proven and flexible MDM. Leverage reconciled and related financial reference data within and across domains that provides a clear view of:
Customer data
Hierarchies and relationships among all financial instruments
Hierarchies and relationships among all counterparties
Hierarchies and relationships between all financial instruments and counterparties
Current holdings and positions on each financial instrument
Results
Empower risk managers to leverage reconciled and related financial reference data within and across domains in real time to:
Gain accurate views of current holdings and positions on financial instruments and associated counterparties
Accurately measure risk exposure to all counterparties based on holdings and positions on financial instruments for end-of-day reports
Optimizing Capital Reserves: Improve capital efficiency and regulatory compliance by accurately measuring capital reserve requirements. Empower risk managers with desktop access to reconciled and related counterparty and financial instrument data in real-time and enable them to optimize capital reserves.
Business Problems
May be inaccurately measuring risk exposure based on holdings and positions on financial instruments for end-of-day reports
Struggling to obtain an accurate picture of all holdings and positions on financial instruments and associated counterparties
Difficulty assessing and adjusting risk levels and optimizing capital reserves
Root Cause
The root cause of these business problems is data. Customer, counterparty and financial instrument data is created and updated by thousands of individuals in different lines of business and departments and stored separately in different formats in multiple systems across the institution. The inability to access reconciled and related data within domains and across domains makes it difficult to get a clear picture of complex financial instrument and counterparty hierarchies, and the even more complex relationships between them.
Business Impact of the Problem
Unfortunately, risk managers spend excessive time searching for and manually reconciling data. They struggle to obtain an accurate view of all the firm's holdings and positions on financial instruments. Since these managers may not be 100 percent confident in the risk exposure measures included in end-of-day reports, they are more likely to over- or under-estimate capital reserve requirements.
IT Problems
Lack the ability to flexibly scale and quickly adapt to changing business needs
Lack the time, resources and budget to ensure all data is reconciled and related within and across data domains in all systems across the institution
Impeded by a non-scalable point-to-point integration environment and inflexible data stores
Solution
Improve your ability to accurately calculate capital reserves. Overcome the challenges of storing data in disparate data silos while maximizing investments in existing applications by implementing the proven and flexible MDM. Leverage reconciled and related financial reference data within and across domains that provides a clear view of:
Customer data
Hierarchies and relationships among all financial instruments
Hierarchies and relationships among all counterparties
Hierarchies and relationships between all financial instruments and counterparties
Current holdings and positions on each financial instrument
Results
Empower risk managers to leverage reconciled and related financial reference data within and across domains in real time to:
Gain accurate views of current holdings and positions on financial instruments and associated counterparties
Accurately measure risk exposure to all counterparties based on holdings and positions on financial instruments for end-of-day reports
Optimize capital reserves
Managerial Issues
Quality data is important to any organization. Businesses, agencies and other kinds of organizational structures rely on data to be up-to-date, accurate and relevant to the mission of the organization. For a financial services firm, accuracy of the data is sacrosanct.
Organizations that do not maintain strong quality in their data transactions typically learn that many problems can arise due to poor maintenance of data quality.
Issues such as -
Bad decision making
Failing to meet compliance directives
Lack of uniform decision making
Error-prone records
Costly errors
Lost opportunities, especially for marketing cross-sell and up-sell
As a solution to poor-quality data, organizations should strive to develop good database designs and data models, with strong security methods built into them and into network resources. While these are great first steps, they are not enough to ensure good data quality.
One of the best ways organizations can promote good data quality is to carefully plan integration of the information systems right from the beginning. As a part of this development, decision makers should consider including all user levels to be involved in the process.
The benefit is that it allows both management and system developers to get a full picture of the organization's business requirements and needs. The more complete the development process is, the better off data quality will be over the long term. In addition, development should also account for potential expansion and future business growth.
Moreover, if managers treat Basel II compliance (the regulation that requires banks to keep a three- to seven-year history of data) as just a compliance project, the company will not reap any business benefits later. Managers therefore need a long-term vision on initiatives like these.
Faster access to more comprehensive data sets allows for better trend analysis and forecasting decisions, and strengthens budgeting, reporting, and accounting processes. For example, because a financial services firm generally has such a variety of businesses, it carries a lot of risk - some easily visible, some not. So managers need to create an environment in which employees understand the importance of data quality.
As the organization's members begin the development of an information system(s), these are some of the points that should be considered in the development process:
Accuracy of taxonomy
Precisely defined records
Precision in how data is recorded and the methodology used
Precise documentation, including updates, changes, additions or deletions
A well-defined method of transmitting data
After the system is established, the organization should perform data quality audits on a regular basis. In addition, data cleansing should be conducted periodically to allow for better data consistency. Engaging in these activities will increase data integrity and overall quality.
Poor data quality, which includes bad or incorrect data, will result in errors. An information system database plagued with inaccuracies carries a much higher probability of poor business decisions being made. Managers and employees who base their decisions on error-laden databases will find themselves acting on wrong or inaccurate data.
Ultimately, poor decisions can result in major financial loss for the organization, including but not limited to a recall of products. Striving to create good data quality is an important goal for any organization to work towards. Poor quality can result in problems for the organization, stakeholders, consumers, and the general public. It is in everyone's best interest that managers pay special attention to data quality.
Conclusion
The recent emphasis on regulatory compliance, SOA, and mergers and acquisitions has made creating and maintaining accurate and complete master data a business imperative. Financial services firms both large and small must develop data-maintenance and governance processes and procedures to obtain and maintain accurate master data.
While it's easy to think of master-data management as a technological issue, a purely technological solution without corresponding changes to business processes and controls will likely fail to produce satisfactory results.
For companies undertaking Master Data Management, a hallmark of successful implementation is reliance on, and integration of, data governance throughout the initiative. Moreover, developing strong data governance will also strengthen the ability to manage all enterprise information activities.