Feature Article: May 2014
By Frank Huerta
The exponential growth of data, thanks in part to the Internet of Things, means that deriving value from Big Data is no easy task. Combine this with the tendency for organizations to store large data sets in multiple locations around the world, and effective data integration becomes a significant challenge.
Content developers face integration challenges frequently. A good example is online video providers that manage large subscription databases. For companies that stream video, user satisfaction is a top-of-mind business priority. Viewers expect content to be easily accessible at all times and from multiple devices, and all of these factors must be managed effectively on the back end by developers. For example, data sets unique to individual users, such as profile information, device preferences, recent history and session data, must all be stored, managed and delivered across multiple platforms and locations. These intricate, and sometimes contradictory, requirements are difficult to manage without proper integration. Unfortunately, disparate data sets often translate into a poor user experience, which can lead to angry customers and lost revenue.
This is only one example of the many moving parts of data integration that must be addressed to manage data effectively across regions. Additionally, Personally Identifiable Information (PII), which includes sensitive data such as credit card numbers, names and addresses, is part of the subscription management puzzle and often carries additional regulatory requirements, making it particularly challenging to handle.
Data has gotten bigger, but the world has gotten smaller
Today, most enterprises have some global component to their business transactions, often with employees, partners and customers around the world. A few decades ago, all of an organization’s data was typically stored in one location, making it easier to access and control. Given the nature of business today, with remote users and increasing regulatory controls over data location, keeping data on a single, centralized server is antiquated and no longer meets business needs.
As companies continue to expand into new markets, many governments are adopting stricter regulations on data privacy and security, particularly for data originating in-country. Regulations vary by country, but many specifically require that PII remain in its country of origin. To stay in compliance with such regulations, businesses must implement and maintain corresponding policies. Unfortunately, many companies are forced to choose between two less-than-ideal options: either store data where it is most convenient and risk non-compliance, or establish separate data stores by region.
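The in-country PII requirement described above can be thought of as a simple residency policy: any record containing PII must be stored in its country of origin. A minimal sketch of such a policy check, where the field classifications, function name and region names are illustrative assumptions rather than any vendor's actual interface:

```python
# Hypothetical residency-policy check: records containing PII must be
# stored in their country of origin; everything else may use a default region.

PII_FIELDS = {"name", "address", "credit_card"}  # assumed classification

def storage_region(record: dict, origin: str, default_region: str = "us-east") -> str:
    """Return the region where this record must be stored."""
    contains_pii = any(field in PII_FIELDS for field in record)
    return origin if contains_pii else default_region

# A subscriber profile captured in Germany must stay in Germany,
# while anonymized viewing history may live in the default region.
profile = {"name": "A. Jones", "credit_card": "4111-xxxx"}
history = {"watch_history": ["ep1", "ep2"]}
```

In practice such checks would be enforced by the data layer itself rather than application code, but the policy logic reduces to the same classification-plus-origin decision.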
Since storing data outside of compliance can carry serious legal and regulatory consequences, and keeping data in separate regions requires constant synchronization that can prevent real-time access, a new approach to data management is needed.
Luckily, new technologies for database management are available. One approach gaining popularity is an integrated, policy-based data management system that addresses the challenges outlined above. It synchronizes data automatically based on location policies and provides a 360-degree view of the data at all times. This approach takes advantage of a “scale-out” architecture in which capacity is increased by adding identical nodes, rather than installing a whole new data center. The ability to scale within a single location, or across regions, practically eliminates the manual labor typically associated with such expansion and provides a more streamlined, automated process. A data integration solution that preserves existing infrastructure while addressing location compliance policy can deliver significant business benefits, such as reduced costs and saved time.
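The scale-out, policy-based placement idea can be sketched in a few lines: a fabric of identical regional nodes, where region-bound fields of a record are stored only on the node in the record's country of origin, and the rest is stored on a default node. All class, method and field names here are hypothetical illustrations, not TransLattice's actual API:

```python
# Illustrative sketch of policy-based placement in a scale-out node fabric.
# The split between "region-bound" and shared fields is an assumption.
from typing import Dict

REGION_BOUND = {"credit_card", "address"}  # fields that must stay in-region

class NodeFabric:
    def __init__(self) -> None:
        self.nodes: Dict[str, dict] = {}  # region -> simple key/value store

    def add_node(self, region: str) -> None:
        """Scale out by adding an identical node for a region."""
        self.nodes.setdefault(region, {})

    def store(self, key: str, record: dict, origin: str, home: str = "global") -> None:
        """Keep region-bound fields on the origin node; store the rest centrally."""
        local = {f: v for f, v in record.items() if f in REGION_BOUND}
        shared = {f: v for f, v in record.items() if f not in REGION_BOUND}
        self.add_node(origin)
        self.add_node(home)
        self.nodes[origin][key] = local
        self.nodes[home][key] = shared

    def fetch(self, key: str, origin: str, home: str = "global") -> dict:
        """Reassemble the full record for a 360-degree view."""
        return {**self.nodes[home].get(key, {}), **self.nodes[origin].get(key, {})}
```

For example, storing a German subscriber's profile keeps the credit card number on the "de" node while the rest of the profile is stored centrally, yet a fetch still returns the complete record. A real system would add replication, consensus and policy enforcement on top of this routing decision.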
Nodes can run alongside existing database systems and may also be deployed in remote locations so that PII data remains in its country of origin.

The ability to keep existing infrastructure and add data management nodes lets companies run a data integration solution alongside current databases. Deploying data this way means organizations can identify which information must be kept region-specific. When a transaction completes, the majority of the data is stored as usual, while the region-specific portions are stored only on the node in the appropriate region. This is highly valuable for a company expanding into a country with new data location regulations: rather than establishing an entirely new data store, a node can be strategically placed to match fluctuating business needs. Together, the individual nodes form a geographically distributed fabric that provides real-time data visibility across regions.

As organizations look for ways to leverage the economics of operating globally, it is clear that conventional database solutions do not provide the flexibility needed. Maintaining strong data control while keeping data highly available to employees, partners and users is no easy feat. Content developers and IT administrators require new solutions that grant all parties access to the most up-to-date data, when and where it is needed. This is the future of data integration. New approaches, like the one described above, enable businesses to deliver content that is consistent and readily accessible, while maintaining a reliable and cost-effective infrastructure.

About the Author:
Frank Huerta is CEO and co-founder of TransLattice, where he is responsible for the vision and strategic direction of the company. Prior to TransLattice, he was co-founder and CEO of Recourse Technologies, which was acquired by Symantec Corporation; Mr. Huerta then served as a vice president at Symantec. Previously, he was director of business development for Exodus Communications, where he focused on mergers and acquisitions. Mr. Huerta has also held positions at VeriFone, Seagate Software and Hughes Aircraft.