Experience and customer feedback tell us that working with mainframe data sets on Hadoop and Apache Spark isn't easy. Yet the wealth of customer and transactional data stored on the mainframe makes it crucial to successful big data analytics and machine learning initiatives.
That's why it's time to liberate your mainframe data and drive better business insights.
First, there's the challenge of simply interpreting the mainframe data itself:
Next, mainframes are renowned for their security, giving rise to legitimate concerns about Hadoop data security:
Finally, there are governance and compliance requirements to satisfy:
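To see why the first challenge — interpreting the mainframe data itself — is harder than it sounds, consider that mainframe files are typically EBCDIC-encoded and often contain COBOL packed-decimal (COMP-3) fields that text-oriented Hadoop tools can't read without a copybook describing the layout. The sketch below uses a hypothetical 14-byte record layout purely for illustration; real layouts come from COBOL copybooks.

```python
# Minimal sketch: decoding a hypothetical fixed-width mainframe record.
# Mainframe data is typically EBCDIC-encoded, and numeric fields are often
# stored as COMP-3 (packed decimal): two digits per byte, with the sign
# carried in the low nibble of the final byte.

def unpack_comp3(raw: bytes, scale: int = 2) -> float:
    """Decode a COMP-3 (packed decimal) field into a Python number."""
    digits = ""
    for b in raw[:-1]:
        digits += f"{b >> 4}{b & 0x0F}"
    last = raw[-1]
    digits += str(last >> 4)                    # final digit
    sign = -1 if (last & 0x0F) == 0x0D else 1   # 0xD nibble = negative
    return sign * int(digits) / (10 ** scale)

# Hypothetical 14-byte record: 10-byte EBCDIC name + 4-byte COMP-3 amount.
record = "JONES".ljust(10).encode("cp037") + bytes([0x00, 0x12, 0x34, 0x5C])

name = record[:10].decode("cp037").strip()    # EBCDIC -> str (code page 037)
amount = unpack_comp3(record[10:], scale=2)   # 0x0012345C -> 123.45

print(name, amount)
```

None of this decoding happens for free in Hadoop or Spark — which is exactly the gap a mainframe-aware integration tool has to fill.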
There's only one tool that can handle these problems with ease, and it comes from Syncsort – an industry leader in mainframe software for over 45 years, and now a well-established leader in the Hadoop ecosystem. Syncsort DMX-h bridges the gap between mainframe and Hadoop, providing:
There's no need to hire or train new developers with specialised skills in COBOL, MapReduce or Spark. The DMX-h graphical development environment allows you to easily cleanse, blend and transform mainframe data with other legacy and Big Data sources for better business insights – no coding required. You can even ingest hundreds of mainframe DB2 database tables into Hadoop at one time with the DMX Data Funnel capability.
DMX-h already offers high-performance Change Data Capture (CDC) capability for Hadoop, but some customers would like to move only changed mainframe records across the network to Hadoop – rather than the whole data set after each update. That's why Syncsort will soon be adding the ability to do CDC directly on the mainframe – significantly reducing the volume of data that needs to be transferred across the network to Hadoop.
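The core idea behind CDC can be illustrated with a minimal sketch: diff the current snapshot of a table against the previous one, keyed by record ID, and ship only the deltas. This is a generic illustration of the concept with hypothetical records and keys — not DMX-h's actual implementation, which captures changes at the source rather than by comparing snapshots.

```python
# Minimal sketch of the change-data-capture idea: instead of shipping the
# full data set after every update, identify the changed records and
# transfer only those. Record layout and keys here are hypothetical.

def capture_changes(previous: dict, current: dict) -> dict:
    """Diff two snapshots keyed by record ID; return only the deltas."""
    inserts = {k: v for k, v in current.items() if k not in previous}
    deletes = [k for k in previous if k not in current]
    updates = {k: v for k, v in current.items()
               if k in previous and previous[k] != v}
    return {"insert": inserts, "update": updates, "delete": deletes}

before = {1: ("JONES", 100.00), 2: ("SMITH", 250.50), 3: ("LEE", 75.25)}
after  = {1: ("JONES", 100.00), 2: ("SMITH", 300.00), 4: ("PATEL", 10.00)}

changes = capture_changes(before, after)
print(changes)
# Only the deltas cross the network instead of the whole data set -- and
# the savings grow with table size, since unchanged records never move.
```

Doing this capture on the mainframe itself, as the text describes, means the unchanged records never leave the source system at all.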
Learn more about Syncsort DMX-h and our Mainframe Access and Integration for Hadoop Solution.