IBM i (AS/400) Database Modernization

IBM DB2 Database Modernization

The IBM i platform is based on the architecture originally developed for the IBM System/38, which later evolved into the IBM AS/400 and IBM System i and is currently marketed under the name IBM i.

In the original architecture, the DB2 database was defined using DDS (Data Description Specifications), which describe physical files, logical files (similar to SQL views) and display files.

This approach is still used in many IBM i installations, but IBM encourages a move towards SQL, which has become the standard for the industry.
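To make the contrast concrete, here is a sketch (the file, record format, field and table names are all hypothetical) of a traditional DDS physical file and a roughly equivalent SQL DDL definition:

```sql
-- Hypothetical customer master file, first as DDS source (shown as comments):
--   A          R CUSTREC
--   A            CUSTNO         7P 0
--   A            CUSTNAME      40A
--   A          K CUSTNO
--
-- A roughly equivalent definition in SQL DDL, the direction IBM recommends:
CREATE TABLE CUSTOMER (
  CUSTNO   DECIMAL(7, 0) NOT NULL PRIMARY KEY,
  CUSTNAME CHAR(40)      NOT NULL
);
```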

IBM’s Dan Cruikshank has produced an overview of the “Introduction to Database Modernization” course (OD20), in which he describes how to convert from a DDS file model to an SQL DDL model.

A Video on Database Modernization


IBM video: Introduction to database modernization

IBM developerWorks : IBM’s resource for developers and IT …


Posted by Mario1 - 20/01/2017 at 1:33 pm

Categories: Database

Types of DBMS Products

DBMS Products

I recently noticed on the TechTarget website an interesting article on the different types of DBMS products, and I have re-published it below for your convenience.

Evaluating the different types of DBMS products


There is a lot of interest at the moment in the different types of DBMS and, if you are interested in this subject, you can also find good information in books such as the following:




Postgres NOSQL Extensions
SQL, NoSQL or NewSQL Databases
SQL or NoSQL Databases?

Posted by Mario1 - 23/01/2015 at 1:38 pm

Categories: Database

Postgres NoSQL Extensions

Postgres with NoSQL

Recently there has been increasing interest in NoSQL databases (e.g. Hadoop, MongoDB), which appear to be a valid alternative to SQL databases when you have to deal with large volumes of unstructured data.

An interesting addition to the Postgres open source database has been the support of the JSON data-exchange format that is used in many NoSQL databases.

I found an interesting article on the PCWorld website about Postgres and NoSQL and I have re-published it below for your convenience.

New PostgreSQL guns for NoSQL market

The first beta version of PostgreSQL 9.4, released Thursday, includes a number of new features that address the rapidly growing market for Web applications, many of which require fast storage and retrieval of large amounts of user data. Typically, users have gone to NoSQL databases, which were designed for such workloads, though the community of developers behind PostgreSQL is updating the database to meet these requirements as well.

In particular, PostgreSQL 9.4 natively supports JSON (JavaScript Object Notation), which is quickly becoming the format of choice for sharing data across different systems, often using the REST (Representational State Transfer) architectural style. The success of the MongoDB document database has been built in large part on the growing use of JSON.

PostgreSQL’s structured format for saving JSON, called JSONB, eliminates the need for restructuring a document before it is committed to the database.

This gives PostgreSQL the speed to ingest documents as quickly as MongoDB, while still maintaining compliance with ACID (atomicity, consistency, isolation, durability), a set of properties required for reliably storing data in databases. PostgreSQL also provides a full set of indexing services, functions and operators for manipulating JSON data.
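As a minimal sketch of what this looks like in practice (the table and column names here are hypothetical, assuming PostgreSQL 9.4 or later):

```sql
-- Store JSON documents in a binary JSONB column.
CREATE TABLE events (
  id  SERIAL PRIMARY KEY,
  doc JSONB NOT NULL
);

INSERT INTO events (doc)
VALUES ('{"user": "alice", "action": "login", "ip": "10.0.0.1"}');

-- The ->> operator extracts a field as text.
SELECT doc ->> 'user' AS user_name
FROM events
WHERE doc ->> 'action' = 'login';

-- A GIN index speeds up containment queries such as:
--   WHERE doc @> '{"action": "login"}'
CREATE INDEX events_doc_idx ON events USING GIN (doc);
```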

Prior versions of PostgreSQL supported JSON, but stored JSON documents in a text format, which takes longer to store and retrieve.

In addition to native JSON support, PostgreSQL also comes with a number of other new features.

It has a new API (application programming interface) for decoding data from a replication stream, paving the way for third-party software providers to build more responsive replication systems.

A new Materialized Views function, called “refresh concurrently,” allows summary reports to be updated on the fly.

Using the new Alter System Set function, administrators can now modify the PostgreSQL configuration file directly from the SQL command line.
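A quick sketch of the new command (the parameter and value chosen here are only illustrative):

```sql
-- Write a configuration setting from SQL instead of editing
-- postgresql.conf by hand (PostgreSQL 9.4 and later).
ALTER SYSTEM SET work_mem = '64MB';

-- Ask the server to reload its configuration so the change takes effect.
SELECT pg_reload_conf();
```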

Other new features include the introduction of dynamic background workers, array manipulation and table functions, and general performance improvements.

PostgreSQL is the second most-widely-used open-source database on the market, trailing MySQL. At least some users have migrated from MySQL to PostgreSQL since MySQL was acquired by Oracle in its 2010 purchase of Sun Microsystems.

Like PostgreSQL, MySQL has been retrofitted to handle NoSQL workloads.

EnterpriseDB offers a commercially backed distribution of the open-source database.



Postgres SQL Overview
IBM and EnterpriseDB Postgres Plus
IBM Interest in EnterpriseDB Postgres






Posted by Mario1 - 23/12/2014 at 1:28 pm

Categories: Database

Database Modernization in IBM i

Database Modernization

I noticed an interesting article on Database Modernization in IBM i recently published by the IBM Systems Magazine and I have re-published it below for your convenience.

Database Modernization Exposes the Value in IBM i

When IBM released the AS/400* system in 1988, a rush of new software products flooded onto the market. Vendors worked to port their applications; businesses engaged developers to enable, extend and create new applications to provide them with a competitive advantage; and the processes that defined the corporate vision were automated and embedded into the software.


It was an exciting time of new capabilities and creativity. However, applications eventually mature to the point where the newness wears off and the focus of effort is limited to occasional maintenance or repair. As new business requirements arise, the necessary changes are often very different from the original design. These changes are usually made in the most expedient way possible—not in a way that strengthens or improves the core system.

One of the value propositions of IBM i and its predecessors is the capability to protect the customer’s investment in software. The constant pressure to “do more with less” combined with the “if it isn’t broken, don’t mess with it” mindset has led to a large portfolio of applications that are difficult to maintain. Their inherent design and a backlog of requests force the business user to create workarounds to accomplish necessary functions.

The applications—and by association, the hardware—are commonly perceived as having a lesser value because of their “legacy” label. The challenge we’re now faced with is how to change the perception of our systems when we’re given the opportunity to remedy what has been neglected for so long.

Database Modernization

The overarching goal should be to quickly achieve the maximum benefits from the modernization effort while minimizing the manual effort and impact to the existing programs. Database modernization, another method for modernizing your applications, is also worth considering. Bad data with an awesome UI is still bad data.

If your tables are created with DDS, then your applications can’t take advantage of the features and performance the DB2* database now offers. With DDS tables, changes to your data model are more difficult than they should be. I/O operations take longer. In some applications, the columns in the tables are not being used according to the original design. To obtain actionable information, any usage of the data must also include the knowledge of those who built and modified the application. Any of these scenarios are sufficient justification for a modernization effort.

A modern database has descriptive table and column names, referential integrity, business rules enforced by the database engine, and the capability to be accessed and maintained with standard tools.

The SQL Data Definition Language (DDL) enables all of these characteristics on IBM i. The challenge is to change the database from DDS to DDL and minimize the impact on applications. The modernization process has been documented in numerous IBM Redbooks* publications and papers. DDL syntax can reproduce most DDS-described files exactly, including the File Level Identifier that programs use to detect whether a file layout has been changed since the program was compiled.

The GENDDL utility, provided as part of the OS, will translate DDS into DDL. This provides you with a starting point to begin your modernization. Because some DDS keywords don’t have an exact DDL replacement, it’s up to the developer to alter the generated DDL until the desired results have been achieved. The goal is to retain the File Level identifier that your programs are expecting wherever possible. Long table and column names can be added to your existing field names with no impact to your existing programs.
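The exact DDL that GENDDL emits varies by release, but a hypothetical hand-tuned result might sketch the two points above: long SQL names mapped onto the existing short field names with FOR COLUMN, and a named record format (RCDFMT) to help preserve the format level identifier that existing programs expect.

```sql
-- Hypothetical modernized table on DB2 for i.
CREATE TABLE CUSTMAST (
  CUSTOMER_NUMBER FOR COLUMN CUSTNO  DECIMAL(7, 0) NOT NULL,
  CUSTOMER_NAME   FOR COLUMN CUSTNAM CHAR(40)      NOT NULL
)
RCDFMT CUSTREC;
```

Existing RPG programs keep using the short names CUSTNO and CUSTNAM, while new SQL code can use the descriptive long names.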

The key to successfully modernizing your database is to create a roadmap that includes these seven milestones:

  1. Identify the candidates for modernization.
  2. Determine the changes to be made to the tables.
  3. Document and analyze the impact to your programs based upon the changes to be implemented.
  4. Create the new tables and programs to be modified in a separate environment and validate the File Level Identifier.
  5. Evaluate your data against the new tables in order to prepare and cleanse your data as necessary.
  6. Create and execute a testing strategy that can be automated and repeated.
  7. Build your implementation process.
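Milestone 5 in the roadmap above can be sketched as data-cleansing queries run before the conversion; the library, table and column names below are hypothetical.

```sql
-- Find rows in the existing file whose values would be rejected by the
-- new DDL table, e.g. numeric "dates" that are not plausible dates.
SELECT ORDNO, ORDDAT
FROM OLDLIB.ORDERS
WHERE ORDDAT < 19400101 OR ORDDAT > 20991231;
```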

Immediate Benefits

The steps in this approach promote a solid base on which to build a modernization strategy. Converting your DDS to DDL provides great benefits and immediate results. Because DDL tables validate the data against the data types before the write operation occurs, your table won’t allow invalid data to be written to it. In addition, because validation is no longer performed on a read operation, subsequent access to the data will be faster. After manually performing a DDS-to-DDL conversion a few times, you’ll realize that this process can, and should be, automated to dramatically reduce risk and effort required and to deliver a consistent result. A tools-based approach will let you communicate the value of these efforts more effectively and enables more success.

When your tables are defined with DDL, more modernization options are available, such as referential integrity, business rules, creating a data service layer, encoded vector indexes, materialized query tables and more. It will be important to carefully plan your next steps to determine the impact they’ll have on your applications. For example, before enabling referential integrity, a thorough impact analysis is needed to identify how your tables are related and properly assess how your programs will react to any errors generated by the database when the constraints are enforced.
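Once the tables are DDL-defined, rules like these can move out of application code and into the database itself; a hedged sketch, with hypothetical table and column names:

```sql
-- Referential integrity: an order must reference an existing customer.
ALTER TABLE ORDERS
  ADD CONSTRAINT ORDERS_CUST_FK
  FOREIGN KEY (CUSTNO) REFERENCES CUSTOMER (CUSTNO);

-- A business rule enforced by the engine rather than by RPG code.
ALTER TABLE ORDERS
  ADD CONSTRAINT ORDERS_QTY_CK
  CHECK (QUANTITY > 0);
```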

The analysis can also be used to discover business rules within the RPG code that would be better implemented by the database as additional constraints. You’ll find numerous opportunities to simplify and reduce the overall code base from this exercise alone. While this effort may seem overwhelming, tools are available to automate these activities and allow you to concentrate on transforming your application.

A Modern Strategy

You’ll discover many benefits to modernizing your database—improved performance, enhanced security, improved ability to audit and trace data, and a solid base for a successful application and UI modernization initiative. The best way to change the perception of your applications is to demonstrate how well you can execute your modernization strategy.



IBM iSeries AS/400 SQL Performance

Posted by Mario1 - 04/10/2014 at 2:13 pm

Categories: AS/400 Software, Database

An Interesting Database Design Course

An Interesting Database Design Course


I noticed an article about a database design course based on Postgres. I think it offers a good opportunity to learn more about Postgres and database design, and I have therefore copied it below for your attention.

Learn Database Design using PostgreSQL

The Ultimate Guide to mastering the world’s most advanced open-source database

PostgreSQL is arguably the most advanced and powerful open-source, enterprise-class relational database system. It is an object-relational database system and provides one of the most standards-compliant platforms for database designers. It provides complete support for reliable, ACID-compliant transactions, where ACID stands for Atomicity, Consistency, Isolation and Durability.

Its advanced underlying technology makes it extremely powerful and programmable. Support for concurrency is one of its key features. It is one of the most important technologies you can learn and will greatly affect the way you work with databases. It is the ultimate RDBMS, allowing you to create complex web apps that work flawlessly even for very large numbers of users.

PostgreSQL is the fastest growing RDBMS and, with a large and thriving community behind it, learning this amazing technology is a great asset.

Our course will teach you this complex system in the easiest of ways. It starts with a basic introduction to RDBMS concepts with an emphasis on PostgreSQL. We will also discuss the main differences between MySQL and PostgreSQL, the two most popular open-source RDBMSs. By the end of this course you will be able to:

Use PostgreSQL in your projects

Understand the important features of Postgres

Understand the Object Relational model

Master SQL which can be used across DB systems

Build actual web apps using PostgreSQL

So hop on and become a master of one of the hottest RDBMS products.

You can find the enrolment form in the original article published by

Posted by Mario1 - 31/08/2014 at 3:29 pm

Categories: Database

Introduction to Hadoop and Big Data

What is Hadoop?



Hadoop is an open source Apache project originally inspired by some papers published by Google outlining its approach to handling an avalanche of data, often referred to as Big Data.

Hadoop has since become a kind of standard for storing, processing and analyzing hundreds of terabytes, and even petabytes, of data. It is based on a free Java-based programming framework that supports the processing of large data sets in a distributed computing environment.

In today’s highly connected world, more and more data are being created every day.

Hadoop pioneered a fundamentally new way of storing and processing structured and unstructured data. Instead of relying on different proprietary database systems and large hardware, Hadoop enables distributed parallel processing of huge amounts of data across inexpensive, industry-standard servers that both store and process the data, and can scale without limits.

Relational databases process data structured in rows and columns. Hadoop, in contrast, aims to process any kind of data, such as log files, pictures, audio files, communications records and emails, regardless of its native format.


The Hadoop Database Structure

The Hadoop project includes these modules:

  • Hadoop Common: The common utilities that support the other Hadoop modules.
  • Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.
  • Hadoop YARN: A framework for job scheduling and cluster resource management.
  • Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.
A multi-node Hadoop cluster (Photo credit: Wikipedia)

In order to scale, HDFS can use multiple independent NameNodes and namespaces, a feature known as HDFS Federation.

HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks, and the blocks of a file are replicated for fault tolerance.

A basic HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients, plus a number of DataNodes that store the actual data.

The NameNode and DataNode are pieces of software written in Java designed to run on commodity machines. They typically run a GNU/Linux operating system (OS).  Any machine that supports Java can run the NameNode or the DataNode software.

With federation, the NameNodes are independent and don’t require coordination with each other. The DataNodes are used as common storage for blocks by all the NameNodes.

Each DataNode registers with all the NameNodes in the cluster. DataNodes send periodic heartbeats and block reports and handle commands from the NameNodes.

The new MapReduce architecture divides the two major functions of the JobTracker, resource management and job life-cycle management, into separate components.

The new ResourceManager manages the global assignment of compute resources to applications and the per-application ApplicationMaster manages the application scheduling and coordination.

An application is either a single job in the sense of classic MapReduce jobs or a DAG of such jobs.

The ResourceManager and per-machine NodeManager daemon, which manages the user processes on that machine, form the computation fabric.

The video below helps to clarify the structure of Hadoop.

A Video about the History of Hadoop

The video below is part of an interesting series of Hadoop tutorial videos published on YouTube for the ZeroToPro training company.



Enterprises are looking at using Hadoop to upgrade their traditional data warehouses. Compared to traditional data warehouse solutions, Hadoop can scale using commodity hardware and can be used to store both structured and unstructured data.

Hadoop is “extremely efficient” at processing large volumes of structured and unstructured data, but it does so with high latencies. It is suitable for supporting ad-hoc queries alongside a conventional data warehouse, but cannot replace relational databases, which are much more efficient for transaction-oriented applications.

However, it is clearly desirable to access data in Hadoop with SQL. Various initiatives are underway, both in open source and in various companies, to enable SQL on Hadoop. Some solutions are Hive, Impala, BigSQL and Google BigQuery.
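To give a flavor of SQL on Hadoop, here is a sketch in Hive’s SQL dialect; the table name, columns and HDFS path are hypothetical.

```sql
-- Expose raw tab-separated log files already sitting in HDFS as a table.
CREATE EXTERNAL TABLE web_logs (
  ts      STRING,
  user_id STRING,
  url     STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/logs/web';

-- Then query them with plain SQL; Hive compiles this into MapReduce jobs.
SELECT url, COUNT(*) AS hits
FROM web_logs
GROUP BY url
ORDER BY hits DESC
LIMIT 10;
```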

The ability to use SQL to analyze data stored in Hadoop will help take Hadoop into the mainstream. It will also enable business users to reuse their existing SQL knowledge to analyze the data stored in Hadoop.

If you wish to learn more about Hadoop, you can find many good books, such as the following from Amazon UK.




Hadoop and Big Data

Hadoop to complement, not eliminate, relational databases

Posted by Mario1 - 16/01/2014 at 3:29 pm

Categories: Database

SQL, NoSQL or NewSQL Databases

SQL Databases


SQL CLR internal architecture diagram (Photo credit: Wikipedia)

SQL Databases have had great success in the past for online transactional systems and SQL has become a popular standard to access data.

These databases also enforce the ACID (Atomicity, Consistency, Isolation, Durability) properties and therefore relieve the application programmer from having to implement them. Jim Gray defined these properties of a reliable transaction system in the late 1970s and developed technologies to achieve them automatically. In 1983, Andreas Reuter and Theo Härder coined the acronym ACID to describe them.
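Atomicity, the A in ACID, is the classic illustration (the table and column names here are hypothetical): either both updates of a funds transfer are committed, or neither is.

```sql
-- Either both balance changes become visible, or neither does.
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
-- If anything fails in between, ROLLBACK restores the original state.
```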

However, databases with big data are becoming more common (e.g. the database supporting Facebook), and SQL is sometimes considered too slow to handle such large volumes of data and transactions.

Scalability and performance issues with relational databases became commonplace, and some suggested using NoSQL databases and giving up the ACID properties.


NoSQL Databases

NoSQL is a catch-all name for different kinds of database architectures — key-value stores, document databases, column family databases and graph databases.

Each of them has its own relative advantages and disadvantages. However, in order to get scalability and performance, NoSQL databases give up “queryability” (i.e. they cannot use SQL) and usually ACID transactions. Some NoSQL databases, such as RavenDB, still support ACID.


NewSQL Databases

Computer science researcher Michael Stonebraker just after giving a talk at the University of California, Berkeley (Photo credit: Wikipedia)

NewSQL Databases are an alternative to NoSQL Databases proposed by Michael Stonebraker, a legendary computer scientist at MIT who specializes in database systems and is considered to be the forefather of big data. Stonebraker developed INGRES, which helped pioneer the use of relational databases, and has formed nine companies related to database technologies.

Stonebraker proposed a new type of database that offers high performance and scalability without giving up SQL and ACID transactions.

Some key points made by Stonebraker are:

  • SQL is good.
  • Traditional databases are slow not because SQL is slow, but because of their architecture and the fact that they run code that is 30 years old. He found that about 95% of the processing time is spent on overheads such as locking records and moving data into buffers.
  • NewSQL provides performance and scalability while preserving SQL and ACID transactions by using a new architecture that drastically reduces overhead.


A good example of NewSQL database is VoltDB which is an open source database developed by a company with the same name founded by Stonebraker.

Some of the performance figures of VoltDB are pretty amazing:

  • 3 million transactions per second on a “couple hundred cores”
  • 45x the performance of “a SQL vendor whose name has more than three letters and less than nine”
  • 5-6 times faster than Cassandra and same speed as Memcached on key-value operations


A Video on OldSQL vs. NoSQL vs. NewSQL for New OLTP

The video was published on June 29, 2012 by Dr. Michael Stonebraker.


NoSQL Databases Explained

10 things you should know about NoSQL databases

Big Data(bases): Making Sense of OldSQL vs NoSQL vs NewSQL




Posted by Mario1 - 27/12/2013 at 5:16 pm

Categories: Database

Free DB2 Education

Free DB2 Education

I have received an interesting email from DBTA about some free DB2 education and I have re-published it below for your convenience.



The DB2Night Show™ Season #5
New Show Schedule – FREE DB2 Education!

  • Berni Schiefer, IBM: Performance Experiences and Best Practices with DB2 BLU (LUW; 6 SEP 2013, 10am CDT)
  • Kent Collins, Sr. Database Consultant: DB2 BLU, V10.5, PureScale, & JSON (LUW; 20 SEP 2013, 10am CDT)
  • Dan Luksetich (DanL): OLAP Queries for Powerful Reporting and Analytics (z/OS; 27 SEP 2013, 10am CDT)
  • Guy Lohman, IBM Master Inventor: Intimate Details on DB2 BLU (LUW; 4 OCT 2013, 10am CDT)
  • Scott Hayes, DBI: Feelin’ BLU – Beyond the Marketing Literature (LUW; 18 OCT 2013, 10am CDT)
  • Klaas Brant, KBCE: Dynamic SQL, Pros and Cons (z/OS; 25 OCT 2013, 10am CDT)
  • Lee Goddard, DBI: 17 Laws of Building Petabyte Level DB and Big Data Systems (LUW; 1 NOV 2013, 10am CDT)

JUST ANNOUNCED – Season #4 Top Replays!
Top 25 DB2 for LUW Replays :: Top 25 DB2 for z/OS Replays

There have been over 347,000 downloads of free replays since September 2009! That’s over $16M of free DB2 education offered to the DB2 community!
No other independent software company does more to help
the DB2 community grow and prosper…

How to Tune DB2 LUW in a Minute!


Join DBI for a free webinar on 27 September 2013 at 1pm CDT and learn how to:

  • Improve Performance in Minutes!
  • Automatically track database changes!
  • View important performance trends!
  • Compare Database and SQL workload performance across timeframes!
  • Discover the most costly Users and their SQL!
  • In one mouse click, determine if a performance issue belongs to the database or not!
  • And More!

Details and Registration: All DBI Events

DBI database performance products for:   DB2 LUW  |  Oracle  |  SQL Server  |  About DBI

DBI Software is the recognized leader in database performance tuning and optimization. Our unique products enable DBAs to adopt a proactive methodology – escaping the trap of simple “reactive” tools and hardware upgrades.
For more information visit
or call toll-free 1-866-773-8789 (outside USA: +1-512-249-2324).

DB2 is a registered trademark of IBM. All other trademarks belong to their respective owners.




Another way to improve your expertise and knowledge of the IBM DB2 database is to read some books on the subject. You can easily find them at Amazon.


Some examples from Amazon UK are the following:





Posted by Mario1 - 29/08/2013 at 2:45 pm

Categories: Database

MongoDB and DB2 Integration

MongoDB and DB2 Integration



MongoDB (from “humongous”), part of the NoSQL family of database systems, is an open-source document-oriented database system developed and supported by 10gen.

Recently I read an interesting article on the ITJungle website about MongoDB and DB2 integration, and I have re-published it below for your convenience.

DB2 LUW To Get MongoDB Hooks–Will DB2/400 Be Next?

Published: June 10, 2013

by Alex Woodie

IBM and 10gen, the company behind the open-source MongoDB NoSQL database, announced a partnership last week that will lead to closer integration between MongoDB and DB2 for Linux, Unix, and Windows (LUW), as well as WebSphere middleware. The work will allow developers to build compelling Web and mobile applications on DB2 that utilize NoSQL storage and query concepts. The question for IBM i shops is whether Big Blue sees fit to add the same capabilities to the DB2 for i database that is integrated with the operating system.

MongoDB is the most popular NoSQL database, which is gaining popularity as a way to store massive (or humongous) amounts of unstructured documents and make them easily accessible to developers working with the latest HTML5 and JavaScript tools. Developers pull documents from MongoDB by writing BSON, a binary version of JSON (JavaScript Object Notation) that adds the capability to create dynamic schemas. NoSQL databases like MongoDB don’t enforce relational database constructs like SQL and tables, and are hailed for their capability to store and access rich N-dimensional documents, and for better human readability.

IBM is going to support the MongoDB query language and the BSON wire protocol with DB2 and the elastic, in-memory WebSphere eXtreme Scale data grid platform. IBM says it will work with 10gen and others to create “standards” that will allow developers to query JSON documents stored in DB2 for LUW.

This will give applications that were written to run against MongoDB the capability to run against existing DB2 for LUW stores, opening up a vast array of corporate data to the latest Web and mobile applications. IBM says developers will be able to leverage the new standards from their standard Eclipse and Worklight Studio environments (the latter being a tool for mobile app development), and that customers may begin using the new technology by the third quarter of 2013.

“Through its support of MongoDB, IBM is marrying the database world with the new world of mobile app development,” says Jerry Cuomo, chief technology officer for WebSphere and an IBM Fellow. “Now, millions of developers will be able to deliver new, engaging enterprise apps that leverage the vast data resources managed by organizations around the world.”

Mongo Good

MongoDB has been adopted by the likes of SourceForge and Craigslist to store and archive vast archives of Web documents. SAP also uses MongoDB for its Java platform as a service (PaaS) offering. Google uses a NoSQL backend (not MongoDB) for its Google App Engine (GAE) PaaS cloud offering, too, and other cloud providers have similar NoSQL data stores.

Hooking up with MongoDB is undeniably a smart move for IBM, which trails Oracle 11g and MySQL, Microsoft SQL Server, and PostgreSQL in the database race, and is only two places ahead of seventh-place MongoDB on the DB-Engines Ranking.

The question for IBM i shops is whether IBM is planning to extend the standards work it is undertaking with 10gen and MongoDB to DB2 for i, or DB2/400 as it is still colloquially known. An IBM spokesperson said the company “may consider this capability for additional platforms, but have nothing to announce at this time.”

DB2 for i shares some components with DB2 for LUW, as well as with DB2 for z/OS, the implementation for IBM’s System z mainframe. It is unclear how much additional work it would take to support the BSON line protocol and the MongoDB query language in DB2 for i.

There is definitely a precedent for expanding DB2 for i to support additional databases, and it is called MySQL. Way back in 2007, IBM worked with MySQL to support the open source database on the System i server, and to enable DB2/400 to function as one of the storage engines for MySQL. This work was stymied a bit by Oracle, which acquired MySQL via Sun Microsystems in 2010 and subsequently dropped support for running MySQL on IBM i less than a year later.

The IBM i version of MySQL lives on as DBi, a project managed by PHP backer Zend Technologies. Because so many PHP apps were written to use MySQL, keeping a version of MySQL running on the IBM i platform was deemed important enough to undertake the project. Zend’s DBi, which became available a year ago, is managed through a partnership with Percona.

The success that Zend and IBM have had with running PHP/MySQL apps on the IBM i server shows that there is demand among IBM i shops for an alternative to the traditional stack of applications written in RPG and COBOL (and Java to a lesser extent) and storing relational data in the straight-laced DB2 for i. The Web and mobile development worlds are evolving quickly, and NoSQL databases like MongoDB are important components of those worlds.

The IBM i platform could use the injection of new blood that a MongoDB hook could bring. Consider that DB2/400 and DB2 for i are not even listed among the 170-plus data stores listed on DB-Engines. (For what it’s worth, DB2 for z/OS isn’t either.)

IBM i shops today are adopting third-party middleware products that present DB2 for i data using the latest HTML5, JavaScript, and CSS technologies. There is an obvious demand for tools to build modern looking Web and mobile applications that run on IBM i and utilize the vast stores of data kept in DB2 for i. IBM has taken the first step to supporting the new generation of MongoDB apps on its DB2 for LUW database. Whether it will do the same for IBM i depends on whether IBM i shops want it, and how loudly they ask IBM for it.



AWS/400: Amazon Builds An AS/400-oid Cloud

Zend DBi Goes GA

Oracle Drops MySQL Support for IBM i

MySQL Database Getting Closer Ties to the System i



If you want to learn more about NoSQL databases and MongoDB you could consider reading some of the books presented in the article below.

Posted by Mario1 - 24/06/2013 at 2:22 pm

Categories: AS/400 Software, Database

Enterprise NoSQL Databases

NoSQL Databases



I noticed recently an interesting article about Enterprise NoSQL Databases published on the Database Trends and Applications website and I have re-published it below for your convenience.

Why Enterprise NoSQL Matters

“The good, old-fashioned relational database, a well-understood technology with a known list of providers for two decades, has faced disruption since the turn of the millennium, and the disruption is peaking now. The rise of cloud applications, big data analytics, mobile computing, sophisticated content and asset management solutions, and social media have pushed the once dependable relational database to the edge of, and sometimes past, its abilities.

Because it was originally architected to work with hardware from yesteryear, the older relational database may struggle to take optimal advantage of the dramatic price/performance improvements and innovations found in adjunct technologies like multi-core processors and storage. Add in previously inconceivable requirements for scalability, plus the still substantial margins enjoyed by long-standing enterprise database providers, and modern database buyers have found motivation to look for fresh alternatives.

In the vacuum formed between older databases and new use cases, an explosion of roughly 50 new commercial and open source databases, often referred to collectively as “NoSQL databases,” have come to market. The Enterprise Strategy Group prefers to interpret the term “NoSQL” to mean “Not Only SQL” given that many of the new databases do support Structured Query Language (SQL). But suffice it to say that pent-up demand to better address post-2000 use cases has produced a throng of new database choices. Yet therein lies another, ironic, challenge for the database buyer: too much choice. Fortunately, if you require enterprise-class features in a NoSQL database, the number of choices shrinks to a few. Read how MarkLogic stands out as a clear leader in the “Enterprise NoSQL” category.”




Posted by Mario1 - 11/06/2013 at 2:33 pm

Categories: Database
