IIUG Conference 2018

Looking forward to meeting in Arlington Virginia just outside of Washington DC next week!

Tuesday, October 23, 2018

IIUG World 2018

https://lnkd.in/d6siP7Y I will be presenting two sessions at the 2018 IIUG World Conference in Arlington, Virginia, outside Washington, DC, next week. The first session is "Hello! Your data is calling!", where I will present the new "Smart Triggers" feature of Informix, which lets applications passively receive notifications when data in a registered table is modified. The second session is "Uninterruptable transactions with Informix!", in which I will demonstrate the Informix Transaction Survival feature, which allows active transactions to survive the crash of the primary server in a high availability or remote secondary cluster environment.

Tuesday, April 17, 2018

Looks like Oracle is worried about Informix again!

OMG, it looks like Oracle is beginning to worry about Informix! Why would I say that? Isn't Oracle "The world's most popular database!"? Why should they worry?

I dunno, but in February Oracle announced v18c (actually v12.2 but read on). This latest release of Oracle implements tons of features that have been in Informix for up to 28 years! The need to compete with Informix on features again seems to have surfaced!

Not to tout Oracle, but to point out how forward thinking the Informix development team is, witness:

  • Annual major releases with quarterly updates. To celebrate they have renamed v12.2 to be v18c after the year of its release. One assumes that the Q1 release next year will be v19c.
  • Ability to attach a table to another table as a partition - was that Informix 7.31?
  • All identifiers have been increased from 30 bytes to 128 bytes - Informix v7.30 circa 1998
  • Multitenancy
  • JSON support including dot notation in queries
    • Functions for converting table data to JSON
    • JSON operators in SQL queries
    • New API to allow JSON aware languages to query Oracle JSON documents
    • Ability to update fields within JSON documents
  • RAC based Database Sharding - well sort of. A shared RAC database can have its data segregated so that nodes only operate on a subset of the data. But it is still a monolithic store.
  • NonRAC based Database Sharding - well sort of. Applications must be shard aware. Inserts, updates, and deletes are directed to the appropriate shard in the API layer, not within the database server shard cluster.
  • "Connection Manager" enhanced to manage load balancing between multiple servers. Originally this was closer to Informix Connection Multiplexer feature.
  • Application Continuity - This is similar to Informix's Transaction Survival allowing transactions to complete when a server fails.
  • Session private temporary tables - Informix v4.01, the initial release of Informix Online, circa 1990?
  • New Oracle autonomic features to allow for unmanaged cloud databases.
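As a contrast with the app-side sharding bullets above: when applications must be shard aware, routing typically reduces to hashing a shard key in the client layer to pick a server. A minimal sketch of that idea (all names are hypothetical, not any vendor's API):

```python
# App-side shard routing: the application, not the database server,
# decides which shard receives each row. All names are hypothetical.

SHARDS = ["shard0.example.com", "shard1.example.com", "shard2.example.com"]

def route(shard_key: str) -> str:
    """Pick a shard by hashing the key; every client must agree on this."""
    # A stable hash so all application instances route identically
    # (a toy hash; real code would use something like crc32).
    h = sum(shard_key.encode())
    return SHARDS[h % len(SHARDS)]

# Inserts, updates, and deletes for one customer always land on one shard.
assert route("customer-42") == route("customer-42")
```

The point of the sketch is the weakness the bullet calls out: the routing logic lives in every application, so all clients must be kept in sync, rather than the database cluster routing internally.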

Monday, March 5, 2018

What are reasons to migrate away from Informix?

OK, I haven't lost my mind. I have been active on a social media site called Quora where people ask questions on just about any topic. Some are technical, some are social, some are just annoying. Recently someone asked the question above. Eric Vercelletto posted one answer and I posted another. Here is an expanded version of my response for your amusement.

The single most common reason to migrate away from Informix is FUD. Ever since the IBM acquisition in 2001, other RDBMS vendors have been spreading the FUD that Informix is dead legacy technology, and that has driven many shops away from an RDBMS that had successfully supported their organizations for years. Unfortunately, IBM was one of the worst offenders. IBM sales people, seeing dollar signs if they sold new DB2 licenses instead of an Informix license renewal at 20% of the price, told customers the same thing: "Don't stick with Informix, it is going away!" That's the main "reason".

What are reasons to NOT migrate away from Informix? Well, the truth is that Informix is not a dead legacy technology. In the 16 years since IBM bought the database from Informix Corp., IBM has made more improvements and innovations to the product than Informix itself had in the prior 18 years since the company started. Today Informix is still the best OLTP engine on the market.

The Informix Warehouse Accelerator feature handles complex data warehouse queries against massive data marts several hundred times faster than any RDBMS, even faster than its sister product, DB2 BLU Accelerator. Informix is the only RDBMS that can not only handle semi-structured JSON/BSON data natively but can also seamlessly integrate MongoDB-style collections with relational tables. It can serve as a plug-in replacement for MongoDB, extending that API with access to relational tables (as JSON results), generic SQL statements, and working, reliable ACID transactions that include relational and JSON data today (MongoDB promises to have its brand-new ACID transaction feature ready next year).

Informix timeseries support is as good as any dedicated timeseries database, with the added benefit that Informix's Virtual Table Interface allows one to present timeseries data to analytical tools that do not understand timeseries as if it were a flat relational table. Informix was also the first RDBMS to fully support scale-out sharding, creating huge distributed databases spread across dozens or hundreds of physical servers built over heterogeneous technologies from different vendors.
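The "relational tables as JSON results" idea above is easy to picture: each row becomes a document whose keys are the column names. A minimal sketch of that mapping (the table and column names are made up for illustration, and this is not Informix's implementation):

```python
import json

def rows_to_json(columns, rows):
    """Render relational rows as JSON documents, one document per row."""
    return [dict(zip(columns, row)) for row in rows]

# Hypothetical result set from "SELECT cust_id, name FROM customer"
docs = rows_to_json(["cust_id", "name"], [(101, "Acme"), (102, "Globex")])
print(json.dumps(docs[0]))  # {"cust_id": 101, "name": "Acme"}
```

A MongoDB-style client asking for the `customer` collection would simply receive documents shaped like these, never knowing the data lives in a relational table.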

On to other topics

Since taking over development and marketing of Informix and a number of other IBM software products last year, HCL has been concerned mostly with getting its ducks in a row, so little new or exciting has come out of HCL's Informix development group. Version 12.10.xC9, the first full release developed completely under HCL's auspices, did introduce Smart Triggers, which is a good thing.

That said, version 12.10.xC10 actually included several new features and improvements:

  • added compression of blobspace BLOB data, 
  • backup to cloud storage services, 
  • the ability to swap primary and mirror chunks which can allow us to upgrade a server's storage to faster devices with no downtime, 
  • new sysmaster data and onstat options to help identify unused and little used indexes, 
  • the ability to retain data distributions and other statistics when truncating a table (useful for staging tables that are truncated and reloaded periodically), 
  • the ability to reconnect to a smart trigger session without missing data when an application exits and restarts, 
  • expanded audit details about sessions and users, 
  • added compression of string fields in timeseries subtypes, 
  • expanded geodetic data support for additional coordinate systems (x,y coordinates; GPS; industry specific coordinates).

There are two updates to the IBM online Knowledge Center pages about Smart Triggers:

Generic information about how to register for Smart Trigger Push Data:


JDBC API for Smart Triggers:


Other than this, the online Knowledge Center pages have also lagged, and there has not been a new set of Informix PDF documentation since 12.10.xC8. I put the onus for the documentation SNAFU firmly on the shoulders of IBM, though, not HCL: after transferring responsibility for the software to HCL, IBM disbanded the Informix documentation team without informing HCL that they might want to take that effort over as well. I think this issue is in the process of being resolved.

Big Announcement

Over this past weekend, HCL announced that Informix instances are now available on AWS, the Amazon cloud. There are dozens of different configurations, costing from pennies per hour to six figures per annum, to meet the needs of organizations from startups to large enterprises. If you are considering moving your databases to the cloud, there are now alternatives to the IBM cloud for Informix users. HCL promises that other major cloud providers will become available soon.

Monday, October 30, 2017

New features in Informix 12.10.xC9

I am stoked! HCL has released the first update to Informix Dynamic Server developed under their auspices. There are two significant new features in the .xC9 release, one is an enhancement to how time series and spatial data work together that adds significant performance and utility to that feature and the other is a completely new feature that users have been asking for for years. I will cover them in order:

Track Location and Time Together

The STS_SubtrackCreate() function creates the "subtrack" table over which the spatiotemporal index is created. That done, the STS_SubtrackBuild() function populates the subtrack table and builds the initial index contents. Spatiotemporal indexes are relatively static; however, you can configure the timeseries so that it automatically updates the index when data elements are added to the timeseries record.

The biggest change in the .xC9 release is improved time granularity for spatiotemporal data, making it easier to search and enabling new queries that answer: "When was an object in a specified area?", "What objects were in this area at this time?", and "At what time were there objects in this area?"

Applications to Receive Asynchronous Notice of Data Changes 

Client applications can now create Smart Triggers that register them to receive notification when a data set changes. The API uses SELECT statements and WHERE clauses to identify which specific data each application is interested in, and an application can register to receive push data from multiple source tables. Once clients are registered, the server pushes new and modified data to those interested in the matching rows.

Because the client applications do not have to poll the server looking for new data, those apps achieve greater scaling and responsiveness. At the same time, the database server's parallel architecture can feed the data to all clients by asynchronously reading logical log file changes. This design lets client applications scale linearly without adding significant overhead to the database server. Since the changes are scraped from the logical logs asynchronously to the session threads actually modifying the data, using the Enterprise Replication log scraping threads, there is no performance effect on the OLTP applications making the changes to the database.

Previously you might have emulated this behavior using insert, update, and delete triggers that called out to a C or Java library function, but the process of trapping the trigger and sending data synchronously would slow down the front-end transactions, causing potentially serious scaling and concurrency problems for applications.

Registering is fairly straightforward and is documented in the Enterprise Replication Guide. Basically, you call a registration function passing in a BSON record containing fields that define: the table you are registering for; a SELECT statement with an appropriate WHERE clause to filter the rows you are interested in; a label to distinguish data blocks from one table from those originating from another; a timeout setting; the number of elements you want to receive in each message; how many messages to allow to queue up; and the earliest transaction time you want to receive data updates from.

I am particularly stoked about this one because I think it will be key to the success of a new project I am working on for a client. Perhaps when it is all finished I will be able to get permission to talk about it.
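The registration record described above can be pictured as a BSON/JSON document, and the server-side filtering as a predicate applied to changed rows. The sketch below only simulates that flow; the field names are illustrative stand-ins, not the documented Informix names (check the Enterprise Replication Guide for the real API):

```python
# Hypothetical Smart Trigger registration record; field names are
# illustrative, not the documented Informix names.
registration = {
    "table": "orders",
    "query": "SELECT * FROM orders WHERE amount > 1000",
    "label": "big-orders",
    "timeout": 60,       # seconds to wait for data
    "maxrecs": 10,       # elements per message
    "maxpending": 100,   # messages allowed to queue
}

def matches(row):
    """Simulate the registered WHERE clause: amount > 1000."""
    return row["amount"] > 1000

def push(changed_rows):
    """Simulate the server pushing only matching changes, tagged by label."""
    return [{"label": registration["label"], "row": r}
            for r in changed_rows if matches(r)]

events = push([{"order_id": 1, "amount": 50},
               {"order_id": 2, "amount": 5000}])
# Only order 2 is pushed to the registered client.
```

The label field is what lets a client that registered against several tables tell the resulting data blocks apart.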

Going Forward

There has been a long gap since my last post. I apologize. I have not been ignoring the community, nor have I given up on Informix as some have suggested and gone off to do other things. On the contrary, the main reason for my silence is that the past year has been my busiest in a long time. If I ever entertained fears that Informix is dying, this year would have put those to bed for sure. I have seen one client upgrade their servers to the latest releases three times, and two clients implement IWA successfully. One went from Proof of Concept to production in four months, in the process saving over a million dollars versus the alternative free "open source" solution while exceeding the performance the "other" solution promised! The other implemented a new vertical product for the industry it serves that will allow its customers to perform more detailed analytics in less time with fewer resources.

I have spent several months helping another client expand their use of Informix throughout their organization. Through that effort they were able to improve the timeliness and reliability of the services they provide to their customers, and so to their customers' customers, among whom are counted many reading this post (including me). So, you are welcome.

One of my current projects, mentioned in passing above, is very different for me and has me excited because it is allowing me to do some database design. That's one of the more fun things I do. Recently performance tuning, installations, training, and feature implementation have taken up my time and I haven't had the opportunity to work on a design project in a while. Enjoying the change. 

IIUG Update

I just returned from the IIUG Board of Directors fall meeting. This year we met for the first time formally with HCL executives and development management. Some of you may have been dismayed by the Editorial in the recent IIUG Insider. I have to report that the feedback from HCL in response to Gary's concerns was overall heartening. It seems that mostly HCL didn't think it was important to keep the IIUG informed. There are new things in the queue that the Board members will be able to discuss publicly as soon as some outside hindrances to roll out are overcome. There is exciting news pending for market segments that previously could not take advantage of Informix. That's about all I can say for now, but hopefully the news will be released in time for the next Insider. Although Murphy is probably working hard to cause it to miss Gary's deadline by a day B^(

Stay tuned there is lots happening in the Informix world. And don't forget to start working on getting permission to attend the IIUG World 2018 conference in Washington DC in October 2018!

Wednesday, May 3, 2017

A new era for Informix begins now!

I returned from last week's International Informix Users Group Conference with some news. What kind of news it is I am not 100% certain. I am certain that it is BIG news. I am certain that it affects most of us in one way or another. I am not certain whether it is good news or bad news for the Informix user community or same-old same-old. I have been saying since I heard it that I am guardedly optimistic that this is good news.
For those of you who have not heard, IBM and a company called HCL have partnered to "jointly develop and market the Informix family of products". IBM has licensed the intellectual property rights for all Informix products to HCL for a period of at least 15 years, with options for renewal. As part of the deal IBM will retain ownership, but HCL will be responsible for developing the products and for tech support. Indeed, HCL will be hiring all Informix developers and support personnel who are willing to make the change. (My understanding is that so far all US Informix developers and support people have agreed to sign up, and tentative offers have been made to the Informix folks in Europe and Asia pending government paperwork in those areas.) HCL will also be free to market and sell Informix as well as to develop "derivative products". Decisions about the product life cycle and roadmap will be made by IBM and HCL together.
What does this mean for existing customers? Immediately not much. You will still be able to renew your support and purchase additional Informix licenses from IBM. You will still call the same support phone lines or use the same online support portal. The technicians responding will simply be HCL employees and likely the same people you have been dealing with all along.
This is a link to a LinkedIn post by Mattias Funke Director Core Database & Data Warehouse Offering Management & Strategy at IBM explaining the deal:  https://goo.gl/CY4nyO
HCL is a global consulting and IT software group headquartered in India with locations throughout the world. Their Product and Platform business unit, which was started in September, is based in New York and will manage the Informix products in addition to their own product lines, which concentrate on IoT, embedded systems, and cloud computing. These are all strengths of Informix, which explains HCL's interest in the partnership.
In discussions with their executives at the IIUG Conference we were told that HCL is very interested in focusing their own products on working with Informix and in making Informix THE player in the database market. 
IBM's internal politics have always prevented IBM from fully embracing Informix for the market leading database that it is and this has encouraged much of the FUD intimating that Informix is a dead or dying product. HCL has no such constraints on it. Someone said at the Conference "the gloves are off!" We may even see advertising about the benefits and features of Informix. I am hopeful. The Informix community will be waiting to see how HCL follows up on these opportunities and promises and how this changes the marketplace for Informix.

Friday, July 15, 2016

Some Very Cool Things Are Happening in the Informix World

I read two interesting things this week. One was an award that Informix won and the other is a White Paper Bloor put out. Both reference Informix as a platform for IoT or Internet of Things data storage and processing.

First, on July 14, 2016 at the annual Cisco Live conference in Las Vegas, Nevada, Cisco presented IBM with an award naming IBM Informix the "Best IoT Database" on the market today. I know this is no surprise to you, but it is the biggest third-party acknowledgement of Informix's role in a major company's product and market strategy. Most of the businesses that depend on Informix for the competitive edge it gives them over competitors using other RDBMS and non-SQL databases are not willing to talk about it. This is a coup for Informix and a great boon for the Informix user community. Here's a link to the announcement on the IIUG site:

The other item is the Bloor Report which describes IBM Informix as the perfect database for IoT installations "regardless of where (in the IoT pipeline) it needs to be deployed". Among other things, Bloor's Philip Howard wrote:

[It] is essential that any embedded database is invisible and remains that way. This is true regardless of whether you are simply collecting data and passing it on or whether you are performing some analytics on the data. In the latter case, in order to get good performance, you need, at least in conventional environments, to create indexes, materialised views and other such database constructs in order to achieve that performance. While this is feasible it is not flexible in the event that additional requirements need to be supported. Every time you add functionality within the device or gateway you will need to change the supported indexes. Worse, different workloads may mean that different indexes, materialised tables and so forth will be differently suitable for different customers. Moreover, these workloads may change over time. What this will mean is that the database will need to be tuned on an ongoing basis in order to maintain performance, which is impractical in IoT environments. For all of these reasons a traditional relational database will not be suitable for embedding at the device or gateway level, precisely because these all require exactly this sort of tuning. Fortunately, this is not the case with IBM Informix because the product includes self-healing and self-tuning autonomics that handle these embedded environment requirements automatically.

Secondly, there are some elements of database flexibility that need to be discussed with specific respect to IoT environments. Support for things like geo-spatial and time series data we will discuss later. In the context of flexibility, you must bear in mind that an IoT implementation may consist of multiple types of devices and gateways doing different things. Moreover, the sort of data you are collecting and processing may change over time. For both of these reasons a database that supports a flexible schema will be preferable and as a result of these considerations IBM Informix supports JSON (where each data object has its own schema) as first class objects within the database.
Depending on where (an IoT database) is implemented there will be rather different requirements. However, in our opinion IBM Informix is well-suited to IoT regardless of where it needs to be deployed. At the device and gateway level the product has a long-standing reputation as a “fire and forget” database that can be easily installed and maintained while, in the centre, it has the sort of capabilities and performance characteristics that suit hybrid operational/analytic environments. On top of this, native time series and geo-spatial support are requirements for many IoT use cases, so IBM Informix is well-placed in this market.

Very cool indeed! Here's a link to the full Bloor Report:

Friday, May 8, 2015

LVARCHAR not long enough?

I've got some news for you!  If you had been to the IIUG 2015 Conference last week (April 26-30 2015) you might already know this.  If not, relax, 'cause I was there.

Question: Have you ever had a requirement for a character type that can hold strings longer than the 32K limit on the LVARCHAR type?  Don't want to have to deal with the hassles of storing and retrieving a BLOB or SLOB?

Welcome to Informix version 12.10!  I'm not sure in which sub-release this was actually implemented, but it definitely exists in .xC4 and later.  There is a new extended type, longlvarchar, which can hold strings up to 2GB!  Strings under 4K are stored in-table like ordinary LVARCHAR values, but longer strings are moved, invisibly, to a SmartBLOB space.  So, you can now declare:

create table big_stuff (
       really_long_string longlvarchar
);
Very cool!  This is a short one, I know, but it's been a busy week getting back into the normal swing of days after returning from San Diego, Mother's Day is ahead, and a short trip to visit a client next week, so I'll have to fill you all in on other things later.  TTFN.

Friday, March 20, 2015

News from the IIUG

Here's a link to an announcement from the IIUG Board of Directors talking about this year's IIUG 2015 Conference celebrating the 20th Anniversary Year of the IIUG:

IIUG News March 20, 2015

Twenty years!  Wow.  Who knew, when the founding members of the IIUG (Lester Knutsen, Carlton Doe, Walt Hultgren, Malcolm Weallans, and Cathy Kipp) sat down to plan a new users' group for customers of a small database company whose products were just beginning to hit their stride, that their idea would grow into the single most influential users' group at IT giant IBM!

Are you planning to attend the IIUG 2015 Conference in San Diego on April 26-30?  If so, have you registered and booked your room yet?

If not, why not?  IIUG 2013 was at the same location in San Diego, California, and we all loved it.  The staff took very good care of us, and the food was excellent (how could it not be with Stuart Litel planning the menu? Talk about the ultimate foodie!).  The sessions were great, as were the tutorials on the last day and the hands-on labs, with IBM Informix legends to guide us and let us play with features we don't get to use in our daily grind.  And we got to speak with IBM executives and developers and learn about the future of Informix!  The IIUG 2014 Conference in Miami, Florida was just as good.

Best of all, we get to do it all again in five weeks at IIUG 2015!

If you don't come this year, not only will you miss the best source of Informix information and networking in the world, but, besides all that, you will miss my secret announcement!  I will be announcing a new product at IIUG 2015 that will give your organization a leg up on its competition! If you are not there, you may not learn about it until months later and will have missed the one and only opportunity to Beta test!  Want a hint?  OK, just one in the form of a question: Could your organization benefit from being able to get its analytics completed faster and at a FAR lower cost than anything you may have or contemplate purchasing? DUH!

See you in San Diego!

Wednesday, November 19, 2014

On Hybrid Database Development

At the recent IBM Insight conference in Las Vegas I presented a session entitled: 

"My Data is Relational But My Coders Want to Use JSON!  Help!"

Over 30 development managers and developers attended.  When asked, most said they attended because this is a common problem that they are all either dealing with already or expect to have to deal with soon.

Here, as well as I can translate a presentation to text, is what I had to say:

I assert that JSON and other semi-structured data formats present unique challenges for organizations. To take advantage of these new structures, and of the development paradigms they support and encourage, while integrating them into our existing systems, we wrestle with three alternatives:

  1. Keep these new data and the apps that utilize them independent of our legacy data and keep our legacy apps independent of this new data
  2. Perform ETL/ELT frequently to maintain all of this data in both relational and semi-structured databases.
  3. Develop applications that can access multiple data sources concurrently.
I maintain that none of these alternatives is acceptable and I want to propose a better path to future application and database development. First a digression to define terms which you can feel free to skip.

There are many classes of data:

  • Structured data
  • Unstructured data
  • Semi-structured data
  • Time stamped data composed from any data class
  • Geographically located data composed from any data class
  • Spatially located data composed from any data class
Structured Data refers to the data we have in relational databases for the most part. Despite the predictions of pundits relational databases are here to stay.  There are several well defined application segments for which relational databases are the best tool for the job.

Semi-Structured Data is data that may have a natural structure to it but where the structure may vary from one data element to another.  This can also include elements where part of the information is structured while other sub-elements are unstructured.

Currently the most popular format for semi-structured data is JSON, JavaScript Object Notation.  JSON is a key:value record, or document, format that is quickly becoming a new standard for data interchange, replacing CSV, XML, and other interchange formats.  JSON is the most common payload format for RESTful web services.  It is more space efficient than XML and easier to parse.  In addition, it is supported by a binary format, BSON, that is even faster to parse and extract individual sub-elements from.  JSON is a dynamic schema format: each record, or document in JSON terminology, contains the schema details for that document, which can differ from one document to another.  This promotes and supports RAD, or Rapid Application Development.
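The "dynamic schema" point is the key one: two documents in the same collection can carry different fields, because each document describes itself. A small illustration using Python's json module (field names invented for the example):

```python
import json

# Two documents in the same hypothetical collection, each with its own schema.
doc1 = json.loads('{"sku": "A-100", "price": 9.99}')
doc2 = json.loads('{"sku": "B-200", "price": 4.50, "discount": {"pct": 10}}')

# No ALTER TABLE needed: the second document simply carries an extra field.
for doc in (doc1, doc2):
    print(sorted(doc.keys()))
# ['price', 'sku']
# ['discount', 'price', 'sku']
```

Contrast this with a relational table, where adding the discount column would require a schema change affecting every row.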

MongoDB has become the most popular of the JSON databases, or stores.  It saves JSON documents in BSON format and produces JSON output for consuming clients.  There are several sets of interoperable development tools known as "stacks", the most popular of which is called the MEAN stack, as well as client libraries for most common languages.

MEAN stands for MongoDB, ExpressJS, AngularJS, and Node.js.

Class time over:

On to the problem.  We all have TBs of relational data.  Our experienced developers already know SQL and our schema. Some new development still fits with structured data. There is no sense in moving OLTP or structured DW data out of relational systems.  There is no financially supportable way to rewrite all of our existing systems from scratch just to take advantage of new paradigms.  Many new applications written primarily to use semi-structured data will need to present the relational data we've been collecting for the past 30+ years.

Many of us are capturing TBs of semi-structured data already. New developers that we want to hire only want to work in the NoSQL space.  Some new applications are natural fits for semi-structured data. RAD techniques require schema flexibility to succeed.  Many new and existing relational based applications could take advantage of all of that semi-structured data to expose new features without a massive redevelopment effort.

How to get relational data into MEAN stack applications and JSON data into C/C++/C# applications is the problem.  Traditionally when faced with data in multiple silos we would either use ETL techniques to copy the data from one silo to another or we would write data access layers using multiple libraries.  

ETL is time consuming and error prone.  We may have to sacrifice precision or internal relationships in the target system. Maintaining a timely, consistent, view is a major constant effort and expense.  Data duplication is itself an expense.  Fast storage is not cheap!

We can't just move everything into MongoDB either.  MongoDB, like most NoSQL stores, does not support multiple-operation transactions, inter-object relationships (references and sub-documents are not relationships), or XA transactions (so it cannot participate in transactions across multiple silos).

The solution:

The solution is Hybrid Application Development using a Hybrid Database system using both traditional and RAD development tools.  Here is what is needed from a hybrid database for this to work:
  • Access from traditional development stacks (C, C++, C#, Perl, etc.)
  • Access from MEAN and other RAD development stacks
  • Full ACID compliant transaction support for all data
  • Full relational integrity and data normalization support for all data 
  • Ability to store structured data with a predefined schema
  • Ability to store semi-structured data with a dynamic schema
  • Ability to present both types of data to both stacks in the native format of the data
    • JSON as JSON
    • Tables as tables
  • Ability to present both types of data to each stack in its native format
    • JSON as table data for traditional tools
    • Table data as JSON for RAD tools
Some NoSQL stores can handle ACID consistency, but most can only manage "eventually consistent" transactions.  Most, like MongoDB, can only guarantee consistent transactions and rollbacks for a single document, not for multiple documents in a collection nor for transactions that span multiple collections. Some NoSQL stores support SQL or an SQL-like query language; however, they do not return data as rows and columns, which is what SQL database access code expects.
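Going the other direction, the "treat fields in JSON collection documents as ordinary columns" capability discussed later amounts to flattening nested documents into dotted column names that SQL code can select. A minimal sketch of that flattening (purely illustrative, not Informix's implementation):

```python
def flatten(doc, prefix=""):
    """Flatten a nested JSON document into dotted 'column' names."""
    cols = {}
    for key, value in doc.items():
        name = prefix + "." + key if prefix else key
        if isinstance(value, dict):
            cols.update(flatten(value, name))  # recurse into sub-documents
        else:
            cols[name] = value
    return cols

row = flatten({"sku": "A-100", "discount": {"pct": 10, "code": "FALL"}})
# {'sku': 'A-100', 'discount.pct': 10, 'discount.code': 'FALL'}
```

An SQL client could then reference something like `discount.pct` as if it were an ordinary column, even though the underlying storage is a document.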

Some RDBMS systems can:
  • Store JSON (or a binary form of it)
  • Store a collection as a collection (rather than as a column type)
  • Manipulate JSON fields within a document
  • Create indexes on JSON fields within the documents in a collection
  • Support referential integrity between JSON document fields and relational columns
  • Join JSON documents to relational tables
But most cannot do it all.  I believe that these capabilities are the future of database and application development. Today there is one RDBMS product that CAN do it all:
  • Accept connections from MEAN stack and other MongoDB clients without modifying the application code
  • Accept connections from relational SQL clients
  • Support transactions on relational tables and JSON collections that span multiple tables/collections and multiple rows/documents
  • Allow joins between JSON collections and other JSON collections or relational tables
  • Enhance MEAN applications to support issuing SQL statements, returning data in JSON format to those clients
  • Permit SQL clients to treat fields in JSON collection documents as ordinary columns
  • Support JSON and BSON as first-class native data types, usable as columns in relational tables or as the type that defines a collection
  • Provide a direct RESTful interface that exposes all data as web services without middleware
  • Store JSON collections as collections
  • Permit any MongoDB-aware client to treat relational tables as collections
  • Support most MongoDB DDL and server management commands (including sharding)
  • Auto create databases and collections just as MongoDB does, on the fly
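To make the "JSON fields as ordinary columns" point concrete, here is a hedged sketch of what this looks like from the SQL side. A collection created on the fly by a MongoDB client surfaces in Informix as a table with a BSON column named `data`, whose fields can be extracted with the BSON value functions. The collection, field, and table names below are invented for illustration:

```sql
-- "customers" was auto-created by a MongoDB client; Informix stores it
-- as a table whose documents live in a BSON column named data.

-- Extract document fields as ordinary columns ...
SELECT bson_value_lvarchar(data, 'name') AS name,
       bson_value_int(data, 'cust_id')   AS cust_id
  FROM customers;

-- ... and join the JSON collection to a plain relational table.
SELECT bson_value_lvarchar(c.data, 'name'), o.order_date
  FROM customers c
  JOIN orders o
    ON o.cust_id = bson_value_int(c.data, 'cust_id');
```

Meanwhile an unmodified MongoDB client pointed at the wire listener sees `customers` as a normal collection and queries it with `find()`.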
I know I'm sounding like a salesman, but I have nothing to sell you other than an idea and my help to achieve it in your organizations, so, bear with me.  What is this magical database system?  It is the latest incarnation of one of the first RDBMS products in the marketplace:

Informix Dynamic Server v12.10.xC4 from IBM

Informix, aka IDS, has all of the required features I mention above, but it brings much more to the table:
  • Larger documents than MongoDB
  • Larger databases than MongoDB
  • Industry leading data replication technology
  • Four classes of data replication that can all work together
  • More capable, time-proven sharding technology than MongoDB
  • Data distribution and centralization capability
  • Support for Timeseries data
  • Support for Geospatial data
  • The only database system that can combine Timeseries and Geospatial data to track your IoT data through SpaceTime
  • User defined data types that perform as well as native types
  • World class OLTP transaction rates
  • Traditional and MEAN stack applications can connect and use both JSON and SQL data in their own native formats
  • Centralized databases up to 128PB
  • Distributed databases without size limits
  • Server fail over and load balancing fully configurable using SLA specifications
  • Five 9's reliability
  • Near-zero downtime
  • Upgrade server versions without downtime
  • Instantly bring up additional servers to load balance during peak periods
  • Support large numbers of concurrent transactions
  • Row level locking (MongoDB locks entire collections only)
  • Extremely tunable engine
  • Autonomic features for low maintenance overhead and to permit unmonitored operations
  • Highly embeddable
  • Optional compression of data and index keys
  • Advanced query optimizer
  • Multitenancy
  • Informix Warehouse Accelerator returns data up to 1200X faster than the base server for complex queries over huge data sets
  • Timeseries in JSON
  • GeoJSON support in addition to native Geospatial support
  • Lucene text search on JSON documents and relational tables
  • High speed data loading technology for faster loads of streaming data
  • MQTT integration to link to IoT devices

Wrap up:

The future of software development will require hybrid applications!
The future of software development will require hybrid databases!

The future is here now!
Why wait?

Monday, November 3, 2014

China is standardizing all new database applications on Informix!

Here's the "world-shaking" news I promised. IBM and China have come to an agreement whereby IBM will share the code for Informix with a Chinese software house, GBASE, which will modify its security code to conform to Chinese government standards. In turn, China will build all future database projects using Informix.

The agreement allows innovations to the Informix code made by either party to be shared by both. If anyone wanted proof that Informix is here to stay and is NEVER going away, this is it. China is the world's second-largest market and probably the fastest-growing technology market. The government of China does not want any software or hardware it does not control used for government projects or enterprises, fearing Western spying through the systems it uses. So, they are standardizing on Informix. Cool!

IBM Press release dated: October 29, 2014

  GBASE and IBM to collaborate on locally innovated database in China 

IBM (NYSE: IBM) and General Data Technology Co., Ltd, known as GBASE, announce an agreement today to create a locally innovated database product for the China marketplace. In support of the China government's national technology agenda, GBASE will develop and sell this local innovation based on IBM Informix technology. The agreement extends the reach of IBM's Informix database technology into this market by providing a localized solution to better address the rapidly evolving needs of Chinese businesses. The agreement permits GBASE to build and license their own version of the IBM Informix database, which will be offered to clients in the Chinese market as a stand-alone product solution.

The China market for database technology is estimated to be in excess of $700m according to IDC. The partnership between IBM and GBASE can fuel the growth of this market by creating a best-in-class database solution tailored to the unique requirements of the China marketplace.

"This agreement confirms IBM's innovation and commitment to growth in emerging markets in general and China specifically," states Sean Poulley, Vice President, IBM Databases and Database Management. "Our intent is to help partners and clients gain competitive edge and transform their business with innovative database technologies in China, with China, for China."

Informix is one of the world's most widely used databases, supporting clients who range from the largest multinational corporations to many regional and local small businesses. It is widely deployed in the retail, banking, manufacturing and government segments, all of which are important growth areas in China today. Informix is well known for its innovative design, which enables a single platform that powers both OLTP and OLAP workloads for real-time analytics, scales easily for cloud environments and provides continuous availability. It is renowned for extremely high levels of performance and availability, distinctive capabilities in data replication and scalability, and minimal administrative overhead. With this partnership agreement these advantages will be made more readily available to the fast-growing Chinese market.