Data Gravity is a concept coined by Dave McCrory to describe the tendency of data to attract more data, applications, and services.  As the name suggests, the principle has many parallels to Newton's law of universal gravitation.

The basic premise is that a single piece of data isn't meaningful on its own, but with more data (and the metadata that describes it) additional context, and therefore more meaning, can be derived.  When all of that data is bundled with applications and services, it becomes a considerable source of power, as evidenced by today's trend toward data and analytics.

For example, let's take this piece of data: 0.  Although we know what zero means conceptually, we have no way to determine how we should feel about it.  Without any context, just knowing the number 0 is essentially worthless.  If, for instance, you also knew the additional data of "inventory of toy Elmos" + [insert where you live], you'd know that you need to place an online order, and ASAP.
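
To make that concrete, here is a minimal sketch in Python; the value, field names, and decision rule are all hypothetical illustrations, not anything from the post.  It shows how the same raw value becomes actionable once it's bundled with context:

```python
# On its own, this number tells us nothing.
raw_value = 0

# Bundled with context (what it measures and where), the same number
# becomes a signal we can act on.
data_point = {
    "value": 0,
    "metric": "inventory of toy Elmos",
    "location": "your local store",  # hypothetical placeholder
}

def should_order_online(point):
    """Zero local inventory of the thing we want means: order online."""
    return point["metric"].startswith("inventory") and point["value"] == 0

if should_order_online(data_point):
    print("Local inventory is 0 -- place an online order ASAP.")
```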

Data for larger institutions works the same way, except that the concept of "friction" plays a larger role.  Data Friction refers to the difficulty of moving data once it's stored in one location, and that friction grows stronger the more data is stored (similar to inertia).  Once your data is stored somewhere, clustered with the data points that make it meaningful, it's unlikely you'd move it to another location.
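
A rough back-of-the-envelope sketch shows why that friction grows with volume; the 1 Gbps sustained link speed below is an assumption chosen for illustration, not a figure from the post:

```python
# How long does it take just to copy data out of its current location?
LINK_GBPS = 1  # assumed sustained throughput, for illustration only

def transfer_days(terabytes, link_gbps=LINK_GBPS):
    """Days needed to move `terabytes` of data over a `link_gbps` link."""
    bits = terabytes * 8e12              # 1 TB = 8 * 10^12 bits
    seconds = bits / (link_gbps * 1e9)   # 1 Gbps = 10^9 bits per second
    return seconds / 86_400              # seconds -> days

for size_tb in (1, 100, 1_000):          # 1 TB, 100 TB, 1 PB
    print(f"{size_tb:>5} TB ~ {transfer_days(size_tb):.1f} days at {LINK_GBPS} Gbps")
```

At that assumed rate, a petabyte takes roughly three months just to copy, before accounting for cost, validation, or downtime, which is exactly why data tends to stay where it lands.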

Dave McCrory does an excellent job connecting this theory of Data Gravity to larger scientific concepts, so I highly recommend reading the links in the "learn more" section.

Want to learn more?  Check out these sources:

Welcome to Terminology Tuesday, our weekly blog series.  What would you like to hear defined?  Let us know in the comments or, as always, reach out to @vmwarensx.