The PostgreSQL community announced last week that an important security update will be released on April 4, 2013. This release will include a fix for a high-exposure security vulnerability, and all users are strongly urged to apply the update as soon as it is available. Knowing how disruptive urgent security updates can be to IT and developers, the PostgreSQL community issued advance warning in the hope that it would ease the impact on day-to-day operations while helping as many companies as possible adopt the update quickly.
As such, we would like to take this opportunity to remind everyone how important these security updates are to your business, and how to apply them most efficiently for vFabric Postgres.
The Cost of Missing Security Updates
Maintenance and security updates are essential to extending an application’s longevity and to keeping the confidence of the customers who use services based on that application.
When big data disasters hit, the impacts quickly move beyond the financial and affect reputation and trust. Databases are a particular area of concern. A recent article titled “Making Database Security Your No. 1 2013 Resolution” cited a Verizon study showing that only 10 percent of total security spend goes into database protection, while 92 percent of stolen data comes out of databases.
According to the seventh annual U.S. Cost of a Data Breach report from the Ponemon Institute, the average data breach cost $5.5M in 2011, or $194 per record. While $5.5M may not sound like a lot to some companies, the per-record figure adds up fast: losing one million records at $194 per record comes to $194M.
Especially in today’s world, security is top of mind for app developers, DBAs, and CIOs alike. One of the benefits VMware strives to include in every product is a set of reasonable security defaults. This generally means that, out of the box, users can expect a reasonably secure middleware application when they deploy a VMware app.
vFabric Postgres (vPostgres) is no different. There are not many security settings in vFabric Postgres, but there are a few options you can use to further harden your deployment, and of course the virtual machine you deploy it on, particularly if it is exposed to an external environment.
SSH Connection Restrictions
vFabric Postgres has two default users, postgres and root, and both can connect to the virtual machine over SSH. If you want to restrict access to the virtual machine to certain users or a group of users, here is some advice to follow:
1. To restrict SSH connections to members of the group vFabric only (the user postgres is a member of this group by default), add a line to this effect in /etc/ssh/sshd_config, as sketched below.
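The exact line is not shown in this excerpt, but the standard OpenSSH directive for group-based restriction is AllowGroups; a minimal sketch, assuming the group is named vfabric (match the exact case of the group on your virtual machine):

    # /etc/ssh/sshd_config
    # Permit SSH logins only for members of the vfabric group;
    # everyone else (including root, unless added to the group) is denied.
    AllowGroups vfabric

After editing, reload sshd for the change to take effect, and verify from a second session before closing your current one so you cannot lock yourself out.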
Modern companies and IT organizations have many applications, both internal and customer-facing. With so many applications, your users face the challenge of not only managing multiple sets of credentials but also logging in to each and every application separately. This creates a bad experience for your users.
To improve the user experience, IT created a concept called Single Sign-On (SSO). The idea was that users could sign on once, and the SSO software would automatically authenticate them to all of their applications. This not only helped the user experience, but also helped IT by cutting down on the number of ‘forgot password’ tickets opened, and it made de-authenticating users easy when they left the organization. The idea is great, but in practice it frequently stopped short at authentication.
Yet the pace of information technology often forces IT executives to make decisions faster than solid thinking allows. In today’s world, mainframe-to-cloud decisions need that solid thinking, or we risk a technology tornado. This article outlines some key lessons learned on the front line of IT decision-making.
As previously discussed, it’s possible to “modernize” mainframe legacy applications to the cloud. You can get there with little to no modification by using a “lift-and-shift” strategy. Several of my clients have taken this approach to quickly satisfy a “cloud mandate”. The results have been less than desirable:
Without the use of pooled resources, the applications do not scale well.
Timely user provisioning and access from any device is still a challenge because the apps do not provide on-demand, ubiquitous access.
In addition, utility-based pricing/costing is performed manually, with little fidelity to actual usage.
Most importantly, the applications continue to have monolithic, stove-piped architectures, which are difficult and expensive to maintain and enhance.
These “cloud” applications are more like funnel-cloud apps or tornado apps, waiting to cause IT organizations extreme havoc. Assuming you want to avoid funnel clouds and IT tornadoes, consider applying five application architecture and design principles indicative of a true cloud application.
Disabled SSL/TLS Compression. OpenSSL compression is now disabled by default to protect against the CRIME attack. The mod_ssl “SSLCompression on” configuration option has been added to allow administrators to re-enable compression. See the Vulnerability Summary for CVE-2012-4929.
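For administrators who truly need compression back (for example, for a legacy client), the directive belongs in the server’s SSL configuration; a minimal sketch, assuming Apache httpd with mod_ssl loaded (the file path is illustrative and varies by distribution):

    # conf/extra/httpd-ssl.conf (path varies by distribution)
    # Compression is off by default to mitigate CRIME (CVE-2012-4929).
    # Re-enable only if you understand and accept that risk:
    SSLCompression on

Leaving compression disabled remains the safer default for any server exposed to untrusted clients.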
In a guest post today, David Klee, a solutions architect from House of Brick Technologies, shares some of the top data disasters in recent IT history, and one way he sees to avoid them:
What good is a security camera in the dark?
It’s not any good at all.
Without light (infra-red or otherwise), a security camera does nothing to help prevent or record theft, and the same goes for “Shadow IT.” When we don’t have data in the light and under surveillance, our ability to watch over it is drastically impaired.
Chief Security Officers and CIOs know that somewhere in their organization, a well-intentioned developer or business person is moving valuable data into the shadows by putting it in the cloud. This scares the “stuff” out of security-minded executives, because 2012 was another wild year of data (in)security around the world. How secure is your data? Do you know who has access to your sensitive data, or where each and every copy of your data resides? Do you have a list of all the places corporate data lives in the cloud? If you don’t know, you are in the shadows.
RSA is in the business of stopping banks and their customers from being robbed (among other things). Their technology has protected people, businesses, and financial institutions from online fraud for almost 20 years. Their Adaptive Authentication solution is deployed at over 8,000 companies, is used by over 200 million people, and has protected over 20 billion transactions to date. Jumping on the “everything as a service” bandwagon, the Adaptive Authentication team is now embarking on a project to deliver, in effect, “Stop Bank Robbers as a Service.”
Virtualization continues to be one of the top priorities for CIOs. As the share of virtualized workloads approaches 60%, enterprises are looking at database and big data workloads as the next target. Their goal is to realize the benefits of virtualization for the plethora of relational databases sprawling across their data centers. With the increasing popularity of analytic workloads on Hadoop, virtualization presents a fast and efficient way to get started with existing infrastructure and scale the data dynamically as needed.
VMware’s vFabric Data Director 2.5 now extends the benefits of virtualization both to traditional relational databases like Oracle, SQL Server, and Postgres and to big data, multi-node solutions like Hadoop. SQL Server and Oracle represent the majority of databases in enterprises, and Hadoop is one of the fastest-growing data technologies in the enterprise.
vFabric Data Director enables the most common databases found in the enterprise to be delivered as a service with the agility of public cloud and enterprise-grade security and control.
The key new features in vFabric Data Director 2.5 are:
Support for SQL Server – Currently supported versions of SQL Server are 2008 R2 and 2012.
Support for Apache Hadoop 1.0-based distributions: Apache Hadoop 1.0, Cloudera CDH3, Greenplum HD 1.1, 1.2 and Hortonworks HDP-1. Data Director leverages VMware’s open source Project Serengeti to deliver this capability.
Streamlined Data Director Setup – Complete setup in less than an hour
One-click template creation for Oracle and SQL Server through ISO-based database and OS installation
Oracle database ingestion enhancements – Now includes Point In Time Refresh (PITR)
Data Director’s self-provisioning enables a whole new level of operational efficiency that greatly accelerates application development. With this new release, Data Director delivers these efficiencies in a heterogeneous database environment.
With multiple Tomcat instances, each runs in its own JVM, with its own configuration, and can be started or stopped independently, while still running against the same core binary. There are a variety of reasons to do this in practice. For example:
Simplify updates by separating instance-specific data, like web applications, from the core Tomcat software.
Maintain central control (and restricted permissions) on core Tomcat software, while allowing Tomcat instances to run as individual users without root permissions.
Isolate web applications to a particular Tomcat instance for protection from faults in other applications.
Permit application-specific performance monitoring (and usage billing) by having each application in its own Tomcat instance.
Configure the Java Virtual Machine specifically for the needs of the application(s) running on that Tomcat instance.
Configuring Tomcat so that a single binary runtime directory supports multiple independent instances is a simple matter of creating the correct directory hierarchies and setting a couple of environment variables (CATALINA_HOME and CATALINA_BASE); a sketch follows below. vFabric tc Server automates these tasks, but it uses the same underlying mechanism as Tomcat. Given these basic facts, it’s easy to adopt a tc Server best practice for use with Tomcat.
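As a rough illustration of that mechanism (the paths and memory setting here are hypothetical, not from the original post): CATALINA_HOME points at the shared Tomcat binaries, and each instance gets its own CATALINA_BASE directory tree.

    # One shared Tomcat install serving an independent instance (illustrative paths).
    export CATALINA_HOME=/opt/tomcat                  # shared binaries: bin/, lib/

    # Create a per-instance directory tree.
    mkdir -p /srv/tomcat-app1/{conf,logs,temp,webapps,work}
    cp "$CATALINA_HOME"/conf/server.xml /srv/tomcat-app1/conf/   # edit ports per instance
    cp "$CATALINA_HOME"/conf/web.xml    /srv/tomcat-app1/conf/

    # Start this instance against the shared binaries with its own JVM settings.
    export CATALINA_BASE=/srv/tomcat-app1
    export CATALINA_OPTS="-Xmx512m"                   # per-instance JVM tuning
    "$CATALINA_HOME"/bin/startup.sh

Each instance’s server.xml needs its own shutdown port and connector ports so the instances do not collide, and each CATALINA_BASE directory can be owned by a separate non-root user, which is what enables the restricted-permissions setup described above.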
What if you could provision a highly available, compliant database in one click? For many, this sounds impossible, particularly behind the firewall. Yet it is possible today, because database management has changed.
The change has been driven by years, perhaps decades, of unmet needs. For example, we’ve all heard these types of comments made inside our respective companies:
“Could we have a temporary copy of the database to use for a few days?”
“We can finish faster doing it ourselves with a PC under someone’s desk.”
“Didn’t we just buy a bunch of new database licenses?”
“I have to test this with production data.”
“It would be crazy to put two databases on the same server.”