DBcloudbin flipped over: nibdoulocbd project

For a few years now, everyone has been talking about migrating data and applications to the cloud, and it is something normal, I would say even natural. Once the security stigma has been overcome, along with the skepticism of some companies about moving their data or applications out of those bunkers called datacenters, the reality is that the benefits offered by the cloud are difficult to match in an on-premises environment. Application re-engineering may be required, however (and this is where DBcloudbin can help, as we will see later).

Cost savings are among the main benefits: savings in physical infrastructure (servers, network elements, …), savings in the management and maintenance of that same infrastructure, and savings at the software level (for example, avoiding license acquisitions) are some of the most attractive reasons for moving to the cloud. But the benefits are not exclusively monetary; our applications, data, servers, etc. will also benefit from a rich set of mechanisms and services that make them more scalable, more secure and highly available.

At Tecknolab, through our DBcloudbin solution, we help our customers take this step, migrating binary data (documents, images, …) from their databases to the main cloud providers in a simple, secure and transparent way, reducing the size of their databases and, therefore, the cost of their infrastructure, management and protection.

But what if, for example, we already have data in the cloud, in an S3 repository or similar, and we want to use it from an application running outside the cloud? The obvious and simple answer would be: make the necessary modifications in your application so that it can interact with the cloud storage systems, using the protocols and APIs that the providers themselves offer (Amazon S3, Google Cloud, Azure, …). But this kind of re-engineering of an application’s data access layer is often not easy at all; sometimes the complexity of the application itself, as with legacy applications, makes it unfeasible. In addition, this type of work usually entails costs that are difficult to justify for a project of these characteristics.
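
For reference, this is roughly what that direct integration looks like; a minimal sketch in Python using the boto3 library, where the bucket and object names are illustrative assumptions and AWS credentials are presumed to be already configured:

```python
# Minimal sketch of a direct S3 integration, the kind of custom code
# that is often too costly to retrofit into an existing application.
# Bucket and key names are illustrative.
import boto3

s3 = boto3.client("s3")

# Download a document by its object key.
response = s3.get_object(Bucket="sales-docs", Key="orders/SO-1042.pdf")
pdf_bytes = response["Body"].read()

with open("SO-1042.pdf", "wb") as f:
    f.write(pdf_bytes)
```

Multiply this by every read and write path in the application, plus authentication, error handling and retries, and the re-engineering cost becomes clear.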

Setting that option aside, then, and applying the principle of simplicity: it is very likely that the application already uses a database and, if the application already supports binary data processing, that its data model is already prepared for this kind of content. This is where the nibdoulocbd (DBcloudbin flipped over) project comes in.

The nibdoulocbd project

The nibdoulocbd project is based on a kind of “reverse engineering” of the cloud data: using DBcloudbin as the solution, we can give our application visibility of this data without a custom cloud integration, perhaps with slight modifications if the application is not prepared for handling binary data (which will always be cheaper than having to fully implement an S3 connector, for example). This process will not move the data out of the cloud (a priori that is not what we are interested in, although we could even do it through DBcloudbin if necessary), but it will make the data accessible from our application in a transparent way.

An example from the real world.

Let’s imagine that we have, in an S3 bucket, a series of sales order documents in PDF format, stored under the identifier of the order they belong to, and that we want them to be accessible as attachments to those orders from our application, which, to this day, does not handle binary data in its database. Well, after installing DBcloudbin in our system, adding small modifications for the handling of a binary field from our application, and inserting in our database the “links” to the S3 document associated with each sales order, our application will be able to access that data transparently and without increasing the size of our database.
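
To picture that last step, here is a minimal sketch of registering such links, using Python’s built-in sqlite3 module; the table, columns and link format are hypothetical illustrations, not DBcloudbin’s actual internal representation:

```python
# Hypothetical sketch: storing small "links" to S3 documents next to
# each sales order. The schema and link format are illustrative only;
# they are not DBcloudbin's internal representation.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE sales_order (
        order_id      TEXT PRIMARY KEY,
        document_link TEXT   -- a few bytes instead of a whole PDF
    )""")
con.executemany(
    "INSERT INTO sales_order (order_id, document_link) VALUES (?, ?)",
    [("SO-1042", "s3://sales-docs/orders/SO-1042.pdf"),
     ("SO-1043", "s3://sales-docs/orders/SO-1043.pdf")])
con.commit()
```

The database only grows by the size of the links, while the documents themselves stay in S3.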

In addition, once DBcloudbin is installed in our systems, it will not only allow us to access existing data in the cloud; it will also provide us with the tools of the solution working in “non-inverse” mode, allowing us to archive data stored in our application’s database to the cloud, or restore it back, as needed.

In a future post, we will provide a hands-on description of the implementation of this example. Meanwhile, for more details on the solution, visit https://www.dbcloudbin.com/solution

DBcloudbin for dummies (part 1)

In this blog series we’ll try to explain DBcloudbin for our non-technical audience.

We believe it is a great product, and with this blog we want to open it up and make it reasonably understandable for those who are not familiar with data management and database concepts. In this first post, we will deal with the basics, structured in 5 topics. In the next one, we will introduce the basic problem of many applications that DBcloudbin comes to solve. Let’s start…

1.- How a typical enterprise application works.

For those used to working with applications running on their phone or laptop, enterprise applications are substantially different in most cases. They usually run on a centralized infrastructure, where an application server executes the application intelligence and users connect to it remotely, in many cases through a web interface (that is, by accessing a ‘well-known’ web page). You enter the URL, log in with your credentials and start using it for your daily duties. Behind the scenes, this application infrastructure can be a single computer with everything installed on it, or dozens of servers with different roles, plus external storage infrastructure and communications networks, providing a complex IT service.

2.- How data is stored for an application to work properly.

Any non-trivial application deals with data: it is essentially the result of taking some data as input, executing a defined process on it, and generating some output data. This data has to be stored somewhere. There are multiple options, but the most common situation is having a database, which is in fact a subsidiary application that provides this capability. The database is able to provide some very interesting services on top of safely storing data, such as providing a way to structure and query that information, validating its formal representation (e.g. making sure that a customer record is only stored if it contains a “First Name” and a “Last Name” attribute), and handling simultaneous access by several applications to the same data in an unambiguous way, among many others.
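
To make the validation idea concrete, here is a minimal sketch using Python’s built-in sqlite3 module; the table and column names are illustrative:

```python
# Minimal sketch of a database validating the formal representation of
# data: NOT NULL constraints reject a customer record that is missing a
# name. Table and column names are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE customer (
        first_name TEXT NOT NULL,
        last_name  TEXT NOT NULL
    )""")

con.execute("INSERT INTO customer VALUES ('Jane', 'Doe')")  # accepted

try:
    con.execute("INSERT INTO customer (first_name) VALUES ('John')")
except sqlite3.IntegrityError as e:
    print("Rejected by the database:", e)  # last_name may not be NULL
```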

The most widely used database type is what we call a relational database, where data is structured in tables, much as in an Excel spreadsheet. Databases provide a way for applications to read and write data; since those operations need high flexibility, this is done through a formal language; this way, an application can ‘talk’ to the database and tell it exactly what it needs.

The most widely used such language is called SQL. In computing, a language normally has stricter rules than human language, both syntactically and semantically. If an application requests the “contracts closed last Monday”, it should have instructed the database beforehand that there is something called a “contract”, that contracts have an attribute that can hold the value “closed”, and another attribute that indicates when the operation was performed; otherwise, the application receives an error.
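
For instance, that “contracts closed last Monday” request could look like the following minimal sketch, where the schema is again illustrative:

```python
# Minimal sketch of the "contracts closed last Monday" query. The schema
# is illustrative; the database can only answer because we described
# beforehand what a "contract" looks like.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE contract (
        contract_id INTEGER PRIMARY KEY,
        status      TEXT NOT NULL,  -- e.g. 'open', 'closed'
        closed_on   TEXT            -- ISO date, e.g. '2024-05-13'
    )""")
con.execute("INSERT INTO contract VALUES (1, 'closed', '2024-05-13')")

rows = con.execute(
    "SELECT contract_id FROM contract "
    "WHERE status = 'closed' AND closed_on = ?",
    ("2024-05-13",)).fetchall()
print(rows)  # [(1,)]
```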

3.- How data is protected in a database.

Any database software (remember, a database is just another application) will physically store the information in media based on its own proprietary criteria. Nobody other than the database software needs to know how. However, we need a way to store a copy of that data somewhere else, to be able to restore the database content in the case of a disaster. This requires integrating the database software with another piece of software (provided by the database manufacturer or not) able to extract a copy of that data and protect it on independent media (this is called backup software). When the database is large, this process takes time. Restoring the database in the case of problems will also take significant time, with the additional trouble that, during that time, our application will not be running (so, no service provided to our users).
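
As a toy illustration of the extract-a-copy idea, Python’s sqlite3 module exposes an online backup API that copies a live database into an independent file; real enterprise backup software does the same job at a much larger scale, and the file names here are illustrative:

```python
# Toy illustration of extracting a copy of a database for protection.
# sqlite3's online backup API writes a consistent copy into a separate
# file, even while the source database is in use. File names are
# illustrative.
import sqlite3

src = sqlite3.connect("application.db")
dst = sqlite3.connect("application.backup.db")

with dst:
    src.backup(dst)  # consistent copy of the whole database

dst.close()
src.close()
```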

4.- What are the types of storage.

Keeping it simple, we have three basic types of storage from an access perspective (from a physical perspective there are other classifications, but they are less relevant for our purpose):

  • Block storage: This is the oldest and most common type. In this case, a repository of data is somehow assigned to a computer, and the operating system of this computer creates the physical structure to handle raw data in this repository (or ‘disk’). This is what happens with the disk installed in our laptop for running it and storing our documents. In large enterprise environments, those disks (or many of them) are not physically inserted in a computer, but assigned to it from a centralized pool of storage accessed through a special storage network. This is the most common way of serving storage to the server where the database software runs; the database consumes this type of storage to save the data that its client applications ask it to persist.
  • Networked file storage: In this case, the storage tier provides an additional level of service and exposes a filesystem, i.e. a higher-level way of structuring our files in folders, which can be accessed from several servers. It is commonly used to provide a file storage service where users can save their documents and other content. It uses protocols with similar primitives but different implementations on Windows and Linux systems (SMB and NFS, respectively), which creates some interoperability challenges.
  • Object storage: This is the newest type of storage, where we can store content (objects) in a common namespace in which objects are identified by their names. It is based on a significantly different approach, in the sense that it is not the operating system of the server that deals with the storage, but the applications running on top of the operating system, talking directly to the storage service using a defined protocol. The most common one, which is becoming a de-facto standard, is the S3 protocol, implemented by the Simple Storage Service (S3) provided by Amazon Web Services. Nowadays, many different storage manufacturers provide a similar service, most of them implementing the same protocol for interoperability (see the sketch after this list).
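
To make the access-model difference tangible, here is a minimal sketch contrasting file access (mediated by the operating system) with object access over the S3 protocol via the boto3 library; the path, bucket and key names are illustrative:

```python
# Minimal sketch contrasting file access (the OS mediates) with object
# access (the application talks to the storage service directly over
# the S3 protocol). Path, bucket and key names are illustrative.
import boto3

# Networked file storage: the OS exposes it as a regular filesystem path.
with open("/mnt/shared/reports/q1.pdf", "rb") as f:
    data_from_file = f.read()

# Object storage: the application addresses objects by name in a bucket.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="reports", Key="q1.pdf")
data_from_object = obj["Body"].read()
```
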
5.- How all this relates to the Cloud.

Cloud computing is a way to consume computing, storage and networking services without having to deal with the actual infrastructure. In general, a Cloud provider can deliver IT services at many levels of abstraction (pure infrastructure, application platforms, end-user ready-to-consume software, …). Moving to the Cloud is, in general, a complex task for non-trivial workloads when we need to provide business continuity for our IT services, as is the normal scenario in enterprises. Large databases that support critical LoB (line-of-business) operations are probably the toughest scenario, since large datasets come with specific challenges, starting with the fact that transferring large amounts of data requires a significant amount of time. Information is also a key asset for any company, so it raises additional concerns such as security, privacy and protection.

If you want to learn more, there are more articles on this topic. Check HERE

DBcloudbin for dummies (part 2)

In our previous post of “DBcloudbin for dummies”, we described the basics of how an enterprise application infrastructure is architected. Now, it is time to go slightly deeper into how DBcloudbin helps to solve some of the challenges.

“Let’s talk about costs.” IT infrastructure is expensive, regardless of whether it is deployed on-premises or ‘rented’ as a Cloud service. Database infrastructure sits at the top of the expenditure list, due to its special criticality, its performance requirements and a certain lack of competition (there is a tendency to package all the database software and hardware in an appliance provided by the same manufacturer, in a market with very few alternatives; Oracle controls the vast majority of the high and mid enterprise market, and moving from one database technology to another is very challenging). When we are talking about critical infrastructure, it is required to be at least replicated, so let’s multiply everything by two (or more, since replicating data adds further costs).

Designing and developing applications is also expensive: it requires specialized human resources with significant wages, and those scarce resources are normally invested in adding new functionality to the company’s LoB applications to improve the business. Those applications are architected to store data in databases and use the common database interface language (SQL) to access it. If this data is simple data (a string with your name and personal details, for instance), it does not consume that much space. But if that data includes a high-resolution picture of yourself, it may consume as much space as thousands of ‘simple data’ records. Why store it in the database? Well, it is simpler and easier from a software engineering perspective, and there may be many technical reasons for it. So, many applications are designed that way, storing what we call BLOBs (Binary Large Objects) in the database.
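
To give a concrete picture of what ‘a BLOB in the database’ means, here is a minimal sketch with sqlite3; the table, columns and content are illustrative:

```python
# Minimal sketch of storing a BLOB (Binary Large Object) next to simple
# data in the same table. Table, columns and content are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE customer (
        name  TEXT,  -- 'simple' data: a few bytes
        photo BLOB   -- binary data: possibly megabytes per row
    )""")

photo_bytes = b"\x89PNG..."  # in real life, the bytes of an image file
con.execute("INSERT INTO customer VALUES (?, ?)", ("Jane Doe", photo_bytes))
con.commit()
```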

As the application keeps collecting data in its normal use, the database tends to grow. After some time, the database size can reach several terabytes. If we analyze the data, most of this size is occupied by those BLOBs (and in many cases the data is historical and not frequently accessed, but we need to keep it and protect it). This generates high expenditures in infrastructure, backup and maintenance. However, fixing the problem is not that easy. We cannot just delete that data, of course. We may try to move it somewhere else, but we would break our application logic: the application’s SQL sentences would no longer be able to access that data, and we would need to re-engineer the application. Can we? In many cases we cannot (cost, resources, skills, risk, …).

This is where DBcloudbin comes in to solve it! We automatically inspect the application data model in the database and generate what we call a ‘transparency layer’ (a data virtualization layer). This is a thin layer in the database with the interesting property that it is able to resolve the same SQL queries as the original data model of our application, with the same semantics. So, if we reconfigure our application to use the transparency layer (a very basic application setting change), it will work the same way as before, as sketched below.
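
The following is a conceptual sketch of the idea, not DBcloudbin’s actual implementation: a view exposes the same columns the application already queries, while a registered function transparently resolves content that has been moved to external storage. All names, and the in-memory dictionary standing in for object storage, are illustrative.

```python
# Conceptual sketch of a 'transparency layer': a view answers the same
# SQL as the original table, while a function resolves BLOBs that have
# been moved to external storage. This illustrates the idea only; it is
# not DBcloudbin's actual implementation.
import sqlite3

EXTERNAL_STORE = {"s3://docs/SO-1042.pdf": b"%PDF-1.4 ..."}  # stand-in for object storage

def fetch_external(link):
    # In a real system this would read from S3 or a similar service.
    return EXTERNAL_STORE[link]

con = sqlite3.connect(":memory:")
con.create_function("fetch_external", 1, fetch_external)

# After externalization, the table holds only a small link per document.
con.execute("CREATE TABLE order_doc_raw (order_id TEXT, doc_link TEXT)")
con.execute("INSERT INTO order_doc_raw VALUES ('SO-1042', 's3://docs/SO-1042.pdf')")

# Transparency layer: same column names the application already queries.
con.execute("""
    CREATE VIEW order_doc AS
    SELECT order_id, fetch_external(doc_link) AS document
    FROM order_doc_raw""")

# The application's original query works unchanged.
row = con.execute(
    "SELECT document FROM order_doc WHERE order_id = 'SO-1042'").fetchone()
print(row[0][:8])  # b'%PDF-1.4'
```

Reconfiguring the application then amounts to pointing it at the view instead of the original table.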

The important difference is that, after this, we can freely move that BLOB data to an external object storage (either Cloud or on-premises) and the application will still be able to access it exactly as before, using the same SQL query. So, the data is out, no longer bloating our database, but from a business user perspective there is no change at all: same application, same access, same operations. In addition, we handle the extracted data in a way that allows it to be replicated, versioned and protected with no need to run recurring backup jobs on the externalized data. So, as our database shrinks, our backups will shrink as well. Smaller backups mean much lower cost (in backup infrastructure) and less time to execute them. Even more important, if our database crashes, restoring is also faster.

These are the basics of the solution. You are probably now ready to go to the Solution overview for a more detailed description, to request a demo, or to try the solution yourself for free.