Tuesday, February 26, 2013

Data Modeling for NoSQL Data Stores


  A data model is the model through which we perceive and manipulate our data. For people using a database, the data model describes how we interact with the data in the database. This is distinct from a storage model, which describes how the database stores and manipulates the data internally. In an ideal world, we should be ignorant of the storage model, but in practice we need at least some inkling of it—primarily to achieve decent performance.

The relational model takes the information that we want to store and divides it into tuples (rows). A tuple is a limited data structure: It captures a set of values, so you cannot nest one tuple within another to get nested records, nor can you put a list of values or tuples within another. This simplicity underpins the relational model—it allows us to think of all operations as operating on and returning tuples.
          Aggregate orientation takes a different approach. It recognizes that often, you want to operate on data in units that have a more complex structure than a set of tuples. It can be handy to think in terms of a complex record that allows lists and other record structures to be nested inside it. An aggregate is a collection of related objects that we wish to treat as a unit. In particular, it is a unit for data manipulation and management of consistency. Typically, we like to update aggregates with atomic operations and communicate with our data storage in terms of aggregates. This definition matches really well with how key-value, document, and column-family databases work. Dealing in aggregates makes it much easier for these databases to handle operating on a cluster, since the aggregate makes a natural unit for replication and sharding. Aggregates are also often easier for application programmers to work with, since they often manipulate data through aggregate structures.
Example: Let’s assume we have to build an e-commerce website. We can use this example to model the data using a relational data store as well as NoSQL data stores, and talk about their pros and cons.
As good relational soldiers, we keep everything properly normalized, so that no data is repeated in multiple tables, and we maintain referential integrity. We will need a bunch of tables related via foreign keys: Customer, Order, Product, BillingAddress, OrderItem, Address, OrderPayment, and so on.
Now let’s see how this model might look when we think in more aggregate-oriented terms.



// in customers
{ "id": 1, "name": "Martin", "billingAddress": [{"city": "Chicago"}] }

// in orders
{
  "id": 99,
  "customerId": 1,
  "orderItems": [
    {
      "productId": 27,
      "price": 32.45,
      "productName": "NoSQL Distilled"
    }
  ],
  "shippingAddress": [{"city": "Chicago"}],
  "orderPayment": [
    {
      "ccinfo": "1000-1000-1000-1000",
      "txnId": "abelif879rft",
      "billingAddress": {"city": "Chicago"}
    }
  ]
}
// in customers, with orders embedded
{
  "customer": {
    "id": 1,
    "name": "Martin",
    "billingAddress": [{"city": "Chicago"}],
    "orders": [
      {
        "id": 99,
        "customerId": 1,
        "orderItems": [
          {
            "productId": 27,
            "price": 32.45,
            "productName": "Data Modelling"
          }
        ],
        "shippingAddress": [{"city": "Chicago"}],
        "orderPayment": [
          {
            "ccinfo": "1000-1000-1000-1000",
            "txnId": "abelif879rft",
            "billingAddress": {"city": "Chicago"}
          }
        ]
      }
    ]
  }
}
In this model, we have two main aggregates: customer and order. We use composition to show how data fits into the aggregation structure. The customer contains a list of billing addresses; the order contains a list of order items, a shipping address, and payments. The payment itself contains a billing address for that payment.
A single logical address record appears three times in the example data, but instead of using IDs it is treated as a value and copied each time. This fits the domain: we would not want the shipping address, nor the payment’s billing address, to change after the fact. In a relational database, we would ensure that the address rows aren’t updated for this case, making a new row instead. With aggregates, we simply copy the whole address structure into each aggregate that needs it.
The link between the customer and the order isn’t within either aggregate; it’s a relationship between aggregates. Similarly, the link from an order item would cross into a separate aggregate structure for products. The product name appears as part of the order item here; this kind of denormalization is similar to the tradeoffs made with relational databases, but is more common with aggregates because we want to minimize the number of aggregates we access during a data interaction.
The important thing to notice here isn’t the particular way we’ve drawn the aggregate boundary so much as the fact that we have to think about accessing that data—and make that part of our thinking when developing the application data model. Indeed we could draw our aggregate boundaries differently, putting all the orders for a customer into the customer aggregate.

With that boundary, a Customer and all of its Orders form a single aggregate, as shown in the second JSON example above.
Like most things in modeling, there’s no universal answer for how to draw your aggregate boundaries. It depends entirely on how you tend to manipulate your data. If you tend to access a customer together with all of that customer’s orders at once, then you would prefer a single aggregate. However, if you tend to focus on accessing a single order at a time, then you should prefer having separate aggregates for each order. Naturally, this is very context-specific; some applications will prefer one or the other.

The clinching reason for aggregate orientation is that it helps greatly with running on a cluster, which is the killer argument for the rise of NoSQL. If we’re running on a cluster, we need to minimize how many nodes we need to query when we are gathering data. By explicitly including aggregates, we give the database important information about which bits of data will be manipulated together, and thus should live on the same node.
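As a toy illustration of why aggregates make natural units for distribution, here is a sketch that routes each aggregate to a cluster node by hashing its key (the `node_for_aggregate` name and `customer:1` key format are inventions for this post); everything inside the aggregate then lives together on one node:

```python
import hashlib

def node_for_aggregate(key, num_nodes):
    """Route an aggregate to a node by hashing its key. All the data
    inside the aggregate (orders, addresses, payments) lands on the
    same node, so one read serves the whole unit."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_nodes

# The whole customer aggregate lives wherever its key hashes to:
node = node_for_aggregate("customer:1", 4)
```

Real stores typically use consistent hashing so that adding a node doesn’t reshuffle every key, but the point stands: one key, one node, one round trip per aggregate.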

Key-Value and Document Stores - Data Models
Key-value and document databases are strongly aggregate-oriented. The two models differ in that in a key-value database, the aggregate is opaque to the database—just some big blob of mostly meaningless bits. In contrast, a document database is able to see a structure in the aggregate. The advantage of opacity is that we can store whatever we like in the aggregate. The database may impose some general size limit, but other than that we have complete freedom. A document database imposes limits on what we can place in it, defining allowable structures and types. In return, however, we get more flexibility in access.
With a key-value store, we can only access an aggregate by looking up its key. With a document database, we can submit queries based on the fields in the aggregate, we can retrieve part of the aggregate rather than the whole thing, and the database can create indexes based on the contents of the aggregate.
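The access-pattern difference can be sketched in a few lines of Python. The `find` helper below is a stand-in for what a document database does for us server-side (field matching and partial retrieval), while the key-value store can only hand back the whole blob:

```python
import json

# Key-value style: the store sees only an opaque blob under a key.
kv_store = {}
kv_store["customer:1"] = json.dumps(
    {"id": 1, "name": "Martin", "billingAddress": [{"city": "Chicago"}]})
# The only access path is the key; any filtering happens client-side:
customer = json.loads(kv_store["customer:1"])

# Document style: the store sees structure, so it can match on fields
# inside the aggregate and project only part of each document:
doc_store = [
    {"id": 1, "name": "Martin", "billingAddress": [{"city": "Chicago"}]},
    {"id": 2, "name": "Pramod", "billingAddress": [{"city": "Bangalore"}]},
]

def find(collection, field, value, project=None):
    """Query by a field inside the aggregate, optionally returning
    only the requested parts of each matching document."""
    matches = [d for d in collection if d.get(field) == value]
    if project:
        return [{k: d[k] for k in project if k in d} for d in matches]
    return matches

names = find(doc_store, "id", 2, project=["name"])  # partial retrieval
```

The names `kv_store`, `doc_store`, and `find` are made up for illustration; a real document database would also maintain indexes over these fields rather than scanning.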
  
When modeling data aggregates, we need to consider how the data is going to be read, as well as the side effects on data related to those aggregates.

Let’s start with the model where all of the customer’s data is embedded in a single value in a key-value store.



In this scenario, the application can read the customer’s information and all the related data by using the key. If the requirement is to read the orders or the products sold in each order, the whole object has to be read and then parsed on the client side to build the results. When references are needed, we could switch to a document store and query inside the documents, or change the key-value data to split the value object into separate Customer and Order objects that maintain references to each other.

With the references we can now find the Orders independently of the Customer, and with the orderId references in the Customer we can find all Orders for that Customer. Modeling aggregates this way optimizes for reads, but we have to push a new orderId reference into the Customer every time an Order is placed.
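A minimal sketch of that bookkeeping, using a plain Python dict as the key-value store (the key formats `customer:1` and `order:99` and the `place_order` helper are inventions for illustration):

```python
store = {}

def place_order(store, customer_key, order):
    """Store the Order as its own aggregate, then push its id into the
    Customer's list of order references -- the extra write the text
    mentions: every new Order also updates the Customer."""
    order_key = "order:%d" % order["id"]
    store[order_key] = order
    store[customer_key].setdefault("orderIds", []).append(order["id"])

store["customer:1"] = {"id": 1, "name": "Martin"}
place_order(store, "customer:1", {"id": 99, "customerId": 1, "price": 32.45})

# Orders are now readable independently of the Customer...
order = store["order:99"]
# ...and the Customer's references lead back to all of its Orders:
orders = [store["order:%d" % oid] for oid in store["customer:1"]["orderIds"]]
```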

Aggregates can also be used to obtain analytics; for example, an aggregate update may fill in information on which Orders have a given Product in them. This denormalization allows fast access to the data we are interested in and is the basis for real-time BI or real-time analytics: enterprises no longer have to rely on end-of-day batch runs to populate data warehouse tables; they can fill in this kind of data, for multiple types of requirements, at the moment the customer places the order.
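A sketch of that kind of aggregate update, maintained at order-placement time rather than by a nightly batch (the `orders_by_product` index and `record_order` helper are hypothetical names):

```python
from collections import defaultdict

# Denormalized analytics view: productId -> set of order ids.
orders_by_product = defaultdict(set)

def record_order(order):
    """At order-placement time, update the analytics aggregate: for
    each product in the order, remember which orders contained it."""
    for item in order["orderItems"]:
        orders_by_product[item["productId"]].add(order["id"])

record_order({"id": 99, "orderItems": [{"productId": 27}]})
record_order({"id": 100, "orderItems": [{"productId": 27}, {"productId": 31}]})

# Instant answer to "which Orders contain Product 27?", no batch run:
orders_with_27 = orders_by_product[27]
```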



References:

1) http://www.amazon.com/Seven-Databases-Weeks-Modern-Movement/dp/1934356921/ref=sr_1_1?ie=UTF8&qid=1361896201&sr=8-1&keywords=seven+databases+in+seven+weeks
2) http://www.amazon.com/NoSQL-Distilled-Emerging-Polyglot-Persistence/dp/0321826620/ref=sr_1_1?s=books&ie=UTF8&qid=1361896233&sr=1-1&keywords=nosql+distilled
3) http://highlyscalable.wordpress.com/2012/03/01/nosql-data-modeling-techniques/

Friday, January 4, 2013

Back to the Future: Adaptive Load Management for Internet Services




First blog post in the series "Back to the Future". Here is a summary of one interesting design approach I came across for dynamically managing the load of Internet services (I too don't like the term Web Services :-)).

Overload management is a critical design goal for Internet-based systems and services. Most service designs take overload into account only as a capacity-planning problem, rather than designing the service to behave gracefully under extreme load. Feedback-driven control, rather than static resource limits, should be the basis for detecting and controlling overload. Adaptive admission controllers can be used to meet performance targets such as the 90th-percentile response time.

Most operating systems adhere to the principle of resource virtualization to simplify application development. Unfortunately, this approach makes it difficult for applications to be aware of, or adapt to, real resource limitations. The programming models used for developing services generally fail to express resource constraints in a meaningful way. For example, Java RMI calls can throw a generic exception due to any type of failure, but there is typically little an application can do when this occurs: should it fail, retry the operation, or invoke an alternate interface? Consider a service consisting of several independent components communicating through a common protocol such as SOAP/HTTP. When one component becomes a resource bottleneck, the only overload management technique generally used is for the service to refuse additional TCP connections. While this effectively shields the service from load, other participants face very long connection delays (e.g., due to TCP's exponential SYN-retransmit back-off behavior), causing the bottleneck to propagate through the entire distributed application.

An interesting framework for building Internet services that are inherently robust to load combines two simple techniques, dynamic resource management and fine-grained admission control, on top of a software architecture called the staged event-driven architecture (SEDA), which decomposes an Internet service into a network of event-driven stages connected by explicit event queues. Load management in SEDA is accomplished by introducing a feedback loop that observes the behavior and performance of each stage and applies resource control and admission control to effectively manage overload.
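A drastically simplified sketch of the idea, with a stage as an explicit event queue plus a handler, and a fixed-capacity admission controller standing in for SEDA's adaptive ones (all names here are made up; the real SEDA is a Java framework with per-stage thread pools):

```python
from collections import deque

class Stage:
    """Minimal SEDA-style stage: an explicit event queue guarded by an
    admission controller. A failed enqueue() is the 'overload signal'
    that the upstream stage must react to."""
    def __init__(self, name, handler, capacity):
        self.name = name
        self.handler = handler
        self.capacity = capacity        # admission-control parameter
        self.queue = deque()

    def enqueue(self, event):
        if len(self.queue) >= self.capacity:
            return False                # rejected: this stage is the bottleneck
        self.queue.append(event)
        return True

    def run_once(self):
        """Drain the queue (a real stage runs this in a thread pool)."""
        while self.queue:
            self.handler(self.queue.popleft())

results = []
parse = Stage("parse", results.append, capacity=2)
accepted = [parse.enqueue(e) for e in ("a", "b", "c")]  # third is rejected
parse.run_once()
```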

Dynamic Overload Management
The classic approach to resource management in Internet services is static resource containment, in which a priori resource limits are imposed on an application or service to avoid overcommitment. Various kinds of resource limits are used: bounding the number of processes or threads within a server is a common technique, as is limiting the number of client socket connections to the service. Both approaches share a fundamental problem: it is generally not possible to know what the ideal resource limits should be. Setting the limit too low underutilizes resources, while setting it too high can lead to oversaturation and serious performance degradation under overload. Refusing to accept additional TCP connections under heavy load is inadvisable, as it causes clients to retransmit the initial SYN packet with exponential backoff, leading to very long response times. This approach is also too coarse-grained, in the sense that even a single client can consume all of the resources in the system; imposing process or connection limits does not solve the more general resource management issue.

Another style of resource containment is typified by a variety of real-time and multimedia systems. In this approach, resource limits are expressed as reservations or shares, as in "process P gets X percent of the CPU." In this model, the operating system must carefully account for and control the resource usage of each process. Applications are given a set of resource guarantees, and the system prevents guarantees from being exceeded through scheduling or forced termination. Reservation- and share-based resource limits work well for real-time and multimedia applications, which have relatively static resource demands that can be expressed as straightforward, fixed limits over coarse-grained entities such as processes. In an Internet service, by contrast, the focus is on individual requests, for which it is permissible (and often desirable) to meet statistical performance targets over a large number of requests, rather than to enforce guarantees for particular requests.

Desirable properties for overload management:

Exposure of the request stream: Event queues make the request stream within the service explicit, allowing the application (and the underlying runtime environment) to observe and control the performance of the system, e.g.,through reordering or filtering of requests.

Focused, application-specific admission control: A stage that consumes many resources can be conditioned to load by throttling the rate at which events are admitted to just that stage, rather than refusing all new requests in a generic fashion.

In SEDA, each stage is subject to dynamic resource control, which attempts to keep each stage within its ideal operating regime by tuning parameters of the stage's operation. For example, one such controller adjusts the number of threads executing within each stage based on observations of the stage's offered load (incoming queue length) and performance (throughput). This approach frees the application programmer from manually setting "knobs" that can have a serious impact on performance.
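Such a thread-pool controller can be sketched as a simple heuristic (the thresholds and the policy below are illustrative inventions, not SEDA's exact algorithm):

```python
def tune_threads(threads, queue_len, throughput, last_throughput,
                 backlog_threshold=10, max_threads=32):
    """Heuristic per-stage thread-pool controller: add a thread while a
    backlog is building and adding threads is not hurting throughput;
    shrink back once the backlog clears."""
    if queue_len > backlog_threshold and throughput >= last_throughput:
        return min(threads + 1, max_threads)   # grow toward the load
    if queue_len <= backlog_threshold and threads > 1:
        return threads - 1                     # reclaim idle threads
    return threads
```

The operator no longer picks a thread count per stage; the controller converges on one from observed queue length and throughput.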

Each stage has an associated admission controller that guards access to the event queue for that stage.

When the admission controller rejects a request, the corresponding enqueue operation fails, indicating to the originating stage that there is a bottleneck in the system. Applications are therefore responsible for reacting to these "overload signals" in some way. The simplest response is to block until the downstream stage can accept the request, which leads to backpressure within the graph of stages. Another response is to drop the request, possibly sending an error message to the client or using the HTTP redirect mechanism to bounce the request to another server.

The key is that the architecture is explicit about signaling overload conditions and allows the application to participate in load management decisions.

Response time controller:

The controller consists of several components. A monitor measures the response time of each request passing through a stage. The measured 90th-percentile response time over some interval is passed to the controller, which adjusts the admission-control parameters based on the administrator-supplied response-time target. The basic overload control algorithm uses additive-increase/multiplicative-decrease tuning of the token bucket rate, driven by the current observation of the 90th-percentile response time. Because measurements are taken only as requests are processed, the controller will not run when the token bucket rate is low; the algorithm therefore "times out" and recalculates the 90th-percentile response time after a certain interval. When the 90th-percentile estimate is above a high-water mark (e.g., 10% above the administrator-specified target), the token bucket rate is reduced by a multiplicative factor (e.g., dividing the admission rate by 2). When the estimate is below a low-water mark, the token bucket rate is increased by a small additive factor.
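The algorithm just described maps almost directly to code. In this sketch the 10% high-water mark and the halving on decrease follow the examples in the text, while the low-water mark, additive step, and rate bounds are made-up placeholders:

```python
def adjust_rate(rate, rt_90th, target,
                high_water=1.1, low_water=0.9,
                decrease_factor=2.0, increase_step=2.0,
                min_rate=1.0, max_rate=5000.0):
    """Additive-increase/multiplicative-decrease tuning of the token
    bucket admission rate, driven by the observed 90th-percentile
    response time against the administrator-supplied target."""
    if rt_90th > high_water * target:
        rate /= decrease_factor          # multiplicative decrease
    elif rt_90th < low_water * target:
        rate += increase_step            # small additive increase
    return max(min_rate, min(rate, max_rate))

rate = 100.0
rate = adjust_rate(rate, rt_90th=1.5, target=1.0)   # overloaded: halved
rate = adjust_rate(rate, rt_90th=0.4, target=1.0)   # healthy: nudged up
```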

Rather than rejecting requests, applications may degrade the quality of delivered service in order to admit a larger number of requests for a given response-time target. If service degradation is ineffective (say, because the load is too high to support even the lowest quality setting), the stage can re-enable admission control to cause requests to be rejected.

Likewise, by prioritizing requests from certain users over others, an application can implement various policies related to class-based service-level agreements. A common example is to give better service to requests from "gold" customers (who might pay more for the service, or who have more items in the shopping cart). Another possible controller would reject lower-class requests more aggressively than higher-class requests when a stage is overloaded.
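One way such a class-aware admission controller might look, with made-up class names and drop probabilities chosen purely for illustration:

```python
import random

def admit(request_class, overload_level, rng=random.random):
    """Class-aware admission control: as overload_level rises toward
    1.0, shed 'bronze' traffic aggressively, 'silver' moderately, and
    keep admitting 'gold'. The biases are illustrative placeholders."""
    drop_bias = {"gold": 0.0, "silver": 0.5, "bronze": 1.0}
    drop_probability = overload_level * drop_bias[request_class]
    return rng() >= drop_probability

# At full overload, gold requests still get through while bronze is shed:
admit("gold", overload_level=1.0)
admit("bronze", overload_level=1.0)
```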

Design and tuning of efficient control mechanisms: Introducing feedback as a mechanism for overload control raises a number of questions. For example, how should controller parameters be tuned? The work relies mainly on a heuristic approach to controller design, though more formal, control-theoretic techniques are possible. Control theory provides a valuable framework for designing and evaluating feedback-driven systems, though many of the traditional techniques rely on good mathematical models of system behavior, which are often unavailable for complex software systems. The interaction between multiple levels of control in the system also deserves study, for example the interplay between queue admission control and the tuning of per-stage thread pool sizes.

Useful Links
SEDA: An Architecture for Well-Conditioned, Scalable Internet Services http://www.eecs.harvard.edu/~mdw/talks/seda-sosp01-talk.pdf
http://www.eecs.harvard.edu/~mdw/proj/seda/
http://en.wikipedia.org/wiki/Staged_event-driven_architecture