Scaling up a server

Wednesday, July 24, 2013

We have two scaling options: vertical and horizontal scaling. Vertical scaling relates to adding more CPUs to a machine. To better utilize the server hardware we can add more WebLogic instances to the machine, which could lead to increased application throughput. To determine if this is indeed the case we need to benchmark. Benchmarking for scalability is about measuring resource utilization. Good scalability means that service levels can be maintained while the workload is increased. If an application does not scale well, it is not fully utilizing the hardware and, consequently, throughput will degrade. Ideally, a linear increase in load leads to at most a linear degradation in service levels and performance. Linear scalability can be approached when so-called share-nothing clusters are used. The nodes provide the same functionality and know nothing about the other nodes in the cluster (no HTTP session replication). In this case, the computing ability of the cluster increases almost linearly as more nodes are added to the cluster, provided that the back-end information systems, such as a database, are powerful enough.
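
As an illustration, the sketch below compares measured throughput against the ideal linear scale-up; the throughput figures are made-up example values, not benchmark results from an actual WebLogic setup.

    // Sketch of checking benchmark results for near-linear scalability.
    // The throughput figures are hypothetical example values.
    public class ScalabilityCheck {

        public static void main(String[] args) {
            // requests/second measured with 1, 2, 3 and 4 WebLogic instances (made up)
            double[] throughput = {950.0, 1850.0, 2700.0, 3400.0};
            double baseline = throughput[0];

            for (int i = 0; i < throughput.length; i++) {
                int instances = i + 1;
                // scaling efficiency = measured throughput / ideal linear throughput
                double efficiency = throughput[i] / (baseline * instances);
                System.out.printf("%d instance(s): %.0f req/s, efficiency %.2f%n",
                        instances, throughput[i], efficiency);
            }
        }
    }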

Applications that ‘share nothing’ usually share state through the database. The application tier can scale until the database becomes a bottleneck. In general, relying on a single shared resource will eventually cause contention for that resource and thus limit scalability. Caching is a good remedy: when we cache data in the application tier we avoid calls to the database (and also avoid converting relational data to objects). When using a cluster of server instances we need to maintain multiple caches; this is not a problem for read-only data, but for read/write data it is. Caching solutions, such as Coherence, provide different kinds of caches, i.e., replicated and partitioned. A replicated cache does not scale well when cache writes are involved, as the data needs to be replicated across all the nodes in the grid. A partitioned cache, on the other hand, scales very well for cache writes, as data ownership is spread throughout the cluster (the system automatically rebalances the data when the number of nodes in the grid changes – we do not need to decide how to partition the data, it comes out of the box). Another plus is that a cache access requires at most one network trip, which helps maintain linear scalability. Read access can be optimized further when data can be obtained locally (sticky access); in that case a hybrid solution such as the near cache can be applied. Note that when Coherence*Web is used to cache HTTP session objects, a near cache data structure is used. More information on Coherence*Web can be found in the post Setting-up a WebLogic Cluster that uses Coherence.
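
As a small illustration, the sketch below obtains a Coherence cache and reads/writes an entry. The cache name "products" is hypothetical; whether the cache is backed by a replicated, partitioned (distributed) or near scheme is decided in the cache configuration file, not in this code.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    // Sketch of accessing a Coherence cache from application code.
    public class CoherenceAccessSketch {

        public static void main(String[] args) {
            // the name "products" is made up; the scheme it maps to comes from the cache config
            NamedCache cache = CacheFactory.getCache("products");

            // For a partitioned scheme, Coherence routes the call to the node that
            // owns the key, so a get or put costs at most one network trip.
            cache.put(Long.valueOf(42), "some product");
            System.out.println(cache.get(Long.valueOf(42)));

            CacheFactory.shutdown();
        }
    }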

Database caching patterns include cache-aside, read-through/write-through, and write-behind. The cache-aside pattern is discussed in the post Hibernate and Coherence. The read-through/write-through and write-behind patterns are discussed in the post Coherence and Hibernate: Decoupling the Database.
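
The cache-aside pattern itself can be sketched in a few lines; the example below stands in for Coherence with a plain map and fakes the database call, just to show the check-cache, load-on-miss, then-populate flow.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Cache-aside sketch: check the cache first, go to the database only on a miss,
    // then populate the cache. The "database" is faked with a method stub; in the
    // posts referenced above the cache is Coherence and the data access layer is Hibernate.
    public class CacheAsideSketch {

        private final Map<Long, String> cache = new ConcurrentHashMap<Long, String>();

        public String findProduct(long id) {
            String product = cache.get(id);      // 1. try the cache
            if (product == null) {
                product = loadFromDatabase(id);  // 2. cache miss: hit the database
                cache.put(id, product);          // 3. populate the cache for next time
            }
            return product;
        }

        // Stand-in for a real JDBC/Hibernate lookup.
        private String loadFromDatabase(long id) {
            return "product-" + id;
        }

        public static void main(String[] args) {
            CacheAsideSketch sketch = new CacheAsideSketch();
            System.out.println(sketch.findProduct(42L)); // miss: loads from the "database"
            System.out.println(sketch.findProduct(42L)); // hit: served from the cache
        }
    }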

To scale the processing, data grids allow for targeted execution of processing in the nodes, i.e., the map/reduce pattern (used, for example, in Google’s Bigtable or Apache’s Hadoop). The core of the map/reduce pattern consists of the following steps:

    Map step – a master node takes some input (problem), partitions it into smaller sub-problems and distributes those to worker nodes. Note that a worker node in turn can do the same.
    Reduce step – the master takes all the answers and combines them to get the output.

It is a sort of divide-and-conquer algorithm, with the key difference that map/reduce handles the data as key-value pairs.
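
The word-count sketch below illustrates the two steps in plain Java. Everything runs in a single JVM here; in a data grid the map work would be executed on the nodes that own the data and only the partial results would travel back to be reduced.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Word-count sketch of the map and reduce steps described above.
    public class MapReduceSketch {

        // Map step: turn one chunk of the input into key-value pairs (word -> count).
        static Map<String, Integer> map(String chunk) {
            Map<String, Integer> partial = new HashMap<String, Integer>();
            for (String word : chunk.split("\\s+")) {
                Integer count = partial.get(word);
                partial.put(word, count == null ? 1 : count + 1);
            }
            return partial;
        }

        // Reduce step: combine the partial answers into the final result.
        static Map<String, Integer> reduce(List<Map<String, Integer>> partials) {
            Map<String, Integer> result = new HashMap<String, Integer>();
            for (Map<String, Integer> partial : partials) {
                for (Map.Entry<String, Integer> entry : partial.entrySet()) {
                    Integer count = result.get(entry.getKey());
                    result.put(entry.getKey(),
                            count == null ? entry.getValue() : count + entry.getValue());
                }
            }
            return result;
        }

        public static void main(String[] args) {
            // The master partitions the problem into sub-problems (two chunks here)
            // and would normally hand them to worker nodes.
            List<String> chunks = Arrays.asList("to be or not to be", "to scale or not to scale");
            List<Map<String, Integer>> partials = new ArrayList<Map<String, Integer>>();
            for (String chunk : chunks) {
                partials.add(map(chunk));
            }
            System.out.println(reduce(partials));
        }
    }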

Horizontal scaling relates to adding more machines to the environment, which gives a failover capability that we cannot get with vertical scaling. A good approach is to combine both scaling techniques to obtain better CPU utilization and failover capability. The post Setting-up a WebLogic Cluster that spans Multiple Machines shows, as the title suggests, how to set up a cluster that spans multiple machines.
