Abstract

Implementation of DHT on Load Rebalancing In Cloud Computing

G.Naveen, J.Praveen Chander

Distributed file systems are key building blocks for cloud computing applications based on the MapReduce programming model. In such file systems, nodes simultaneously serve computing and storage functions, and files can be dynamically created, deleted, and appended. This results in load imbalance in a distributed file system: file chunks are not distributed as uniformly as possible among the nodes. Existing systems use a round-robin algorithm, which balances server load only to some extent, since all servers are expected to respond within the same time window to complete a task. If any one server delays its response to a given task, CPU computing resources are wasted; such a server can become a bottleneck and a single point of failure. Our goals are to optimize the computing resource (the servers), maximize server throughput, avoid overloading or crashing the computing servers, and improve response time. Our system applies a DHT algorithm to optimize computing resources and improve response time. In our model, the total bytes are divided among the number of active servers and fed to them accordingly, making effective use of the servers. We also divide each file into a number of chunks for easier processing, which improves response time and simplifies error re-transmission if any data is dropped during transmission. Additionally, we aim to reduce the network traffic (movement cost) caused by rebalancing the loads of nodes as much as possible, so as to maximize the network bandwidth available to normal applications.
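The chunking and DHT-style placement described above could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the chunk size, server names, and the simple sorted-hash ring are all assumptions made for the example.

```python
import hashlib

def chunk_ids(file_name, file_size, chunk_size):
    """Split a file of file_size bytes into fixed-size chunk identifiers."""
    count = (file_size + chunk_size - 1) // chunk_size  # ceiling division
    return [f"{file_name}:chunk{i}" for i in range(count)]

def assign_chunk(chunk_id, servers):
    """Place a chunk on a hash ring of active servers (DHT-style).

    Each chunk goes to the first server whose hash is >= the chunk's
    hash, wrapping around the ring if necessary.
    """
    ring = sorted(servers, key=lambda s: hashlib.sha1(s.encode()).hexdigest())
    key = hashlib.sha1(chunk_id.encode()).hexdigest()
    for server in ring:
        if hashlib.sha1(server.encode()).hexdigest() >= key:
            return server
    return ring[0]  # wrap around to the start of the ring

# Hypothetical cluster: a 10 MB file split into 4 MB chunks across 4 nodes.
servers = ["node-a", "node-b", "node-c", "node-d"]
placement = {c: assign_chunk(c, servers)
             for c in chunk_ids("report.dat", 10_000_000, 4_000_000)}
```

Because placement depends only on hashes of chunk and server identifiers, adding or removing a server moves only the chunks whose ring successor changes, which keeps the movement cost of rebalancing low.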


Indexed In

Academic Keys
ResearchBible
CiteFactor
Cosmos IF
RefSeek
Hamdard University
World Catalogue of Scientific Journals
Scholarsteer
International Innovative Journal Impact Factor (IIJIF)
International Institute of Organised Research (I2OR)
Cosmos
